In most places in the world there's little SIM locking, number portability is mandated, and there are regulations against marking up the price of the phone to artificially make people who break their contract early pay a lot. ...and then there's the USA.
It would be nice if security updates were friendlier and never bundled with feature updates. Having to go through a whole song-and-dance, only for the OTA update to fail for one reason or another: not endearing. Worst of all: they stop coming just when you've become really familiar with your phone, after a few years.
Regardless of whether Lightning is better than USB-C or vice versa, I am more worried about the general principle. The only organization with a track record of building smartphones that I enjoy using is Apple. It's certainly not the EU. So I want Apple to be able to make design decisions as freely as possible, because in practice that seems to result in a good product. I'd make an exception for things that are OBVIOUSLY anticompetitive (refusing to support modern interoperable replacements for SMS might be in this category, but I don't think which charging cable you use is).
Lightning was/is the most blatantly money-grabbing technology ever.
The reason Apple stuck with Lightning is not that they liked the design, or that it was superior (it wasn't). It's that you have to license the connector to use it. Apple made a pretty penny on every single Lightning cable sold, whether it came from Apple or not.
Running such a racket at the expense of consumers is bad. Someone had to step in, because it was becoming increasingly obvious Apple had zero intention of ramping down the money-grubbing.
Samsung only offers the 7 years of support for their flagship series of phones. The majority of phones they sell are cheaper ones that do not get that support.
This argument would only make any sense if you believed that the reason Americans were buying from Apple and Samsung was for their software update policy, which I'm pretty sure you don't.
I think you could argue people are buying them for longevity, and these measures increase their perceived longevity.
I know a lot of iPhone users who will only buy iPhones because of their longevity. They'll keep them for many generations, and they simply don't have faith in Android phone brands to keep up that long. Phones are large purchases, sometimes financed, like a car. So when it comes to buying, some consumers treat them like cars: they weigh reliability and longevity very heavily, and they'll often buy slightly used phones. An iPhone a generation or two old is really still top of the line.
A lot of that is marketing perception but there's also some truth to it.
Because they are allowed to invest in the market they are supposed to regulate.
Why do you think Icahn, at the time, trashed the online ad companies he had a stake in? Why sink that money into numbers 3-5 when you get much more return from consolidating numbers 1 and 2?
Y'all are nerd-sniping the example and missing the point of the person who offered it.
In the elevator example, the poster was giving chatbots the same excuse for mistakes that a person gets.
Imagine if elevators could just make mistakes and hurt people because, well, a human would too, never mind that it's entirely trivial to design elevators with sensors in the right places once, after which they're accident-free! This is the ridiculous world AI apologists must rely on...
I'm playing a bit of both sides here. I do think it's interesting that we so automatically feel the cases are different. I used something old because I think we understand it well, and I do think our instincts are pretty justified in the elevator case. The fact that we can add sensors and get near-100% reliability is a big part of why excusing mistakes isn't very reasonable there. But ML is statistical; it's not the kind of thing you fix by adding one more sensor or one more if statement. Some anti-ML people take that to mean ML is unworkable, but I'd hate to hold off on replacing drivers, for example, just because the errors a robotaxi makes feel like they could, in theory, have been avoided with better training, while we go right on forgiving human drivers for letting their minds wander for a second.
Everything is statistical. The explicitly defined systems are understandable and understood, but can also be brittle[0]; they do make it easier to put probabilities on failure scenarios, but those probabilities are never 0. ML systems are more like real people: they're unpredictable and prone to failures, and fixing any one failure often creates a problem elsewhere - but with enough fixing, you can push the probability of failure down to a number low enough that you no longer care.
Compare: probabilistic methods of primality checking (which is where I first understood this idea). Theoretically, they can give you the wrong result sometimes; in practice, they're constructed in such a way that you can push the probability of error to arbitrarily low levels.
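To make that concrete, here's a rough Miller-Rabin sketch in Python (my own illustration, with an arbitrary default round count). A composite number survives any single round with probability at most 1/4, so k rounds push the error below 4^-k:

    import random

    def is_probable_prime(n: int, rounds: int = 40) -> bool:
        # Handle small numbers and obvious composites first.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 as d * 2**s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        # Each random base has a >= 3/4 chance of exposing a composite.
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # witness found: definitely composite
        return True  # probably prime; error probability <= 4**-rounds

You never prove primality this way; you just crank up the rounds until the failure probability is smaller than, say, the chance of a hardware fault corrupting the answer anyway.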
See also: random UUIDs, hashing algorithms - all are prone to collisions, but have knobs you can turn to push the probability of collision to somewhere around "not before heat death of the universe" or thereabouts.
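Same idea as a back-of-the-envelope for UUIDs (a hypothetical helper of mine, assuming version-4 UUIDs with 122 random bits and the usual birthday approximation):

    from math import exp

    def v4_collision_probability(n: int) -> float:
        # Birthday bound: P(collision) ~= 1 - exp(-n * (n - 1) / 2**123),
        # since v4 UUIDs have 122 random bits (2**122 possible values).
        return 1 - exp(-n * (n - 1) / 2**123)

    print(v4_collision_probability(10**12))  # ~1e-13 for a trillion IDs

The knob here is simply how many random bits you spend; even a trillion IDs leaves the collision odds around 10^-13.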
This is the kind of approach we'll need with ML methods: accepting they can be randomly wrong, but developing them in ways that allow us to control the probability of error.
--
[0] - In theory, you can ensure your operating envelope is large enough to cover pretty much anything that could reasonably go wrong; in practice, having a clear-cut operating envelope also creates a pressure to shrink it to save money (it can be a lot of money), which involves eroding what "reasonably" means.
See, the whole problem is incompetence apologists like yourself.
It's not just that we can get close to 100% on elevator safety; we can get 100%, and cheaply! The exceptions are because of 1) criminal incompetence, and/or 2) patents forcing people to do things worse, and/or 3) malice.
I take it you've never worked at a cheap/local/small software shop, usually one associated with an advertising agency.
Because I would rather have those fill-in-the-blank forced prompts that just add form fields and obviously broken business logic to a generic template they curate.
Git shell is an opinionated MSYS2 install. They just picked one of the variants and removed the package manager.
This could serve as a wake-up call to the MSYS2 maintainers. Every time I install it, I have absolutely no f'ing idea what the 6 variant icons mean, and I couldn't care less, since all of them mostly work... (thankfully I don't use Windows enough to reach the point where the difference matters?). It's long overdue for them to just pick one and keep the others, with better names, in a compatibility folder or something. Also, a better name.
If you're not sure, you probably want UCRT64. At any rate, I used it to pull a rabbit out of a hat at work by building some software with it that didn't have an official Windows build.
Building in the UCRT64 environment means the executables won't run on versions of Windows older than 10 without extra runtime support, I think. So that's something to watch out for.
It's the year of our Lord 2024 and OSes still don't ship with a common set of open-license fonts covering most of Unicode, like the Nerd Fonts. Shameful.
I wish all the effort the big four wasted fighting for emoji supremacy had been used to standardize a decent set of full-Unicode fonts once and for all.
Wow, is Inter ever a beautiful typeface. A rare find when there seemed to be a glut of corporate NIH typefaces for a while.
macOS used to ship with a beautiful Hebrew typeface, similar to the open-source font Shofar, but it seems to have been removed. I do not see similar Hebrew letters in any of the remaining typefaces. I imagine that it is not easy to give a typeface a consistent feel, across Latin and non-Latin character sets, for fluent readers of those languages.
The number of fonts that include even two good alphabets is pretty low. I have run into needing English and Greek or Cyrillic in the same document. I know of maybe three good fonts that have both a good English and a good Greek. My eye isn't as practiced at looking at Cyrillic fonts, but that combination seems even rarer.
Which honestly makes sense, and there may not be a solution for it. Different alphabets and writing systems have their own typographical histories and conventions. It's reasonable that there is only a very limited design space in which you're adhering to the conventions of two separate systems while also maintaining internal consistency.
English is a language, Cyrillic is an alphabet, and Greek is both. The alphabet one finds in the range U+0000 to U+007F is called "Latin", not "English". If any alphabet has a claim on being "English", it's Shavian. (Found at U+10450 to U+1047F.)
Inter quickly became my go-to sans serif font for the web. In A/B tests it somehow always looks better than anything, frequently by wide margins—which is saying something when you’re basically running through a dozen variations on Helvetica.
Can't expect the big four to use their immeasurable wealth for the common good, now can we?
If anyone takes that step, it will have to be distro creators and maintainers. Google surely enjoys getting so much information from hapless users loading Google Fonts on websites.
In some important areas France has better laws than other EU countries. Another example, I believe, was about avoiding food waste in the supermarket supply chain and what happens to unsold food afterwards.
Proprietary OS problem. Linux distributions package and distribute all the fonts they're legally allowed to ship. Every font I ever cared to use can be found in the repositories. And they don't strip out features either. I get to customize everything about Fira Code.
> Every font I ever cared to use can be found in the repositories.
If you have to download/install them separately, then they don't ship with the OS. The fact that you can install them from the distro repositories means nothing to a third-party website, which can't assume you have done so.
Websites can't really assume anything that's not a web standard. If this is important to them, they should be lobbying the web standards bodies to come up with a list of fonts that browsers are required to make available. Then the browsers will pull them in as dependencies when they're installed.
The Linux distributions did their job with excellence. The fonts are all there, and installing them is a one-liner. The web folks made their own OS, though, so if anyone is to be blamed, it's whoever is in charge of that OS called the web platform.
Who's in charge of the web as a platform these days? Google. Who is no doubt profiting from the Google Fonts thing.
Correct. That's why I mentioned the Nerd Fonts as the holy grail.
But without the big OSes agreeing on a new Unicode, it's technically impossible today to have a single Unicode font: a single OpenType font tops out at 65,535 glyphs, far fewer than Unicode assigns. (See how Noto ships ~200 variants on Linux, totalling some 500 MB, each differing from the others by only a subset of characters. It's a monstrosity.)
Noto doesn't ship with the OS, and users need different fonts for different use cases.
The grandparent comment is saying that Microsoft, Google, and Apple could settle on a common set of open-licence fonts and bundle them with the OS (and Linux distros / other OSS OSes could do the same). Then web design & dev could rely on those fonts without having to serve them locally, embed them via Google Fonts, etc. Noto could indeed be one of the bundled fonts in this alternate reality.
But there's no real incentive for any of those big players to do so, and a disincentive for Google, who gain surveillance data from font embeds, as noted elsewhere in the thread.
Although if you're not too picky about the finer details or being perfectly consistent across every platform, the system fonts are generally good enough nowadays to put together a pretty decent stack without having to resort to serving web fonts.