Maybe personal computing is entering a slow decline.
Rising RAM prices over the next couple of years will make new computers and phones harder to justify, so people will keep devices longer. At the same time, Microsoft and Apple (et alia) continue shipping more demanding software packed with features no users asked for. Software growth has long driven hardware upgrades, but if upgrades no longer feel worthwhile, the feedback loop breaks. The question is whether personal computing keeps its central role or quietly becomes legacy infrastructure that people replace only when forced. And if it's the latter: what is the next era?
It seems to me that what you're saying isn't that personal computing is entering a slow decline but that the PC market is. If people continue to use PCs they already own, then personal computing is alive and well.
You cannot pick and choose one or two variables and then claim representativeness based on a numerical match. The first step is to identify the confounding variables that are likely to influence the outcome. Only after those are specified can a comparison set be defined and matching or adjustment criteria applied. Without that process, agreement on a small number of aggregate measures does not establish that the underlying populations or mechanisms are comparable.
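To make that last point concrete, here's a toy sketch (made-up numbers, not drawn from any real dataset) of two populations whose means and medians line up almost exactly while their underlying distributions are nothing alike:

```python
# Toy illustration: matching aggregates do not imply matching distributions.
import numpy as np

rng = np.random.default_rng(0)

# Population A: symmetric, moderate spread around 50.
a = rng.normal(loc=50.0, scale=5.0, size=100_000)

# Population B: a 50/50 mixture of two tight clusters at 40 and 60.
b = np.concatenate([
    rng.normal(40.0, 1.0, 50_000),
    rng.normal(60.0, 1.0, 50_000),
])

for name, x in [("A", a), ("B", b)]:
    print(name, f"mean={x.mean():.2f}", f"median={np.median(x):.2f}", f"std={x.std():.2f}")

# The means and medians agree (~50), but the spread -- and anything downstream
# that depends on the tails or the shape -- is completely different.
```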
I'll concede this; however, in large-scale demographic data, when the central tendencies of two populations align so closely, it is statistically unlikely that their underlying distributions are radically different. That puts the burden of proof on the claim that Ohio is somehow an outlier, rather than on the claim that it's a standard sample. Otherwise, were we to attempt to account for every confounding variable, we would be letting the perfect be the enemy of the good.
Paraphrasing: We are setting aside the actual issue and looking for a different angle.
To me this reads as a form of misdirection, intentional or not. A monopolist has little reason to care about downstream effects, since customers have nowhere else to turn. Framing this as "roll your own versus Cloudflare" rather than as "a monoculture CDN environment versus a diverse CDN ecosystem" feels off.
That said, the core problem is not the monopoly itself but its enablers: the collective impulse to align with whatever the group is already doing, the desire to belong and to appear to act the "right way", meaning the way everyone else behaves. There are a gazillion ways of doing CDN; why are we not doing them? Why the focus on one single dominant player?
I don't know the answer to all these questions. But here I think it is just a way to avoid responsibility. If someone chooses CDN “number 3” and it goes down, business people *might* put the blame on that person for not choosing “the best”.
I am not saying it is the right approach; I have just seen it happen too many times.
You are talking about the circular investments in the segment? Yes, but even if NVIDIA can get cheap access to the IP and products of failing AI unicorns through contracts, that does not mean they can run the LLM business profitably. Models are like fresh food: they start to rot from the training cut-off date onward and lose value. Re-training a model will always be very expensive.
Nothing says "this benchmark is invalid" quite like a zero false positive rate. Seemingly it is pre-2020 text versus a few models' reworkings of those texts. I can see this model falling apart in many real-world scenarios. Yes, LLMs use strange language if left to their own devices, and that can surely be detected. But a 0% false positive rate under all circumstances? Implausible.
Max, there are two problems I see with your comment.
1) The paper didn't show a 0% FNR. I mean, tables 4, 7, and B.2 are pretty explicit, and it's not hard to figure out from the others either.
2) A 0% error rate requires some pretty serious assumptions to hold. For that type of result not to be incredibly suspect, there would have to be zero noise in the data, the analysis, and every other part of the pipeline. I do not see that being true of the dataset in question.
Even high scores are suspect. Generalizing the previous point: a score is suspect whenever it is better than the noise level should allow. Can you truly attest that this condition holds?
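For a sense of scale, here is a toy calculation (the trial counts below are made up; nothing here comes from the paper) of how little an observed 0% error rate actually pins down:

```python
# With zero errors observed in n trials, the exact 95% Clopper-Pearson upper
# bound on the true error rate is 1 - 0.05**(1/n), roughly 3/n ("rule of three").

def upper_bound_zero_errors(n: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the true error rate after 0 errors in n trials."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n)

for n in (100, 1_000, 10_000):
    print(f"0 errors in {n:>6} trials -> true rate could still be up to "
          f"{upper_bound_zero_errors(n):.3%}")

# 0 errors in 100 trials is consistent with a true error rate near 3%.
# Claiming exactly 0% "under all circumstances" needs far more than a clean table.
```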
I suspect that you're introducing data leakage. I haven't looked deeply enough into your training and data to determine how it's happening, but you'll probably need a fairly deep analysis, because leakage is really easy to sneak in and can do so in non-obvious ways. A very common one is tuning hyperparameters on test results: you don't have to pass data to pass information. Another sly way for this to happen is a test set that isn't sufficiently disjoint from the training set. If the perturbation is too small, then you aren't testing generalization; you're testing a slightly noisy copy of the training set (and since your training should already be introducing noise to help regularize, you end up just measuring training performance).
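As a rough illustration of that last failure mode, here is a minimal sketch (hypothetical helper names, not anyone's actual pipeline) of a near-duplicate check between train and test texts; any hits mean the test set is closer to a perturbed training set than to an independent sample:

```python
# Flag test texts that are near-duplicates of training texts using
# character 5-gram Jaccard similarity. A real leakage audit would need
# stronger tooling, but even this catches "test = lightly perturbed train".

def char_ngrams(text: str, n: int = 5) -> set[str]:
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def flag_near_duplicates(train_texts, test_texts, threshold: float = 0.8):
    train_grams = [char_ngrams(t) for t in train_texts]
    for i, test in enumerate(test_texts):
        tg = char_ngrams(test)
        for j, trg in enumerate(train_grams):
            if jaccard(tg, trg) >= threshold:
                yield i, j  # test item i overlaps heavily with train item j

# Usage (with your own corpora):
#   for i, j in flag_near_duplicates(train_texts, test_texts):
#       print(f"test[{i}] ~ train[{j}]")
```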
Your numbers are too good, and that's suspect. You need a lot more evidence to show that they mean what you want them to mean.
Interesting. A more dystopian take is that human interaction will be limited to a privileged class. The unprivileged will "have to do" with machine interaction. It's not really something new. Automation lowers cost, but in the perpetual-growth economy there also needs to be more profit, which leaves charging more and/or lowering cost even further as the only viable options. See where this is going? As with food, the unprivileged will have to make do with energy-rich, low-quality, mass-produced interaction. Music is the next/current frontier: the privileged will seek out in-person events, while the unprivileged will "have to do" with energy-rich, low-quality, mass-produced, machine-made culture.
Sounds more like an upcoming revolution than a bubble popping then. Makes me kind of happy! My fear is that AI is and will be used to keep people content and passive.
When you work for a big corp and someone asks you to have a conversation like this where there is no upside for you, one of the best things you can do is copy the lawyers in and nope out of there as soon as you can.