
Boomers in the manager class love AI because it sells the promise of what they've longed for for decades: a perfect servant that produces value with no salary, no need for breaks, no pushback, no workers comp suits, etc.

The thing is, AI did suck in 2023, and even in 2024, but recently the best AI models are veering into not-sucking territory. From a distance that makes sense: if you throw the smartest researchers on the planet and billions of dollars at a problem, eventually something will give and the wheels will start turning.

There is a strange blindness many people have on here, a steadfast belief that AI will just never end up working, or will always be a scam. But the massive capex on AI now is predicated on eventually turning fledgling LLMs into self-adaptive systems that can manage any cognitive task better than a human. I don't see how the improvements we've seen over the past few years aren't heading in exactly that direction.



It still kinda sucks though. You can make it work, but you can also easily waste a huge amount of time trying to make it do something it's just incapable of. And it's impossible to know upfront whether it will work. It's more like gambling.

I personally think we have reached some kind of local maximum. I work 8 hours per day with Claude Code, so I'm very much aware of even subtle changes in the model. Considering how much money was thrown at it, I can't see much progress in the last few model iterations. Only the "benchmarks" are improving; the results I'm getting are not. If I care about some work, I almost never use AI. I also watch a lot of people streaming online to pick up new workflows, and they often say something like "I don't care much about the UI, so I just let it do its thing". I think that tells you more about the current state of AI for coding than anything else. Far from _not sucking_ territory.


> recently the best AI models are veering into not sucking territory

I agree with your assessment.

I find it absolutely wild that 'it almost doesn't entirely suck, if you squint' is suddenly an acceptable benchmark for a technology to be unleashed upon the public.

We have standards for cars, speakers, clothing, furniture, makeup, even literature. Someone can't just type up a few pages of dross and put it through 100 letterboxes without being liable for littering and nuisance. The EU and UK don't allow someone to sell phones with a pre-installed app that almost performs a function that some users might theoretically want. The public domain has quality standards.

Or rather, it had quality standards. But if it's apparently legal to put semi-functioning data collectors in technologies where nobody asked for them, why isn't it legal to sell chairs that collapse unless you hold them a specific way, clothes that don't actually function as clothes but could be turned into actual clothes by a competent tailor, or headphones that can be coaxed into sporadically producing sound for minutes at a time?

Either something works to a professional standard or it doesn't. If it doesn't, it is not (or was not) legal to include it in consumer products.

This is why people are angrier than a single unreliable program would justify. I don't care that much whether LLMs perform the functions that are advertised (and they don't, half the time). I care that after many decades of living in a first-world country with consumer protection and minimum standards, all of that seems to have been washed away in the AI wave. When it recedes, we will be left paying first-world prices for third-world engineering, now that the acceptable quality standard for everything seems to have dropped to "it can almost certainly be used for its intended purpose at least some of the time, by some people, with a little effort".


It's not a perfect servant by any means. And let's drop the generation game.



