
You can be a bear and still think AI will be big one day. It's quite plausible that LLMs will remain limited, that we won't find anything better for decades, and that the stocks crash. But saying AI will never be a big thing is just unrealistic.




I think we should split the definition: on one side, what LLMs can do today (or in the next few years) and how big a thing this particular capability can be (the impact being a derivative of the capability); on the other, what some future AI could do and how big a thing that future capability could be.

I regularly see people who distinguish between current and future capabilities, but then still lump the societal impact (how big a thing it could be) into a single projection.

The key bubble question: if that future AI is sufficiently far away (for example, if there is a gap, a new "AI winter," lasting a few decades), does the current capability alone justify the capital expenditures, and if not, by how much does it fall short?
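For illustration, that question reduces to a net-present-value comparison. Here's a toy sketch; every number in it is a made-up placeholder, not an estimate of anything real:

    # Toy back-of-the-envelope sketch of the capex question above.
    # All figures are hypothetical placeholders, not estimates.

    def npv(cash_flows, discount_rate):
        """Net present value of a series of annual cash flows."""
        return sum(cf / (1 + discount_rate) ** t
                   for t, cf in enumerate(cash_flows, start=1))

    capex = 400e9          # hypothetical industry-wide AI capital expenditure
    annual_revenue = 30e9  # hypothetical revenue from current LLM capabilities
    winter_years = 20      # assumed gap before the "future AI" arrives
    rate = 0.08            # assumed discount rate

    value_now = npv([annual_revenue] * winter_years, rate)
    shortfall = capex - value_now
    print(f"NPV of current capability: ${value_now / 1e9:.0f}B")
    print(f"Capex shortfall: ${shortfall / 1e9:.0f}B")

With these placeholder numbers the current capability covers roughly $295B of a $400B outlay, i.e. a ~$105B shortfall; the point is only that the answer swings entirely on the assumed revenue and the assumed length of the gap.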


Yeah, and how long can OpenAI etc. hang on without turning a profit?
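That part is just runway arithmetic: cash on hand divided by annual burn. A sketch with purely hypothetical numbers:

    # Toy runway sketch for the "hang on without profits" question.
    # Both inputs are made-up placeholders.

    cash_on_hand = 20e9  # hypothetical cash plus committed funding
    annual_burn = 8e9    # hypothetical net annual loss
    print(f"Runway without new funding: {cash_on_hand / annual_burn:.1f} years")

So the real question isn't the division, it's whether new funding keeps arriving before the runway runs out.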


