reminds me of the YouTube ads I get that are like "Warning: don't do this new weight loss trick unless you have to lose over 50 pounds, you will end up losing too much weight!". As if it's so effective it's dangerous.
I remain convinced that the steady stream of OpenAI employees who allegedly quit because AI was "too dangerous" for a couple of months was an orchestrated marketing campaign as well.
Ilya Sutskever out there as a ronin marketing agent, doing things like that commencement address he gave that was all about how dangerously powerful AI is.
I just had 5.1 do something incredibly brain-dead in "extended thinking" mode, because I know what I asked it is not in the training data. So it just fudged and made things up, because thinking is exactly what it cannot do.
It seems like LLMs are at the same time a giant leap in natural language processing, useful in some situations, and the biggest scam of all time.
> a giant leap in natural language processing, useful in some situations and the biggest scam of all time.
I agree with this assessment (reminds me of Bitcoin, frankly), possibly adding that the insight this tech gave us into language (in general) via the high-dimensional embedding space is a somewhat profound advance in our knowledge, besides the new superpowers in NLP (which are nothing to sniff at).