If the transcript is accurate, Karpathy never actually says in this interview that AGI is a decade away, nor does he make any concrete claim about how far away AGI is. Patel's title is misleading.
Hmm good point. I skimmed the transcript looking for an accurate, representative quote that we could use in the title above. I couldn't exactly find one (within HN's 80 char limit), so I cobbled together "It will take a decade to get agents to work", which is at least closer to what Karpathy actually said.
If anyone can suggest a more accurate and representative title, we can change it again.
Edit: I thought of using "For now, autocomplete is my sweet spot", which has the advantage of being an exact quote; but it's probably not clear enough.
Edit 2: I changed it to "It will take a decade to work through the issues with agents" because that's closer to the transcript.
Anybody have a better idea? Help the cause of accuracy out here!
It's a good suggestion, but where the 'autocomplete' quote is scoped too narrowly, this one is maybe scoped too broadly. Neither really represents what the article is about.
Oh that's clear, and the submitter didn't do anything wrong. It's just that on HN the idea is to find a different title when the article's own title is misleading or linkbait (https://news.ycombinator.com/newsguidelines.html).
The best way to do that of course is to find a more representative phrase from the article itself. That's almost always possible but I couldn't quite swing it in this case.
dang!! I have so much respect for this ironic situation where we are discussing the superpowers of AI while a very human, very decent being ponders deeply on how to compose a few words to make a suitable title.
Please can we have a future world where such moments still happen every so often.
>They don't have enough intelligence, they're not multimodal enough, they can't do computer use and all this stuff. They don't do a lot of the things you've alluded to earlier. They don't have continual learning. You can't just tell them something and they'll remember it. They're cognitively lacking and it's just not working.
>It will take about a decade to work through all of those issues. (2:20)
"The scalable method is you learn from experience. You try things, you see what works. No one has to tell you. First of all, you have a goal. Without a goal, there’s no sense of right or wrong or better or worse. Large language models are trying to get by without having a goal or a sense of better or worse. That’s just exactly starting in the wrong place."
and a bunch of similar things implying LLMs have no hope of reaching AGI
Please don't cross into personal attack. It's not what this site is for, and destroys what it is for.
Edit: please don't edit comments to change their meaning once someone has replied. It's unfair to repliers whose comments no longer make sense, and it's unfair to readers who can no longer understand the thread. It's fine, of course, to add to an existing comment in such a case, e.g. by saying "Edit:" or some such and then adding what else you want to say.