
Maybe I’m showing my roots in the symbolic AI era here, but for me it’s even simpler: the current tech hasn’t put us significantly closer to AGI because it’s still missing some key elements.

Hofstadter was on to something with his “strange loops” idea. He never really defined it well enough to operationalize, but I have a hard time faulting him for that because it’s not like anyone else has managed to articulate a less wooly theory. And it’s true that we’ve managed to incorporate plenty of loops into LLMs, but they are still pretty light on strangeness.

For example, something brains do that I suspect is essential to biological intelligence is self-optimization: they can run a process under inefficient-but-flexible conscious control, store information about that process in fast but volatile chemical storage, later consolidate that learning into physically encoded neural circuits, and even keep optimizing those circuits over time if they get signals indicating that doing so is worth the space and energy cost.
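
To make the shape of that loop concrete, here's a minimal toy sketch in Python. Everything in it (FastStore, SlowStore, the usage threshold) is invented for illustration; it's just the control flow of "solve flexibly, cache cheaply, consolidate what keeps getting used", not a claim about how brains or any real system actually implement it.

```python
# Toy sketch of the "flexible solve -> volatile cache -> consolidation" loop.
# All names here are hypothetical; this only illustrates the control flow.
import time


class FastStore:
    """Flexible but volatile: entries decay unless refreshed."""

    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self.entries = {}  # key -> (value, last_used, use_count)

    def put(self, key, value):
        self.entries[key] = (value, time.monotonic(), 1)

    def get(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        value, _, count = item
        self.entries[key] = (value, time.monotonic(), count + 1)
        return value

    def decay(self):
        # Drop anything that hasn't been touched recently.
        now = time.monotonic()
        self.entries = {k: v for k, v in self.entries.items()
                        if now - v[1] < self.ttl}


class SlowStore:
    """Cheap to read, expensive to write: stands in for 'physically encoded' circuits."""

    def __init__(self):
        self.compiled = {}

    def consolidate(self, key, value):
        self.compiled[key] = value  # placeholder for slow structural change


def solve(problem, fast, slow, expensive_general_solver, usage_threshold=3):
    """Try consolidated circuits first, then the volatile cache, then slow reasoning."""
    if problem in slow.compiled:
        return slow.compiled[problem]           # already consolidated: cheap path
    cached = fast.get(problem)
    if cached is not None:
        _, _, count = fast.entries[problem]
        if count >= usage_threshold:            # "worth the space and energy cost"
            slow.consolidate(problem, cached)
        return cached
    answer = expensive_general_solver(problem)  # inefficient-but-flexible path
    fast.put(problem, answer)
    return answer
```

The point of the sketch is just that the system decides, based on its own usage signals, which of its behaviors to recompile into a cheaper substrate, which is the kind of self-modifying loop I don't see in current LLM setups.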

Comparing that kind of thing to how LLMs work, I come away with the impression that the technology is still pretty primitive and we’re just making up for it with brute force. Kind of the equivalent of getting to an output of 10,000 horsepower by using a team of 25,000 horses.
