
I’d say that an increasingly common strand is that the way LLMs work is so wildly different from how we humans operate that it is effectively an alien intelligence pretending to be human. We have never fully understood, and still don’t, why LLMs work the way they do.

I’m of the opinion that AGI is an anthropomorphizing of digital intelligence.

The irony is that as LLMs improve, they will become both better at “pretending” to be human and even more alien in the way they work. This will become even more true once we allow LLMs to train themselves.

If that’s the case, then I don’t think human criteria are really applicable here, except as an evaluation of how it relates to us. Perhaps your list is applicable to LLMs relative to humans, but many think we need new metrics for intelligence.

