None of those models can learn continuously. LLMs currently can't add new tokens to their vocabulary after training, which AGI would need to do. That's a big problem.
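To make the frozen-vocabulary point concrete, here's a toy sketch (not any real tokenizer; the vocab and greedy longest-match rule are illustrative assumptions): once the subword vocabulary is fixed at training time, a novel word can only ever be expressed as pieces of existing entries — no new entry is learned.

```python
# Toy frozen vocabulary, fixed at "training" time (illustrative only).
FROZEN_VOCAB = {"un": 0, "break": 1, "able": 2}

def tokenize(word: str) -> list[str]:
    """Greedy longest-match subword split against the frozen vocab.

    Novel words are forced onto existing pieces; characters that match
    no piece become '<unk>'. Nothing is ever added to the vocab.
    """
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in FROZEN_VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append("<unk>")  # no known piece starts here
            i += 1
    return pieces

print(tokenize("unbreakable"))  # ['un', 'break', 'able']
print(tokenize("xyzzy"))       # all '<unk>' — and it stays that way forever
```

Real tokenizers (BPE, WordPiece) are more sophisticated, but the constraint is the same: the vocabulary table is baked in before deployment, so "learning" a genuinely new symbol would require retraining or fine-tuning, not inference.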

Before anyone says "context", I want you to think about why that doesn't scale, and why it fails to count as learning.
