
Asking "what is the most likely cause of this set of facts?" is exactly how diagnostics works. LLMs are tailor-made for this type of use case.


That's not what they're trained on, though, and not what they're doing.

When you ask a general LLM like ChatGPT "I have a headache, what is the cause?" it's going to give you the most statistically likely response in its training data. That might coincide with the most likely cause, but it's also very possible it doesn't, especially for problems with widespread misconceptions.
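To make the distinction concrete, here's a minimal sketch (with made-up numbers, not real medical data) of why "most common answer in the corpus" and "most probable cause given the evidence" can diverge:

```python
from collections import Counter

# Hypothetical illustration: what people *write* about a symptom online
# (corpus frequencies) vs. diagnostic reasoning over prevalence.
corpus = ["dehydration"] * 50 + ["brain tumor"] * 30 + ["tension"] * 20

# A model trained to imitate the corpus tends toward the corpus mode.
corpus_answer = Counter(corpus).most_common(1)[0][0]

# Diagnostics instead weighs prior prevalence times likelihood:
# P(cause | headache) is proportional to P(headache | cause) * P(cause).
prior = {"tension": 0.40, "dehydration": 0.30, "brain tumor": 0.0001}
likelihood = {"tension": 0.9, "dehydration": 0.7, "brain tumor": 0.95}
posterior = {c: prior[c] * likelihood[c] for c in prior}
diagnostic_answer = max(posterior, key=posterior.get)

print(corpus_answer)      # dehydration  (the corpus mode)
print(diagnostic_answer)  # tension      (highest posterior mass)
```

With these toy numbers the two answers disagree: the corpus over-represents dramatic causes, so imitating it doesn't reduce to computing the posterior.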




