That’s not what they’re trained on though and not what they’re doing.
When you ask a general LLM like ChatGPT "I have a headache, what is the cause?", it's going to give you the most statistically likely response given its training data. That might coincide with the most likely actual cause, but it's also very possible it doesn't, especially for conditions with widespread misconceptions.
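To make the point concrete, here's a toy sketch of greedy decoding: the model returns whatever continuation was most frequent in its corpus, not whatever is medically most accurate. All the probabilities and cause names below are invented for illustration, not real model output.

```python
# Hypothetical next-token-style distribution over headache "causes",
# as a model might have absorbed from its training text.
next_token_probs = {
    "dehydration": 0.34,   # common in everyday text
    "stress": 0.30,
    "eye strain": 0.21,
    "brain tumor": 0.15,   # rare in reality, over-represented in scary posts
}

def most_likely(probs):
    """Greedy decoding: return the highest-probability continuation."""
    return max(probs, key=probs.get)

print(most_likely(next_token_probs))
```

If misconceptions dominate the corpus, the top-probability answer tracks the misconception, not the medicine.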