Nobody would use these services for anything important if they _actually understood_ that these glorified Markov chains are just as likely to confidently assert something false, and lie about it when pressed, as they are to produce accurate information.
These AI companies have sold a bill of goods, but the right people are making money off it, so they’ll never be held responsible in scenarios like the one you described.
Hasn't statistical analysis been a legitimate tool for aiding diagnosis since forever? It's not exactly surprising that a pattern matcher does reasonably well at matching symptoms to diseases.
That’s not what they’re trained on, though, and not what they’re doing.
When you ask a general LLM like ChatGPT “I have a headache, what is the cause?” it’s going to give you the most statistically likely response from its training data. That might coincide with the most likely cause, but it’s also very possible it doesn’t, especially for problems with widespread misconceptions.
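A toy sketch of that failure mode (every number here is invented purely for illustration, not real medical data): the answer that shows up most often in scraped web text isn't necessarily the most prevalent cause in reality, so a model that echoes its corpus can pick the wrong one.

```python
# Hypothetical counts of "cause of headache" claims in scraped web text.
# Folk explanations and scary causes tend to be overrepresented online.
training_text_counts = {
    "dehydration": 9000,
    "tension": 4000,
    "caffeine withdrawal": 2000,
    "brain tumor": 1500,
}

# Hypothetical real-world prevalence among headache presentations.
actual_prevalence = {
    "tension": 0.70,
    "caffeine withdrawal": 0.15,
    "dehydration": 0.10,
    "brain tumor": 0.001,
}

# A pure pattern matcher surfaces whatever dominates its training text...
most_likely_in_text = max(training_text_counts, key=training_text_counts.get)
# ...which need not match what's actually most common.
most_likely_in_reality = max(actual_prevalence, key=actual_prevalence.get)

print(most_likely_in_text)     # dehydration (the popular misconception)
print(most_likely_in_reality)  # tension (the actual most common cause)
```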