
While they aren't stopping users from getting medical advice, the new terms (which they say are pretty much the same as the old terms) seem to prohibit users from seeking medical advice even for themselves, if that advice would otherwise come from a licensed health professional:

https://openai.com/en-GB/policies/usage-policies/

  Your use of OpenAI services must follow these Usage Policies:

    Protect people. Everyone has a right to safety and security. So you cannot use our services for:

      provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional


It sounds like you should never trust any medical advice you receive from ChatGPT and should seek proper medical help instead. That makes sense: OpenAI doesn't want to be held responsible for any medical advice that goes wrong.

There is one obvious caveat: even if LLMs were the best health professionals, they would only have the information that users voluntarily provide through text/speech input. That's not how real health services work. Medical science now relies on blood work and other tests that LLMs do not (yet) have access to, so LLM advice can be wrong simply for lack of test results. For this reason, it makes sense never to trust an LLM with specific health advice.


>It sounds like you should never trust any medical advice you receive from ChatGPT and should seek proper medical help instead. That makes sense. The OpenAI company doesn't want to be held responsible for any medical advice that goes wrong.

While what you're saying is good advice, that's not what they are saying. They want people to be able to ask ChatGPT for medical advice, give answers that sound authoritative and well grounded in medical science, but then disavow any liability if someone follows that advice, because "Hey, we told you not to act on our medical advice!"

If ChatGPT is so smart, why can't it stop itself from giving out advice that should not be trusted?


At times the advice is genuinely helpful. However, it's practically impossible to know in which exact situations the advice will be accurate.


I think ChatGPT is capable of giving reasonable medical advice, but given that we know it will hallucinate the most outlandish things, and given its propensity to agree with whatever the user is saying, I think it's simply too dangerous to follow its advice.


And it’s not just lab tests and bloodwork. Physicians use all their senses. They poke, they prod, they manipulate, they look, listen, and smell.

They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.


> They poke, they prod, they manipulate, they look, listen, and smell.

Rarely. Most visits are over in 5 minutes. The physician who takes their time to check everything like you claim barely exists anymore.


Here in Canada, ever since COVID most "visits" are a telephone call. So the doctor just listens to your words (same as text input to an LLM) and orders tests (which can be uploaded to an LLM) if needed.


For a good 90% of typical visits to doctors this is probably fine.

The difference is a telehealth doctor is much better at recognizing "I can't give an accurate answer for this over the phone, you'll need to have some tests done", or at casting doubt on the accuracy of the patient's claims.

Before someone points out telehealth doctors aren't perfect at this: correct, but that should make you more scared of how bad sycophantic LLMs are at the same thing, not a reason to call it even.


> telehealth is much better at recognizing "I can't give an accurate answer for this over the phone, you'll need to have some tests done"

I'm not sure this is true.


Again, it's not that all telehealth doctors are great at this; it's that LLMs, when continually prompted, are awful about caving in and saying something (with warnings the reader will opt to ignore) instead of being adamant that things are just too uncertain to say anything of value.

This is largely because an LLM guessing an answer is rewarded more often than just not answering, which is not true in the healthcare profession.
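To make that concrete, here's a toy sketch (my own illustration with made-up numbers, not anything from a real training setup): under accuracy-only grading, any guess has a positive expected score while "I don't know" scores zero, so a model optimized that way learns to always answer.

  # Toy sketch (hypothetical numbers, not any real training setup):
  # expected score of guessing vs. abstaining under accuracy-only grading.
  def expected_reward(p_correct, reward_correct=1.0, penalty_wrong=0.0):
      # Reward for a correct answer, optional penalty for a wrong one.
      return p_correct * reward_correct - (1 - p_correct) * penalty_wrong

  ABSTAIN = 0.0  # "I don't know" typically scores zero

  for p in (0.1, 0.3, 0.5):
      print(f"p={p:.1f}  guess={expected_reward(p):+.2f}  abstain={ABSTAIN:+.2f}  "
            f"guess_with_penalty={expected_reward(p, penalty_wrong=1.0):+.2f}")

  # Even a 10% guess beats abstaining (0.10 > 0.00) unless wrong answers
  # are penalized, which is the opposite of the incentives in medicine.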


I follow the logic, I'm just not sure the claim is right.


LLMs almost never reply with "I don't know". There are mountains of research as to why, but it's very well documented behavior.

Even in the rare case where an LLM does reply with "I don't know, go see your doctor", all you have to do is ask again until you get the response you want.


That depends entirely on what the problem is. You might not get a long examination on your first visit for a common complaint with no red flags.

But even then just because you don’t think they are using most of their senses, doesn’t mean they aren’t.


It depends entirely on the local health care system and your health insurance. In Germany, for example, it comes in two tiers: premium or standard. Standard comes with no time for the patient (or not even being able to get an appointment).


I don’t know anything about German healthcare.

In the US people on Medicaid frequently use emergency rooms as primary care because they are open 24/7 and they don’t have any copays like people with private insurance do. These patients then get far more tests than they’d get at a PCP.


> Physicians use all their senses. They poke, they prod, they manipulate, they look, listen, and smell.

Sometimes. Sometimes they practice by text or phone.

> They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.

If I had to guess, I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate, and they get more time than they would with a person.


> Sometimes. Sometimes they practice by text or phone.

For very simple issues. For anything even remotely complicated, they’re going to have you come in.

> If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person.

It’s not just about being intentionally deceptive. It’s very easy to get chat bots to tell you what you want to hear.


Agreed, but I'm sure you can see why people prefer the infinite patience and availability of ChatGPT over having to wait weeks to see your doctor, see them for 15 minutes, only to be referred to another specialist who's available weeks away and has an arduous hour-long intake process, all so you can get 15 minutes of their time.


ChatGPT is effectively an unlimited resource. Whether doctor’s appointments take weeks or hours to secure, ChatGPT is always going to be more convenient.

That says nothing about whether it is an appropriate substitute. People prefer doctors who prescribe antibiotics for viral infections, so I have no doubt that many people would love to use a service that they can manipulate to give them whatever diagnosis they desire.


So ask it what blood tests you should get, pay for them out of pocket, and upload the PDF of your labwork?

Like it or not, there are people out there who really want to use WebMD 2.0. They're not going to let something silly like blood work get in their way.


Exactly. One of my children lives in a country where you can just walk into a lab and get any test. Recently they were diagnosed by a professional with a disease that ChatGPT had already diagnosed before they visited the doctor, so we were prepared to ask more questions when the visit happened. I would say ChatGPT really did help us.


That makes sense. ChatGPT helped by providing initial guidance regarding your child's medical condition. After that, however, you visited a doctor who takes responsibility for the next steps. This is the ideal scenario.

AI can give you whatever information, be it right or wrong, but it takes zero responsibility.


IANAL but I read that as forbidding you to provision legal/medical advice (to others) rather than forbidding you to ask the AI to provision legal/medical advice (to you).


IANAL either, but I read it as forbidding using the service to provision medical advice at all, since they only mentioned the service and not anyone else.

I asked the "expert" itself (ChatGPT), and apparently you can ask for medical advice, you just can't use the medical advice without consulting a medical professional:

Here are relevant excerpts from OpenAI’s terms and policies regarding medical advice and similar high-stakes usage:

From the Usage Policies (effective October 29, 2025):

“You may not use our services for … provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

Also: “You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making … medical … decisions about them.”

From the Service Terms:

“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”

In plain terms, yes—the Terms of Use permit you to ask questions about medical topics, but they clearly state that the service cannot be used for personalized, licensed medical advice or treatment decisions without a qualified professional involved.


> you can ask for medical advice, you just can't use the medical advice without consulting a medical professional

Ah drats. First they ban us from cutting the tags off our mattress, and now this. When will it all end...


Would be interested to hear a legal expert weigh in on what 'advice' is. I'm not clear that discussing medical and legal issues with you is necessarily providing advice.

One of the things I respected OpenAI for at the release of ChatGPT was not trying to prevent these topics. My employer at the time had a cutting-edge internal LLM chatbot which was post-trained to avoid them; something I think OpenAI was forced to be braver about in their public release because of the competitive landscape.


Is there anything special regarding ChatGPT here?

I am not a doctor, so I can't give medical advice no matter what my sources are, except maybe if I am just relaying information an actual doctor has given me, but that would fall under the "appropriate involvement" part.


The important terms here are "provision" and "without appropriate involvement by a licensed professional".

Both of these, separately and taken together, indicate that the terms apply to how the output of ChatGPT is used, not to a change in its output itself.


> such as legal or medical advice, without appropriate involvement by a licensed professional

Am I allowed to get haircutting advice (in places where there's a license for that)? How about driving directions? Taxi drivers require licensing. Pet grooming?


CYA move. If some bright spark decides to consult Dr. ChatGPT without input from a human M.D., and fucks their shit up as a result, OpenAI can say "not our responsibility, as that's actually against our ToS."


I don't think giving someone "medical advice" in the US requires a license per se; legal entities use "this is not medical advice" type disclaimers just to avoid liability.


What’s illegal is practicing medicine. Giving medical advice can be “practicing medicine” depending on how specific it is and whether a reasonable person receiving the advice thinks you have medical training.

Disclaimers like “I am not a doctor and this is not medical advice” aren’t just for avoiding civil liability, they’re to make it clear that you aren’t representing yourself as a doctor.



