
I think this is wrong. Others in this thread are noticing a change in ChatGPT's behavior for first-party medical advice.


But OpenAI's head of Health AI says that ChatGPT's behavior has not changed: https://xcancel.com/thekaransinghal/status/19854160578054965... and https://x.com/thekaransinghal/status/1985416057805496524

I trust what he says over general vibes.

(If you think he's lying, what's your theory on WHY he would lie about a change like this?)


Also possible: he's unaware of a change implemented elsewhere that (intentionally or unintentionally) has resulted in a change of behaviour in this circumstance.

(e.g. are the terms of service, or excerpts of them, available in the system prompt or search results for health questions? If so, a response under the new ToS would produce different outputs without any intentional change in the model's "behaviour".)


My theory is that he believes 1) people will trust him over what general public say, and 2) this kind of claim is hard to verify to prove him wrong.


That doesn't answer why he would lie about this, just why he thinks he would get away with it. What's his motive?



