Hacker News

For all my constant freak-outs about AI in general, it turned out to be a godsend last year when my wife’s mom was hospitalized (she passed away a few weeks later). Multimodal ChatGPT had just become available on mobile, so being able to feed it photos of her vital-sign monitors to figure out what was going on, have it translate what the doctors were telling us in real time, and explain things clearly made an incredible difference. I even used it to interpret legal documents and compare them with what the attorneys were telling us — again, super helpful.

And when the bills started coming in, it helped there too. Hard to say if we actually saved anything — but it certainly didn’t hurt.



Doubters say it's not accurate, or that it could hallucinate. But the thing about hiring professionals is that you have to trust them blindly, because you'd need professional-level knowledge yourself to judge who is competent.

LLMs are a good way to double-check whether the service you're getting is about right, or to steer professionals toward the right hypothesis when they have some confirmation bias. This assumes you know how to prompt with plenty of information and open questions that don't contain leading presuppositions.

An LLM read my wife's blood lab results and found something the doctor was ignoring.


All these things are language parsing and transformation. That's the kind of thing LLMs are good at.


And statistical modelling. LLMs must have weights that associate _saw X chemical high, so Y condition likely follows, in the documents I've read_.
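A toy sketch of the kind of co-occurrence statistic that comment gestures at — the lab values, condition names, and counts below are all invented, and real model weights are far more diffuse than a simple conditional frequency table:

```python
from collections import Counter

# Hypothetical corpus of (observation, diagnosis) pairs seen in "documents".
docs = [
    ("ferritin high", "hemochromatosis"),
    ("ferritin high", "inflammation"),
    ("ferritin high", "hemochromatosis"),
    ("tsh high", "hypothyroidism"),
]

def conditional(pairs, obs):
    # Estimate P(condition | observation) from raw co-occurrence counts.
    matching = Counter(cond for o, cond in pairs if o == obs)
    total = sum(matching.values())
    return {cond: n / total for cond, n in matching.items()}

print(conditional(docs, "ferritin high"))
```

Given "ferritin high", the corpus above makes "hemochromatosis" twice as likely as "inflammation" — roughly the "X seen, so Y likely follows" association the commenter describes, just made explicit.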



