> it did not do negotiating and it doesn't seem like the accuracy of its understanding of medicare practices was actually checked. The author reasonably accused the hospital of gouging and the hospital came back with a much lower offer.
I'm increasingly of the opinion that AI gives people more confidence than insight. The author probably could have thought of the same or similar things to assert to the hospital and gotten the same result. However, he wouldn't necessarily have thought his assertions would be convincing, since he has no idea what's going on. AI doesn't either, but it seems like it does.
I've found LLMs helpful for figuring out what I don't know, then I can go and look up how those things work, again together with an LLM.
But in the past, once I got to the point where I knew I could maybe do something about a problem, but not exactly what, and I didn't know any of the domain vocabulary, I was pretty much stuck unless I asked other people, either locally or on the internet.
At least now I can explore what I don't know, and decide if it's relevant or not. It's really helpful when diving into new topics, because it gives you a starting point.
I would never send something to a real human that an LLM composed without me; I still want to write and decide everything 100% myself. But I increasingly use LLMs as a powerful search engine, where you can put in synonyms or questions and get reasonably good answers back.
Absolutely. It's cheap (as far as the user is concerned) to just fire off a question. And it can even be really fuzzy/ambiguous/ill-defined sometimes. It's a great starting point.
"But fight with knowledge. My $20/month subscription to Claude more than paid for itself. Yes, AI assistants can hallucinate and give you garbage. So I didn’t rely on it. I spot checked by looking up its big findings myself and found it was right. I also had ChatGPT, to which I subscribed for one month just to do this, read the letter and fact check it. No notes."