>The key is not to give it any agency over the work product, but rather to have it act as an editor or advisor that can offer suggestions; everything that goes into the document is typed by human hands.
>Giving it a document and asking it about edge cases or things that may not be covered in the document.
As an attorney, how am I supposed to trust that it gave a proper output on the edge cases without reading the document myself?
>Asking it for various ways that one could argue against a given pleading and then considering ways that those could be headed off before they could even be raised.
Do people think attorneys don't know how to do their day-to-day jobs? We generally do not have trouble coming up with ways to argue against a pleading. Maybe if you're some sort of small-time generalist working on an issue you haven't handled before, but that's not most attorneys. And even then, I'd be worried: you basically don't have the expertise needed to verify the model's output for correctness anyway. This is why attorneys work in networks. I'd just find a colleague or a network of attorneys specializing in that area and find out from them what is needed, rather than trusting that an LLM knows all that because it digested the entire public Internet.
I've said it here before too: I think people talking about using AI as an attorney don't really understand what attorneys do all day.