The key is not to give it any agency over the work product, but rather to have it act as an editor or advisor that can offer suggestions; everything that goes into the document is typed by human hands.
Giving it a document and asking it about edge cases or things that may not be covered in the document. Asking it for various ways that one could argue against a given pleading, then considering how those arguments could be headed off before they are even raised.
In my own case (writing short fiction), having it act as an editor, identifying grammatical mistakes, contradictory statements, ambiguous sentences, and tone mismatches for a given character, has been very helpful... but I don't have it write the short fiction for me.
---
For software, where it may be used to generate some material ("write a short function that does..."), the key is short: something that I can verify and reason about without too much effort.
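For example, something at roughly this scale (a hypothetical sketch in Python, not from any real project):

```python
# A hypothetical sketch of the scale I mean: small enough to verify
# by reading it once, with a couple of spot checks.
def clamp(value: float, low: float, high: float) -> float:
    """Return value limited to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(high, value))

assert clamp(5, 0, 10) == 5    # in range: unchanged
assert clamp(-3, 0, 10) == 0   # below range: clamped to low
assert clamp(42, 0, 10) == 10  # above range: clamped to high
```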
However, changes on the scope of hundreds of lines are exhausting to review whether an LLM or a junior dev wrote them. I would expect the same to be true of several paragraphs or pages of legalese, which would demand additional levels of reading, reasoning, and verifying.
If it's too much to reason about and verify, it's asking too much.
I'd no more trust an LLM to find citations to cases than I'd trust it to program against a lesser-known framework (where LLMs have been notorious for hallucinating functions that don't exist).
>The key is not to give it any agency over the work product, but rather to have it act as an editor or advisor that can offer suggestions; everything that goes into the document is typed by human hands.
>Giving it a document and asking it about edge cases or things that may not be covered in the document.
As an attorney, how am I supposed to trust that it gave a proper output on the edge cases without reading the document myself?
>Asking it for various ways that one could argue against a given pleading, then considering how those arguments could be headed off before they are even raised.
Do people think attorneys don't know how to do their day-to-day jobs? We generally do not have trouble coming up with ways to argue against a pleading. Maybe if you're some sort of small-time generalist working on an issue you hadn't handled before, but that's not most attorneys. And even then, I'd be worried: you basically lack the expertise needed to verify the model's output for correctness anyway. This is why attorneys work in networks. I'd just find a colleague or a network of attorneys specializing in that area and find out from them what is needed, rather than trusting that an LLM knows all of that because it was digested from the entire public Internet.
I've said it here before too: I think people talking about using AI as an attorney don't really understand what attorneys do all day.