Whether you’re an early adopter or an avowed Luddite, artificial intelligence is here, and it is already changing the practice. With the 2021 update to Cal. Rules of Professional Conduct 1.1, competence dictates that one must “…keep abreast of the changes in the law and its practice, including the benefits and risks associated with relevant technology…” Luddite lawyers are not exempt. Ignorance is now incompetence. So, here’s a crash course in the fundamentals, including how artificial intelligence works, so one can better understand AI’s output by knowing what’s going on under the hood.
Generative artificial intelligence, what one may know as ChatGPT, Claude, or other brand names, creates new output from learned data. In the lawyer world, we’re most likely to encounter output from large language models, known as LLMs. These models ingest billions of words harvested from various places (hence the intellectual property lawsuits from authors and the New York Times), look for statistical relationships between words, letters, and numbers, and generate output from those relationships. Ask a model a question and you’re tapping into those relationships. An example? Ask ChatGPT for California’s state capital. This could produce Sacramento (present day), Vallejo (1852-1853), Benicia (1853-1854), or San Francisco (1862). While most would be looking for Sacramento, the other three are also correct, just less likely to appear.

This differs from what are known as hallucinations: situations where the model creates factually incorrect information from those same statistical relationships. The New York lawyer who submitted a brief researched by AI without checking the citations learned this the hard way. The LLM saw statistical relationships between case citation formats and produced some of its own. Unfortunately, they weren’t real, a fact the federal judge was less than pleased about.
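For the technically curious, here’s a toy Python sketch of the word-prediction idea behind both the correct answers and the hallucinations. The probabilities are invented for illustration; a real model scores hundreds of thousands of candidate tokens, not four.

import random

# Toy illustration only: the probabilities below are invented.
# A real LLM learns scores for hundreds of thousands of candidate tokens.
next_word_probs = {
    "Sacramento": 0.92,     # the answer most training text supports
    "San Francisco": 0.04,  # briefly the capital in 1862
    "Vallejo": 0.02,        # capital 1852-1853
    "Benicia": 0.02,        # capital 1853-1854
}

prompt = "California's state capital is"

# The model usually picks the most likely continuation...
print(prompt, max(next_word_probs, key=next_word_probs.get))

# ...but because it samples, a less likely (still correct) answer can appear.
print(prompt, random.choices(list(next_word_probs),
                             weights=list(next_word_probs.values()))[0])

Run it a few times and, every so often, Vallejo shows up, which is the same reason a chatbot occasionally surprises. A hallucination is what happens when the statistically plausible continuation was never true in the first place.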
The American Bar Association’s Formal Opinion 512, Generative Artificial Intelligence Tools, tackles the ethical implications of AI use. It is worth a full read. A massive oversimplification? (1) One must check AI’s work, and signing one’s name to its output means one owns any errors; (2) one must still keep client information confidential; (3) one must be transparent with clients and the court about AI use; and (4) one can’t bill massive hours for what AI spat out instantaneously. The concerns about factual errors, client confidentiality, and sometimes clunky language raise a logical question: What is generative AI actually useful for in the practice?
Confidentially, what’s AI good for?
Let’s address confidentiality first. Here’s where Anthropic’s Claude shines. As of this writing, Claude offers a HIPAA-compliant version, which means it doesn’t train on the data one dumps in. In other words, one can put a client’s medical records into Claude and ask for a summary without worrying that those records will surface in some stranger’s Claude session next week. Information sandboxing is a hot topic right now and varies from product to product. Before one turns on Microsoft’s Copilot, Adobe’s AI, or any other AI service, one needs to know where that data is going, or risk violating confidentiality rules.
With confidentiality addressed, one is limited only by one’s imagination and prompting skills. Learning good prompting makes all the difference in output, and the answers AI generates improve with context. For example: “Imagine you are a personal injury lawyer preparing for trial, and the defense has disclosed an expert whose deposition has been taken many times before. I am dropping those depositions into the chat. I want you to review them for inconsistent testimony, with a citation to the deposition, page, and line, and a one-sentence summary of each inconsistency.” AI becomes a powerful research and drafting tool when given the right guardrails.
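For those who would rather script this than paste into a chat window, here’s a minimal sketch using Anthropic’s Python SDK. The model name and file name are placeholders, so check Anthropic’s current documentation before relying on either.

import anthropic

# Assumes the ANTHROPIC_API_KEY environment variable is set.
client = anthropic.Anthropic()

# Hypothetical file name, for illustration only.
deposition = open("expert_deposition.txt").read()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use a current model name
    max_tokens=2048,
    system="Imagine you are a personal injury lawyer preparing for trial. "
           "The defense has disclosed an expert whose deposition has been "
           "taken many times before.",
    messages=[{
        "role": "user",
        "content": "Review this deposition for inconsistent testimony, with "
                   "a citation to the deposition, page, and line, and a "
                   "one-sentence summary of each inconsistency:\n\n" + deposition,
    }],
)

print(message.content[0].text)

The plumbing is the easy part; the context and the specific ask are what make the output useful.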
It ain’t me
Generative AI isn’t yet ready to replace one’s own language. I know, because I’ve tried. I dumped 160 prior articles I wrote (about 128,000 words) into an AI window and prompted it to use my language and tone to write 800 words on what lawyers need to know about AI. The result was a mildly informative but clunky word salad with a few try-too-hard obscure movie and hip-hop references that a Reagan-era kid would get. Similarly, one can ask Copilot to draft all one’s email responses, but for now they’ll sound exactly like what they are: AI-generated responses. That will likely change soon. In the meantime, if one is looking for a “Wow!”, try dumping the next thing one needs to learn (20 peer-reviewed articles debating glyphosate’s carcinogenic propensities, for example) into Google’s NotebookLM and asking it to create an audio overview. The resulting insta-podcast, complete with humanizing verbal tics, is jaw-dropping. We humans may need to give up our podcasting jobs soon.
More broadly, expect AI tools for the legal field to improve significantly in the short term. AI won’t replace lawyers. But much like the textile machines loathed by the 19th-century Luddites, AI will let the lawyers who adopt it do far more in less time. Adopting it now and using it creatively to improve workflow will give one a significant advantage over those who don’t.
A version of this article originally appeared in Plaintiff magazine’s December 2024 issue, where Miles has written his monthly Back Story column for almost 15 years. Interested in Plaintiff and its coverage? Read more at plaintiffmagazine.com.
Coopers LLP helps seriously injured people and accepts referrals and co-counsel opportunities from lawyers. We excel in strategizing. Have a matter you’d like to brainstorm? Contact us at (415) 434-2111 or info@coopers.law.
Coopers LLP has lawyers licensed in California, Oregon, and Washington State, and can affiliate with local counsel on matters where Coopers can make the difference.