Ever since AI large language models (LLMs) like ChatGPT burst onto the scene, a myriad of industries have taken advantage of them in the name of “efficiency.” But without adequate oversight, that pursuit of efficiency quickly turns into flagrant corner-cutting. In high-stakes industries like law and medicine, one simple mistake can be catastrophic, and LLMs are known to produce “hallucinations” that get critical information wrong.
Nevertheless, LLMs aren’t going anywhere anytime soon, and while they may be useful for preliminary searches and organizing information, they MUST be used with careful consideration and extensive review. I want to make Lewis Law Firm’s policies regarding AI clear so that you, as my client, can be 100% certain about what you get when you hire our team.
Why AI is dangerous for lawyers
Recently, an attorney in Utah was sanctioned after a law clerk used ChatGPT to draft a petition that cited non-existent cases. According to documents reviewed by ABC4, “It appears that at least some portions of the Petition may be AI-generated, including citations and even quotations to at least one case that does not appear to exist in any legal database (and could only be found in ChatGPT) and references to cases that are wholly unrelated to the referenced subject matter.”
In California, a judge called out two law firms working on a civil case for filing documents that were completely made up by Google Gemini. The filings didn’t just cite one or two cases that didn’t exist; they were riddled throughout with fake cases and made-up quotes. The firms were ultimately fined $31,000 for their actions. Judges across the country are increasingly aware of this AI problem and are sanctioning the lawyers who appear in their courtrooms. Read more here: legal filings.
Further, even AI software specifically designed for lawyers gets things wrong. A recent Stanford study found that while bespoke AI-driven law software such as Lexis+ AI, Westlaw AI-Assisted Research, and Ask Practical Law AI do reduce errors compared to general-purpose LLMs like ChatGPT, they “still hallucinate an alarming amount of the time: the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.”
The bottom line is that blindly trusting a glorified content aggregator to draft legal documents, which can heavily impact the lives of those involved in legal cases, is unethical and dangerous. It ultimately diminishes the strength of a case, exposes attorneys as careless, and could cost them their careers.
When is AI appropriate? When is it not?
While we take this topic very seriously, we also want to make clear that we are not against using AI when it’s appropriate; for many behind-the-scenes tasks, it is a genuine aid to efficiency.
At Lewis Law Firm, we may use AI to assist with repetitive, time-consuming, in-house tasks such as:
- Creating a medical chronology from thousands of pages of duplicative documents
- Creating deposition summaries after we have taken depositions and read them ourselves
- E-Discovery
- Obtaining medical records from third parties
However, we will never use AI for the following:
- Drafting any kind of legal document
- Writing legal briefs
- Drafting opening statements or closing arguments for trial
- Communicating with clients
- Negotiating settlements with opposing counsel and mediators
AI is not a trial lawyer and will never be able to replace 28 years of legal experience and a law degree. It lacks judgment, empathy, creativity, and the ability to navigate human nuance, all qualities of a good attorney.
AI is now an inevitable part of our lives, but any attorney worth consulting knows it must be used responsibly.
If you have any questions about my policies regarding the use of these programs, give us a call. We are not afraid to offer full transparency.