AI is already in your firm. The question is whether anyone has written down the rules for using it.
Lawyers, paralegals, and staff are using ChatGPT, Claude, Microsoft Copilot, Lexis+ AI, and other generative AI tools right now. Some firms have made deliberate decisions about which tools to adopt and how. Many have not. In those firms, people are figuring it out on their own. That is a problem.
A written AI policy does not require your firm to adopt AI. It does not require anyone to use a particular tool. What it does is establish expectations. It tells people what is approved, what is not, and what safeguards are required. Without that, you are relying on individual judgment across every person in your office, with no shared understanding of the ethical obligations at stake.
Courts Are Not Waiting for You to Catch Up
Most lawyers have heard of Mata v. Avianca by now. Attorneys submitted a brief with fabricated case citations generated by ChatGPT, and the court imposed sanctions. That was 2023. Mata is far from the last time lawyers have gotten into trouble over fabricated citations or misstated holdings. In Flycatcher Corp. v. Affable Avenue, for example, the judge entered default judgment against the client after the attorney repeatedly submitted AI-hallucinated content despite prior warnings from the court. In Lifetime Well, LLC v. IBSPOT.COM Inc., both a New York attorney and the associated Pennsylvania counsel were sanctioned.
A database maintained by Damien Charlotin is at 1030 cases and climbing. The list includes pro se litigants, but more than 400 of the cases involve lawyers. The database covers courts worldwide, though the bulk of the cases, by far, are in the United States. The pattern across these cases is consistent: the problem is not that the lawyers used AI, it is that they used AI without verifying the output. Beyond failing to verify output, some attorneys also failed to disclose AI use when the court required it. In Kenosha County, Wisconsin, for example, the district attorney was sanctioned both for hallucinated cases and for failing to reveal AI use when the court's policy required such disclosure.
It is impossible to know whether the law firms or the district attorney's office in these cases had written AI use policies, but it is likely they did not. A written AI policy is designed to prevent exactly the kinds of errors and improper AI use that appear in most, if not all, of these sanction cases.
What an Artificial Intelligence Use Policy Should Cover
A comprehensive AI policy for a law firm addresses several areas. None of them are optional if you are serious about managing risk.
- Approved and prohibited uses. Which AI tools does the firm allow? Which are approved for specific categories of work but not for others? Which tools are off-limits entirely?
- Confidentiality and data protection. Most AI tools process data on external servers. Not all plans are created equal. A consumer-tier AI subscription is not the same as an enterprise agreement with contractual data protections. Your policy needs to address what information can and cannot be entered into these tools, consistent with your obligations under Rules 1.6 and 1.9.
- Human review requirements. AI output should never go directly to a client or a court without attorney review. The policy should specify who reviews, what they are checking for, and how that review is documented. This is not a suggestion. It is the floor.
- Disclosure obligations. A growing number of courts require or encourage disclosure of AI use in filings. Some have issued standing orders. Your policy should establish when and how your firm discloses, rather than leaving it to individual lawyers to figure out in real time.
- Training and competence. The duty of competence under Rule 1.1 includes an obligation to understand the technology you use in your practice. That is not new. Your policy should include training requirements on both the policy and the allowed tools so that everyone using AI tools understands what they are doing and what the risks are.
- Compliance and accountability. Who oversees AI use at the firm? How are policy violations handled? A policy without accountability is not a policy. It is a document.
Why I Built an AI Policy Template to Share with Law Firms
I consult with law firms on AI implementation. I serve on five ABA boards, including those covering ethics, technology, and professional development. I am a founding co-chair of the PBA Technology Committee. I write and speak regularly about AI in legal practice. From that experience, I can tell you that the number one barrier to firms adopting a responsible AI framework is not resistance. It is fear and inertia. Lawyers know they need a policy. They know they need to explore artificial intelligence. They do not know where to start, and they are afraid of violating the ethical rules when they use AI.
That is why I created a comprehensive AI policy template. It covers approved uses, prohibited uses, confidentiality protocols, human review requirements, disclosure guidelines, training expectations, and compliance procedures. It also includes a tools chart where you can document which specific AI tools your firm has evaluated and approved for each category of use.
Download the AI Policy Template
If you would like a copy of the AI policy template, enter your name and email below. I will send it to you immediately. No cost, no obligation.
Note
The artificial intelligence use policy template is designed to be customized by attorneys. No two firms have the same practice areas, risk tolerance, or technology stack, but every firm needs a starting point. If you are not an attorney, you may share the template with an attorney and have them review it, but please keep in mind that this AI policy is specifically meant for use by law firms.
Complete to Receive Jennifer’s AI Policy Template
If your firm needs help customizing this policy, evaluating AI tools, or training your team on responsible use, I am available for consulting and firm training engagements. You can reach me at jennifer@jlellis.net.