Four Strategies for Legal Professionals to Reduce AI Hallucinations

Quite a few attorneys have gotten into trouble, and in some cases been publicly humiliated, for citing nonexistent cases obtained from artificial intelligence tools such as ChatGPT. Most of these attorneys claimed they were unaware that AI could fabricate cases. There are several problems with this excuse. First, attorneys are obligated (in most, if not all, jurisdictions) to understand and properly use any technology they employ in their practice. Second, failing to confirm that a case exists means that no one involved in drafting the document read the case, and failing to read the case means no one verified that it says what they claim it says. This is simply poor lawyering. The question is, how do attorneys ensure that the cases they cite are real? The simple answer is to find and read each case. But the underlying questions remain: why do hallucinations occur, how can they be identified, and what steps can lawyers take to minimize the risk?

What Is a Hallucination, and Why Does Artificial Intelligence Make Up Information?

A hallucination occurs when an artificial intelligence fabricates an answer. According to OpenAI, one reason AI hallucinates lies in how models are trained: essentially, the AI is rewarded for guessing rather than for withholding an answer. As OpenAI’s research paper states, “language models hallucinate because standard training and evaluation procedures reward guessing over acknowledging uncertainty.” Beyond training, AI has other problems that can lead to hallucinations. For example, because the data AI is trained on comes from human beings, it inherits the same biases humans tend to have, which affects the quality of both the data and the responses.

Tips for Preventing and Dealing with AI Hallucinations

Once lawyers understand that AI can and will make up answers, it is critical to find ways to minimize the risk. How can we achieve this? Here are some suggestions.

  1. Ensure you choose the correct tool for the job. There are many options available. When performing legal research, it is wise to use a dedicated legal research tool, one that is more likely to contain a database of the information you need. Keep in mind that while a legal research tool such as Lexis or Westlaw is unlikely to fabricate the existence of a case, it can still misinterpret the case’s content. For example, when using AI in a legal research tool, the author found that while all the cases existed and some were relevant, others did not state what the AI claimed. This is why it remains critical to read the cases.
  2. Verify information with alternative sources.
    • Do not prompt the same AI to confirm the information’s accuracy. Often, the AI will simply double down and insist the information is accurate, because it is not proficient at evaluating the information it provides.
    • Many AIs provide links back to their sources. Check the source to see whether it supports what the AI claims, and evaluate the quality of the underlying source as well.
  3. Craft effective prompts. Prompting is how users instruct AI on what they want it to do. Learning to write the correct prompt for your needs can be challenging. Here are some tips on how to improve your prompts.
    • Specify the type of sources you want the AI to use to gather information. You can apply other constraints as well. This minimizes the chance of the AI making a mistake.
    • Be clear about what you want and tell the AI your goal. Provide context so the AI can understand the purpose of your request.
    • Ask follow-up questions. AI is conversational, and you may need to ask several questions to get the answer you need.
    • Practice makes perfect. It probably took you some time to learn how to search Lexis and Westlaw properly, or to write a Google search that yielded the right results without overwhelming you. AI prompting is similar. Give yourself time to practice.
    • Look for sample prompts. For example, Microsoft Copilot has a prompt library, and there are many prompt libraries online, including some specific to law practice.
    • Consider taking a course on prompting.
  4. Do not forget that human beings need to remain involved in legal work. While AI may perform certain tasks autonomously, it is important to periodically audit the AI’s results and have humans check the work. The lawyers who failed to check the legal opinions they cited are a prime example of why.

Do Not Fall into the Trap of Overestimating AI

It is critical to remember that AI, like all technology, is imperfect. If you plan to use AI in your legal work, make sure you understand its risks and benefits, and take steps to mitigate the risks. Do not rely on AI so heavily that you fail to act as a competent member of the legal profession. Relying on incorrect information provided by AI is a trap you can easily avoid if you are an informed and careful user.

*This article was originally published by the Legal Technology Resource Center of the Law Practice Division of the American Bar Association.
