Pro Se Litigants: The Other Half of the AI Hallucination Problem

First, I want to correct something I wrote in earlier posts. I have sometimes turned to Damien Charlotin’s AI Hallucination Cases website to discuss how many lawyers have gotten into trouble for using fabrications in court filings. What I did not realize is that approximately half of the cases he reports involved not lawyers but pro se litigants. I would like to thank James Rice for very kindly pointing out my error. I have done my best to go through my site and make corrections.

Moving on from my own mistakes, does this shift in the statistics mean the problem is only half as bad as we thought? Well, I still see fabricated cases from lawyers every day, so no, I don’t think the problem is any less serious. But it does tell you one thing: I am willing to own up to my mistakes.

This leads me to something I want to emphasize. When a lawyer using AI mistakenly includes a fabricated case, if it is a first offense and the lawyer admits it immediately (or, better yet, sends a correction to opposing counsel and the judge as soon as the problem comes to light), the sanctions, if any, tend to be much lower. This is why I advise lawyers to own up to their errors right away. Those errors might be due to AI, or they might be simple typographical errors; these days, it is hard to know. The obligation to correct misstatements remains the same regardless of the source of the misstatement.

If you want to see an example of how not to respond when questioned about AI fabrications, here is a video from a New York appellate court; the AI discussion starts at about 3:18. After the discussion with defense counsel, plaintiff’s counsel is asked to address the issue. (Video source: NYS Unified Court System.)

Pro Se Litigants Using AI Have Become a Serious Issue

Second, I want to use this opportunity to write about pro se litigants and their use of AI. I once went up against a pro se litigant. It was, I have to tell you, one of the strangest experiences of my life. It also made the case substantially more expensive for my client, despite the fact that I cut my fee repeatedly. I just felt terrible about the amount of work it took to argue against the absurd legal theories the plaintiff insisted on advancing. The case was quite a few years ago, and fortunately it ended with a summary judgment motion I filed after the plaintiff appealed their loss at the conclusion of the arbitration process. During that arbitration, I had to argue biblical theory and the definition of the word “whore” in front of the panel. Not something I will soon forget. The pro se litigant’s original complaint was somewhat cogent, but the appeal was another matter: I actually had to include a preliminary objection over “scandalous and impertinent matter” in my response.

I cannot begin to imagine the things that the pro se plaintiff in my case would have come up with had they had access to artificial intelligence. Fortunately, they did not.

The Confidence of AI Fools Many

Now pro se litigants do have AI, and they are using it to its fullest and messiest extent. Most of them are probably using free or inexpensive consumer tools, which are more likely to fabricate entire cases than purpose-built tools; studies, such as the one from 8am, show that purpose-built tools are generally better for legal work.* Tools like Claude, ChatGPT, and yes, even Perplexity, will make up cases out of whole cloth. So when a pro se individual, who does not understand the law the way a lawyer does, finds ChatGPT confidently describing a legal position that is entirely wrong, or even just slightly wrong, it is no surprise that they believe it, especially when the AI reassures them and makes them feel better about their case. AI may even be convincing pro se litigants that they have cases when they don’t.

Any lawyer reading this knows why pro se litigants’ improper use of AI is such a problem. I would not be surprised if more pro se litigants are bringing cases in which they are confidently wrong because of AI. This, in turn, means that just as in my case all those years ago, it is critical to be ready to deal with pro se litigants. Just as we must verify all factual information and cases in our own research, we must verify the factual information and citations put forward by pro se litigants. It is, after all, our job to protect our clients, and that includes protecting them from cases built on AI fabrications. The sooner we identify the problems in a pro se litigant’s case, the faster those problems can be resolved, perhaps with the case dismissed, saving our clients money on unnecessary fights in court.

Verifying Your Colleague’s Citations and Facts Remains Critical

Of course, we need to be wary not only of pro se litigants using fabricated cases and facts but also of opposing counsel doing so. Which is why, regardless of the opposition, it has become even more important to double-check the work of our opponents, or, if we are co-counsel or associated counsel, the work of the people on our own team.

*Note: Purpose-built legal tools still hallucinate, reassure, and do all of the unfortunate things for which AI is known; they just fail differently. Purpose-built legal research tools do not tend to make up cases that don’t exist the way consumer tools do. Instead, they are more likely to misstate a holding or identify a case as relevant when it isn’t. They are also just as likely to draw inferences you never asked them to make.
