Stop Blaming AI for Old Mistakes

Many judges and lawyers are likely to assume that a citation that is close but not quite right, or a case cited for an incorrect proposition, is the product of AI. In reality, misstated precedent is hardly a new problem. When a colleague brought to my attention the case I am writing about today, she explained that a New Jersey federal court had sanctioned an attorney who filed a brief riddled with citation errors. In the end, the court determined that AI was not used; rather, an inexperienced paralegal had made the mistakes.

As my colleague told me about the New Jersey case, I remembered a matter I was researching many years ago, long before generative AI existed. In that matter, opposing counsel misstated every single case they cited. If I recall correctly, opposing counsel cited six cases, each for the opposite of the proposition for which the case actually stood. As it happened, all six of those cases helped my client and hurt theirs. At the time, I found myself wondering whether opposing counsel hadn’t read the cases or, less charitably, was lying about their content, hoping my firm wouldn’t check. If I experienced something similar today, I imagine I would assume that opposing counsel had used AI, the AI had mangled the holdings, and the attorneys had failed to read the cases they were citing. The reality is that not everyone is using AI to conduct research, and even those who are may misstate a holding or citation through simple error or intentional malfeasance.

In the memorandum order in Gutierrez v. Lorenzo Food Group, Inc., the court noted that both the defendants and the court “could not verify several quotations and citations within Plaintiff’s brief opposing Defendant’s motion to dismiss.” (citations omitted). In response, Judge Padin ordered Plaintiff to produce information related to the “problematic quotations and citations,” which led to months of effort to identify their source. Mott, the attorney representing the Plaintiff, his paralegal (Berrent), and a former associate (Hiers) provided conflicting affidavits regarding the source of the problems in the brief. As a result, the court held a hearing at which these individuals “could testify regarding their conflicted accounts of the MTD Opposition drafting process.”

In the end, the court found that Mott violated several of New Jersey’s Rules of Professional Conduct: Rules 1.1 (Competence), 5.1 (Responsibility to Supervise Attorneys), and 5.3(a) (Responsibility to Supervise Nonlawyer Assistance). Fundamentally, Mott failed to supervise either the attorneys or the non-attorneys working under him, lacked proper oversight procedures, and, as a result, allowed the errors in his brief to escape his attention.

Footnote four in the memorandum notes:

“Initially, both Defendants and the Court were unsure whether Mr. Mott or anyone at his law firm used generative artificial intelligence (“GAI”) in drafting the MTD Opposition. That speculation was driven by the fact that Plaintiff’s MTD Opposition contained real quotations attributed to the wrong cases. After months of uncertainty, the Court has reached the conclusion that no party used GAI; instead, a person made the lamentable decision to attribute quotations to the wrong cases.”

The court elaborated in the same footnote that it does not matter whether generative AI was used, because the underlying problem is the same, regardless of the source. It is the job of any attorney signing a brief to make certain that the “arguments and contentions made within [a brief] are accurate and supported by existing law.” Unfortunately, in this case, Mott failed to do as the ethics rules require, resulting in a brief with numerous errors.

Further compounding the errors, both Mott and Berrent blamed Hiers, an accusation the court found unfounded. Regardless of how mistakes come to appear in a brief, it is always the attorney(s) signing that brief who will be held responsible. Blaming others is not only a bad look; it will not help in court. It is always better for the lawyer in charge to take responsibility for the errors, because any attorney signing a court document is obligated to ensure that the document is correct. This is the case under Federal Rule of Civil Procedure 11 and corresponding state rules, along with ethics rules governing competence, supervision, candor, and general misconduct.

Part of the issue here is that it took months of investigation, a hearing, three witnesses, and conflicting affidavits for the court to conclude that the errors were not due to AI. Rather, a paralegal had misattributed quotations to the wrong cases. In addition, the brief was never Shepardized to confirm that the cases cited were still good law. Further, the supervising attorney, Mott, reviewed only the first draft. Finally, a former associate gave unclear instructions, which compounded the problem. In the end, the errors were human, start to finish.

The Errors Were Not New

Bad citations predate ChatGPT by decades. Attorneys have always cited cases that don’t support the propositions for which they are cited. They have always cited cases that were subsequently overruled. They have filed briefs without checking whether the law they relied on was still good law. These are failures of process and supervision, not failures of technology.

What happened in Gutierrez v. Lorenzo Food Group was straightforward. A paralegal was drafting substantive legal briefs. She was directed to replace out-of-circuit citations with Third Circuit cases. She misunderstood the instructions and swapped in Third Circuit citations while leaving the original quotations in place. No one cite-checked the final brief or double-checked the quotations. And, as noted, the partner in charge reviewed an early draft and signed off without checking the final document.

That is a supervision failure. Under New Jersey’s Rules of Professional Conduct, it is the attorney’s obligation to ensure that the conduct of nonlawyer staff is compatible with the lawyer’s professional obligations. The court found violations of Rules 1.1 (Competence), 5.1(a) (Supervisory responsibilities for law firms), and 5.3(a) (Responsibilities regarding nonlawyer assistance). None of those violations required a computer.

The AI Assumption Creates a Blind Spot

When the default assumption is that citation errors are AI hallucinations, two things happen. First, the investigation aims at the wrong question. In this case, the court and opposing counsel spent months trying to determine whether AI was involved before getting to the actual problem. Second, the underlying supervision failure gets obscured. If the story is “AI made up a citation,” many will blame the AI and frame the matter as a technology problem. It is important to remember, however, that AI failures are also supervision failures: the lawyer failed to review either someone else’s work or the AI’s work. If, on the other hand, the situation is that a paralegal was drafting briefs without adequate oversight and no one Shepardized the final product, that is a management problem with a long paper trail.

The court was direct about this. The opinion states plainly that whether a person or a large language model made the errors is irrelevant. It is the senior attorney’s obligation under ethical rules and Federal Rule of Civil Procedure 11 to check the content and factual claims in the brief. The attorney(s) who sign the brief are responsible.

What the Sanctions Actually Reflect

The court ordered monetary sanctions and two CLE courses (one on AI, one on ethics), to be completed within 90 days. The attorney avoided disqualification because the client wanted him to remain her counsel. It is important to remember that the core sanctions were about supervision, not technology; the problems many attorneys have with AI also come down to supervision.

The Practical Takeaway

Every brief that leaves your office is your brief. It does not matter whether a paralegal drafted it, an associate revised it, or an AI tool assisted with research. If your name is on it, you are responsible for its accuracy. And if you are the supervising attorney, regardless of whether your name is on the brief, you are responsible for supervising attorneys beneath you and their staff.

That has always been true, and it will remain true regardless of what tools your firm uses. The supervision obligations predate AI, and the failure modes in Gutierrez are the same ones that have generated sanctions, malpractice claims, and bar complaints for as long as attorneys have delegated work without adequate oversight.

The AI conversation is worth having. But it should not distract from the responsibility every attorney bears when signing a document or supervising other attorneys and staff. AI problems are relatively new, but they are not a substitute for the older and less exciting conversation about proper supervision.
