How to Deal with Artificial Intelligence When Conducting Legal Research
Over the past year, courts have had to confront an uncomfortable and increasingly common problem involving artificial intelligence and misinformation. Briefs have been filed citing cases that do not exist, or that inaccurately describe the holdings of real ones. Judicial opinions have been issued and later withdrawn after it became clear that the authority relied upon was fabricated.
In each of these situations, artificial intelligence played some role in drafting or research. In none of them was AI the core issue. The real problem was a failure somewhere in the process to confirm that a cited case actually existed and that its holding was accurately represented. In short, someone failed to check the law before relying on it.
A Recent Case from the Eastern District of Pennsylvania
A recent example illustrates the issue clearly. In a January 2026 Memorandum, Judge Kearney of the Eastern District of Pennsylvania addressed a filing that relied on multiple nonexistent or mischaracterized cases. Local Pennsylvania counsel admitted that he did not read or review the cited authority and simply signed the brief. The incorrect citations originated with New York co-counsel, where a law clerk played a role in preparing the filing.
That law clerk, who was new to the office, used artificial intelligence tools to assist with drafting without informing the attorneys responsible for the case. After the issue came to light, lead counsel terminated the law clerk and acknowledged the errors to the court.
Judge Kearney declined to impose monetary sanctions on local counsel and chose not to refer either lawyer to disciplinary authorities. Instead, he ordered the attorneys to distribute the court’s memorandum and their updated artificial intelligence policies to bar organizations and professional groups for broader dissemination. The court’s response was educational rather than punitive, though it was plainly intended to reinforce accountability and to underscore the seriousness of signing a filing without verifying its contents.
The court also noted that the law clerk relied on Lexis+ AI and LexisNexis Protégé as primary research tools. This point matters. These were not obscure consumer products. They were mainstream legal research platforms. The failure was not the choice of tool, but the absence of verification.
I must make one point on behalf of Lexis here, though. I recently spoke with a Lexis representative, and we discussed this case specifically. The representative told me Lexis had looked into the situation and found that the law firms in question did not have a Lexis account.
Judges Are Confronting the Same Problem
Lawyers are not the only ones grappling with this issue. In late 2025, two federal judges publicly apologized after issuing opinions that contained hallucinated citations generated during the drafting process. In each case, an intern or law clerk used generative AI in violation of chambers policy, and the errors were not caught before the opinions were released.
While these incidents were unusual, they were not isolated. They highlight how easily unchecked citations can move through even sophisticated workflows when assumptions are made about review and verification. As Josh Blackman observed in his discussion of these cases on The Volokh Conspiracy, judges are likely to implement additional safeguards to prevent similar errors from leaving chambers in the future.
What the Pennsylvania and Philadelphia Bar Opinion Says
Against this backdrop, the Pennsylvania Bar Association and the Philadelphia Bar Association issued Joint Formal Opinion 2024-200 addressing ethical issues related to the use of artificial intelligence. The opinion does not prohibit AI or attempt to create new categories of misconduct. Instead, it focuses on how existing professional duties apply when lawyers choose to use these tools.
Verification Is Not Optional
Most relevant here, the opinion states plainly that lawyers must ensure the accuracy and relevance of citations used in legal documents and must verify that cited authorities accurately reflect the content being referenced. Because generative AI creates content rather than merely retrieving it, the opinion emphasizes that lawyers have an obligation to read and verify cases, statutes, and other sources before relying on them.
Existing Duties Still Apply
The opinion treats this obligation as an extension of familiar duties, not a new compliance burden. Competence, candor, supervision, and truthfulness already require lawyers to understand and stand behind their work. Using AI does not shift that responsibility any more than assigning research to a junior associate or a law clerk would.
Where Things Actually Break Down
What is striking about the recent sanction and withdrawal cases is how ordinary the failure often is. The problem is rarely that a lawyer used AI to get oriented or to generate an initial draft. The problem is that the lawyer never performed the final step of opening the cited authority and reading it.
As drafting becomes faster and easier, the temptation to skip that step increases. Courts are making clear that this temptation will not be indulged.
The Role of External Verification Tools
This is where external verification tools matter, not because they solve the problem on their own, but because access is not the limiting factor many lawyers assume it is.
Google Scholar
Google Scholar remains a practical and free way to confirm that a case exists and to read the underlying opinion. It is not comprehensive and it is not perfect, but it is often sufficient to answer the most basic question that matters before anything else: does this authority actually exist, and does it say what the draft claims it says?
Benchly
Tools like Benchly are designed to assist with citation validation during drafting. Benchly integrates into Microsoft Word and focuses on identifying nonexistent or mismatched authority. It does not evaluate legal reasoning or persuasiveness. Its value lies in helping catch situations where something looks like law but is not.
Decisis for Pennsylvania Lawyers
For Pennsylvania lawyers, Decisis is a legal research platform provided as a member benefit through the Pennsylvania Bar Association. It provides access to Pennsylvania and federal case law and serves as a baseline research and verification tool for practitioners who do not maintain other subscriptions. Like Google Scholar, its usefulness here lies in reliability rather than sophistication.
What the Ethics Opinion Ultimately Requires
None of these tools replaces professional judgment. The ethics opinion does not suggest that technology can cure technological risk. What it makes clear is that lawyers already have multiple ways to verify their work, and that failing to do so is not excused by time pressure, delegation, or reliance on AI.
Joint Formal Opinion 2024-200 emphasizes that lawyers must exercise independent review of AI-generated content and remain responsible for its accuracy, particularly where citations and factual assertions are involved. It also underscores that when false or invented authority finds its way into a filing, the lawyer has an obligation to take remedial measures, just as they would if the error came from any other source.
A Simple, Unchanged Rule
The lesson from these cases is not that artificial intelligence is uniquely dangerous. It is that it magnifies an old risk. When drafting becomes easier, skipping verification becomes easier too. Courts are responding by reasserting a principle that predates every tool now in use.
If you cite it, you need to have read it. If you file it, you need to stand behind it. No tool changes that.