One of the things I frequently warn attorneys against is trusting AI to summarize things properly. Sometimes, when summarizing the law, AI gets it completely wrong. One example is the recent Pennsylvania Superior Court opinion in Saber v. Navy Federal Credit Union.
A colleague of mine sent me the opinion, and I asked Claude to summarize it so I could decide whether I wanted to write about it. Based upon Claude’s explanation, I thought, yes, great post! Then I read the opinion and realized that Claude was wrong.
Claude’s Summary of the Opinion
Here is the relevant part of the summary for our purposes:
The AI hallucination angle — the really notable part: The court found that Saber’s appellate brief cited multiple cases that don’t exist, including a fabricated version of D’Happart v. First Commonwealth Bank with incorrect reporter citations. The court concluded Saber likely used generative AI to draft his brief, noting that his citations to “several other cases which ‘do not exist’” suggested AI hallucinations. The court cited the Colorado Court of Appeals decision Al-Hamim v. Star Hearthstone, LLC for the proposition that GAI tools are known to fabricate case names and citations, and also referenced Sanders v. United States from federal court.
The holding: The court found Saber waived his claims on appeal because his brief failed to cite pertinent, actually existing authority, failed to connect PUCC provisions to the record, and failed to develop cognizable arguments under Pa.R.A.P. 2119(a) and (c). The court also explicitly warned that AI-generated legal filings without verification of accuracy may constitute misstatements of legal authority and undermine the reliability of the law.
This is a great addition to your growing collection of AI sanctions/hallucination cases — right up there with Kettering Adventist and Deutsche Bank v. LeTennier.
What Claude Got Wrong
Yes, the Court ruled that the appellant waived his claims on appeal. No, the Court did not rule that he waived his claims because he used a fabricated case. While the appellant did cite a fabricated case, and the Court noted this and provided a detailed analysis in a footnote, the appellant waived his claims under a much simpler principle: because “…Appellant has failed to develop cognizable arguments with discussion and citation to relevant authority or where in the record the matter appears, we conclude that he has waived his claims on appeal.”
I went back to Claude, as I often do when AI makes a mistake, and told it that it misread the case. It responded:
You’re right, and that’s an important distinction. Thank you for correcting me. The court didn’t find waiver because of AI hallucinations specifically. It found waiver because Saber failed to develop cognizable arguments under Pa.R.A.P. 2119(a) and (c) — he didn’t cite pertinent authority that actually existed, didn’t connect the PUCC provisions to the record, didn’t reference where in the record the matters appeared, and didn’t develop his legal arguments. The AI hallucinations were evidence of that failure, not the independent basis for the penalty.
The Relevance of AI in the Opinion
AI is still relevant to the opinion. Appellant cited a case that does not exist for a principle that he needed to be true to sustain his case. Essentially, appellant wanted to believe that he could have a car without paying for it because he found a fabricated citation that told him so. It also appears that appellant cited provisions of the PUCC but failed to show the relevance of those provisions. My hypothesis is that appellant found these provisions through AI and the AI hallucinated their actual content, or that appellant simply failed to understand the law.
I suspect, though it is not mentioned in the opinion, that appellant is pro se. It is common for AI to confidently misstate the law, even when it provides an accurate citation to an actual case, code, or statute. This fools many lawyers as well as many laypeople. It is that very confidence that is part of the problem.
Conclusion: Always Check Summaries of the Law
Whether you are using a consumer-based AI or a well-known and respected legal research tool with an AI component, always read the actual case, and never double-check cases with AI. Every AI hallucinates and makes assumptions; it is simply how they work. Instead, use the traditional versions of the research tools, e.g., Lexis, Westlaw, Fastcase, Decisis, or even Google Scholar.