When AI Hallucinations Meet the Courtroom: How One Lawyer’s Repeated Failures Led to Default Judgment

The Case of Flycatcher v. Affable

I must admit that despite my fondness for covering artificial intelligence, and the fact that I use AI to help me craft my posts, even I am having trouble keeping up with the number of lawyers getting in trouble for hallucinated cases. I knew I hadn’t seen everything yet, but this particular case, Flycatcher v. Affable, presents an unusual procedural history.

On February 5, 2026, in Flycatcher Corp. v. Affable Avenue LLC, U.S. District Judge Katherine Polk Failla entered default judgment as a sanction against Affable Avenue LLC. Affable lost not because the merits were weak (that was never decided), but because its lawyer repeatedly cited hallucinated cases.

A Cautionary Case Study in Generative AI

The problems seem to have started in June 2025, when attorney Steven Feldman filed a motion to dismiss on behalf of his client, Affable Avenue LLC. The brief contained at least thirteen cases that didn’t exist at all, and eight more that existed but didn’t contain the quotes Feldman attributed to them.

The hallucinated cases came to light when attorney Joel MacMull, representing co-defendant Top Experience Company, noticed something was off. After MacMull discovered the problems with the citations, he notified Feldman via email. He requested that Feldman fix the issues by 5:00 p.m. the next day or he would notify the Court.

Problems Verifying Citations

I would be remiss if I failed to note that legal research databases can be expensive. I do not know Feldman’s financial situation, but I do understand that some lawyers cannot afford tools such as Lexis or Westlaw. That said, there are other ways to verify cases. Some are low-cost, and some, such as Google Scholar, are free. Apparently, Feldman expressed concerns about the cost of such databases and noted that he was “unable to verify certain citations.” He also explained that he had used various “public search engines and internal tools” to assist with “citation formatting and cross-checking.” Feldman also said that he chose to “accept[ ] suggested citation formats or assum[e] that references matched cases in my repository, without realizing they were incorrect.”

Eventually, MacMull submitted a letter to Judge Failla on June 26, 2025, alerting her to Feldman’s citation errors.

The Court Steps In: An Order to Show Cause

Judge Failla issued an Order to Show Cause directing Feldman to explain by July 10, 2025, why his brief shouldn’t be stricken and why he shouldn’t be sanctioned under Federal Rule of Civil Procedure 11. The Court made clear that attorneys have a professional obligation to “read and confirm the existence and validity of the legal authorities on which they rely,” especially when using AI tools that are known to hallucinate. Feldman filed his response on July 11, 2025.

Feldman’s Response

I do not wish to pile on Mr. Feldman. I rarely use names in blog posts or PowerPoints, but it is clear to me that those of us who analyze artificial intelligence and ethics are going to use this case for years to come. I do not wish to mock the language he used, but it is, I am sorry to say, unusual.

“Your Honor, in the ancient libraries of Ashurbanipal, scribes carried their stylus as both tool and sacred trust—understanding that every mark upon clay would endure long beyond their mortal span. As the role the mark (x) in Ezekiel Chapter 9, that marked the foreheads with a tav (x) of blood and ink, bear the same solemn recognition: that the written word carries power to preserve or condemn, to build or destroy, and leaves an indelible mark which cannot be erased but should be withdrawn, let it lead other to think these citations were correct.”

The meaning of this passage is unclear.

In addition to the unusual language noted above, in his Response, Feldman sought to distinguish his conduct from Mata v. Avianca, in which lawyers were sanctioned for submitting ChatGPT-generated fake cases. Unfortunately, a Google search revealed that the quote Feldman used came not from Mata itself but from a blog post analyzing that case.

Since the lawyers in Mata were sanctioned for “failure to be forthcoming, withdraw the prior submissions, and continue to give legitimacy to fake cases…,” Feldman undermined his own argument by including yet another incorrect citation.

On July 18, Judge Failla denied Feldman’s request to withdraw and instead scheduled a conference in which Feldman could explain himself in person. She wrote, “The Court wants to hear directly from Mr. Feldman, so that it can give him the opportunity to—as he puts it—’prove [himself] worthy to carry the stylus once more in service of justice and truth.'” The judge also wrote, “Mr. Feldman must know how to verify that a case exists on Westlaw without the added benefit of AI tools… All lawyers must know how to do it. Mr. Feldman is not excused from this professional obligation by dint of using emerging technology.”

The Proposed Reply Brief

On August 8, 2025, while waiting for his sanctions conference, Feldman filed a letter requesting permission to submit a reply brief in further support of Affable’s motion to dismiss. He also filed the proposed reply brief itself. MacMull reviewed the brief and found more problematic citations. At this point, Judge Failla told Feldman not to file any more explanations and noted she was considering “a range of sanctions,” including default judgment.

The Sanctions Conference

On August 22, 2025, Judge Failla held a conference in which she placed Feldman under oath and began asking questions. The Court wrote that Feldman’s responses “grew increasingly discursive and were often entirely unresponsive.” The transcript is filled with the judge trying to get answers:

“Sir, you are not answering my question.”

“Sir, once again, I’m really just asking you to answer my questions.”

“I keep asking you, and I’m not sure why you are refusing to answer me.”

“Sir, you are not answering my question. I’m not sure how many ways I can ask it.”

“But you are still not answering my questions, which is getting to the point of being frustrating.”


In the end, this is what the Court determined happened during the case. Feldman would collect cases from various sources and store them in electronic folders. When writing a brief, he’d pull from these folders. He’d use Google Scholar to try to verify citations, but if a case was unreported and only available on Westlaw, he preferred not to use the Westlaw citation. Instead, he’d try to find another case that cited the unreported case and cite it that way, a process that “tells you what some other court thought the case said” but is “definitely not” a legitimate way of cite-checking.

Before submitting his motion to dismiss brief, Feldman admitted he “didn’t have the time” to go to a law library to use Westlaw or Lexis. So, he ran his brief through three rounds of AI review: first through vLex, then through Paxton AI for cite-checking, then through AI again to “check again.” At no point did Feldman himself cite-check the brief. When the Court put it to him directly, “You didn’t fully cite check the brief, before you submitted it to me,” Feldman responded, “Correct, I did not.”

Throughout the hearing, Feldman tried to minimize his responsibility. When the Court said “one-quarter of your cases were nonexistent hallucinations,” Feldman corrected her: “Fourteen out of 60 cited cases.” Judge Failla responded, “The Court recognizes the mathematical truism that 14 out of 60 is less than one-quarter, but the fundamental point remains that it is 14 fake cases too many.”

The Court’s Ruling: Terminal Sanctions

In her opinion, Judge Failla found that Feldman violated Rule 11 “repeatedly and brazenly, despite multiple warnings from the Court and fellow counsel.” He submitted fake cases and misattributed quotes in his motion to dismiss brief. When ordered to show cause, he relied on AI to draft his response and included another faulty citation. Then, while awaiting a sanctions hearing, he spontaneously filed a proposed reply brief with yet another fake citation.

The Court found Feldman acted in bad faith. He either knew or “consciously avoided learning” that using AI as he did would generate faulty citations. Even after the Court explicitly warned him about the risks, he continued submitting problematic documents.

In making the decision to apply terminal sanctions, Judge Failla weighed five factors:

1. Was the misconduct intentional bad faith? Yes.

2. Did it prejudice the other parties? Yes, the Court and other parties expended significant resources investigating and responding to Feldman’s faulty submissions.

3. Was it a pattern rather than an isolated instance? Yes, three separate submissions with errors, despite multiple warnings.

4. Was the misconduct corrected? No. Feldman claimed he would fix his mistakes, but he never did. Instead, he kept making new ones.

5. Is further misconduct likely? Yes. Feldman offered no convincing plan to prevent future errors. His approach throughout was to minimize responsibility and offer improbable explanations.

The Court concluded that even if you accepted every word of Feldman’s explanation for how he drafted his documents (and the Court did not), he still violated Rule 11 by submitting cases without reading them. As the Second Circuit held in Park v. Kim, attorneys must at minimum “read, and thereby confirm the existence and validity of, the legal authorities on which they rely.”

As noted previously, in the end, Judge Failla entered default judgment against Affable Avenue LLC. Feldman’s client lost, not on the merits, but because its lawyer couldn’t stop filing fake citations.

The Court also granted opposing counsel’s request to seek attorneys’ fees from Feldman personally under 28 U.S.C. § 1927, which allows courts to sanction attorneys who “multiplie[d] the proceedings in [a] case unreasonably and vexatiously.”

The Broader Implications

This case is important for several reasons. As more and more attorneys get in trouble for using hallucinated cases, judges are going to become less patient. As Judge Failla wrote, “Mr. Feldman is not excused from this professional obligation by dint of using emerging technology.”

I am a frequent speaker on issues related to AI and hallucinations, so I know that the American Bar Association, the Pennsylvania Bar Association, the Pennsylvania Bar Institute, and every other organization that provides continuing legal education across the country is addressing this issue. Numerous ethics opinions discuss how to deal with the potential for hallucinated cases. Mata may have been a surprise in 2023, but the problem shows no sign of slowing down.

What is especially notable about this case is that it demonstrates that the sanction for AI misuse can extend beyond the attorney to the client. It is our job to protect our clients. Again, I do not wish to be unduly harsh to Mr. Feldman, but this conduct falls below the minimum standard of professional competence. Rule 1.1 requires us to be competent. Reading cases and confirming that they exist and say what you claim they say is part of basic competence.

Last, as we have seen repeatedly, it is not the initial mistake that is the problem. We are human beings, and mistakes happen. Refusing to take accountability, refusing to adjust conduct to fix the problem, and being dishonest with the court: that is what leads to the most serious sanctions. Feldman wasn’t sanctioned for one mistake. He was sanctioned for a pattern of behavior that showed he was either unable or unwilling to learn from his errors. Even after the Court explicitly warned him, even after opposing counsel documented his mistakes, even after he knew he was facing potential sanctions, Feldman kept making the same mistakes.

The Court’s opinion notes that Feldman “has not, and apparently cannot, learn from his mistakes.” That’s a damning assessment.

What Lawyers Should Learn from Flycatcher v. Affable


If you’re a lawyer using AI tools, whether for research, drafting, cite-checking, or anything else, here’s what I suggest you should take away from this case:

1. You are responsible for every citation in your brief. If you can’t verify a citation, don’t use it.

2. AI cite-checkers are not a substitute for actual cite-checking. They’re a tool that might help, but they can also introduce errors. You need to verify the final product.

3. Read your cases. All of them. Not just the headnotes. Not just what another case says about your case. The actual case you’re citing.

4. If you make a mistake, fix it immediately. Correct the record, notify the court, and move on. Prompt correction is much less likely to lead to serious sanctions; there may be no sanctions at all.

5. Don’t blame your tools, lack of money, lack of staff, or lack of time. Lawyers have always had these issues.

6. If opposing counsel tells you that you’ve cited fake cases, take it seriously. Check, and if you find problems, notify the court promptly.

7. If a court asks you about your use of AI, be honest. If the court you are in has an AI disclosure policy, follow it.

8. Last, please remember, the tools aren’t the problem. Used properly, artificial intelligence can be a boon to the legal profession. Used improperly, as we see here, it can be a disaster.
