When AI Cites AI: The Misinformation Loop

Pennsylvania does not require lawyers to disclose AI use in court filings. (https://perma.cc/EP4V-U2CC) There is currently no statewide mandatory disclosure rule, though some individual judges have standing orders requiring disclosure, and the Pennsylvania Supreme Court has a rule covering court personnel. The Pennsylvania Bar Association and Philadelphia Bar Association issued Joint Formal Opinion 2024-200, which is advisory, not binding.

Google’s AI Overview disagreed. Until earlier this week, a search for “Pennsylvania AI disclosure lawyers” returned a summary claiming that, as of August 2024, Pennsylvania mandates explicit disclosure of AI use in all court submissions. It called transparency a filing requirement. None of that was true.

Joe Patrice covered Google’s incorrect summary on Above the Law on May 6. Patrice traced the error to a Paxton.ai vendor post titled 2025 State Bar Guidance on Legal AI. (https://perma.cc/R4BT-3MRH) A second vendor blog post, at Twinladder.ai, also lists Pennsylvania under a heading called “States With Mandatory Requirements,” alongside California and New York. (https://perma.cc/7EM7-R5MR) While both California and New York are considering various rule changes, as of this writing neither has a statewide rule mandating disclosure of AI use in court filings. (https://perma.cc/EP4V-U2CC) (Attorneys should, of course, check whether the specific courts they appear in have their own disclosure requirements.)

At some point, it would seem, Google’s AI ingested the wrong claim about Pennsylvania and served it back to searchers as the law of the Commonwealth. Fortunately, Google appears to have corrected the summary since Patrice’s piece ran; as of May 9, 2026, the false claim is gone. The vendor posts that fed it are still up.

Why Patrice’s Story Worries Me

Here is what worries me beyond what Patrice wrote. We do not know how the Paxton.ai post was originally written, nor do we know the basis for Twinladder’s claims. My guess, though, is that AI helped draft both posts and that no one verified the output. That part is speculation. The loop that follows is not.

Whoever wrote the posts could have stopped this at step one. Reading the Pennsylvania Joint Formal Opinion 2024-200 makes clear it is advisory, not binding. A basic search of the Pennsylvania Supreme Court’s rules turns up nothing requiring lawyers to disclose AI in court filings. The same is true for California and New York. Any number of quick verification steps would have killed the false claims before they left the draft stage. It appears no such verification occurred. Failing to verify factual claims from AI is what starts misinformation loops.

The Lack-of-Verification Loop

The pattern works like this. Someone uses AI to draft an article. The AI invents or distorts something. No one checks, or whoever does check does not check thoroughly enough. The article goes online. Other AI tools index it. The next person who searches finds an AI-generated summary that pulls from the bad article, or they simply copy the claim from the article itself. They write their own article repeating the claim. That article gets indexed too. Now the AI sees the same claim in multiple places and treats the repetition as confirmation. The error gains authority every time it is copied.

It gets worse. Future models will be trained on all of the incorrect information: the original wrong post, the AI summaries that repeated it, every downstream article that picked up the error, and any new content built on top. The mistake gets baked into the next generation of models, where it will be even harder to dislodge.

A wrong recipe is annoying. A wrong ethics rule can sink a lawyer. A Pennsylvania attorney who reads the Google summary might add a disclosure they do not technically owe, which is a relatively small harm. The larger harm comes when the misinformation could actually hurt the reader. A lawyer in a different state who reads “PA requires disclosure” and assumes their state is similar might miss something real about their own state’s requirements. A CLE presenter who repeats the claim in a slide deck spreads it further. A vendor that puts it in a marketing one-pager spreads it further still. Every repetition strengthens the bot’s confidence. This is the circle I worry about: one that both creates misinformation and then spreads it, misinformation that could affect our practice, or even our society, in any number of ways, depending on the claims.

Human in the Loop Isn’t Enough

“Human in the loop” is the standard answer for dealing with potential AI misinformation. But that answer glosses over both the problem and the solution. Verification that means clicking the first source the AI cites and skimming it does not catch errors that have been laundered through three or four sources before reaching you. By the time you check, the false claim looks confirmed. Multiple sources agree. The Google summary agrees. The vendor blogs agree. They all likely trace back to the same wrong sentence, but you would have to dig to see that.

The fixes, when they happen, are reactive. Google probably patched the Pennsylvania summary after a senior editor at a major legal blog called it out by name. Most errors will not get that treatment. The next bad summary on a less-publicized question stays up until someone with a platform happens to notice.

The Real Fix Takes More Time Than One Click

The fix is harder work than most people want to do, as we already know from the many cases of lawyers failing to check citations. But the answer is straightforward. Use reliable sources. Do not stop at one unless it is the original source. Go to the original source whenever possible. For a legal rule or case, that means the actual rule, the actual opinion, the actual statute. Not a vendor blog. Not a Google summary. Not an article summarizing a vendor blog. Read the rule itself. If you cannot find the rule, that is a signal that something is wrong. Yes, this is more work. But it is the work the profession has always required. AI has not changed our obligations. It has just made it easier to skip the required work and harder to notice when you have skipped it.

Don’t Be Discouraged from Using AI to Write

I frequently use AI to draft my blog posts, and I don’t want to discourage people from doing the same. But the AI-drafted post is never the final result. I substantially rewrite and fact-check the document. If I don’t already know the facts, I check them outside of AI, always going to the most authoritative source. That means the original document whenever possible. If there is no original document (for example, an article with no primary source), I check several sources and make certain that those sources are reliable. Unfortunately, from time to time even that might not be enough. Sometimes the press gets it wrong, and in such cases it can be hard to get to the facts. But it is still our job to try, and we need to do it long before the final article or post ever gets shared. As for the press getting facts wrong? Well, that’s a blog post for another day.

Note: You will notice that I used Perma links in this article as well as the original links. Perma.cc preserves a snapshot of a web page as it existed at a particular moment. That way, if the mistaken information addressed in this story is eventually removed from the sites in question, you will still be able to see what was on those sites at the time I wrote this post.
