On the same day in February 2026, two federal courts looked at AI and privilege and came to opposite conclusions. The question both addressed matters to every lawyer using artificial intelligence: does putting information into an AI tool waive privilege? The answer depends on which courtroom you are in, and, in this author’s opinion, on whether the judge understands how the technology actually works.
The SDNY Ruling: United States v. Heppner
On February 10, 2026, Judge Jed Rakoff of the Southern District of New York ruled from the bench in United States v. Heppner, No. 25-cr-00503, that 31 documents a criminal defendant generated using the consumer version of Anthropic’s Claude were not protected by attorney-client privilege or the work product doctrine. He issued a written opinion on February 17 elaborating on his reasoning.
Here are the facts. Bradley Heppner, a financial services CEO facing securities fraud charges, used the free consumer version of Claude to research his legal situation. He fed in information he had received from his attorneys at Quinn Emanuel, generated dozens of documents, and shared those documents with his defense team. When the FBI searched his home and seized his devices, they found the AI-generated documents. The government moved for a ruling that they were not privileged.
Judge Rakoff agreed. Claude is not an attorney, so no attorney-client relationship can exist between a user and an AI platform. The purpose of Heppner’s communications with Claude was not to obtain legal advice, since Claude itself disclaims providing legal advice. And Heppner had no reasonable expectation of confidentiality because Anthropic’s privacy policy permits the company to collect user inputs and outputs, use that data to train the model, and disclose it to third parties including “governmental regulatory authorities,” even, the court noted, “in the absence of a subpoena compelling [Anthropic] to do so.”
On work product, the court held that the doctrine did not apply because the documents were not prepared by or at the direction of counsel. Heppner acted on his own. Judge Rakoff did note that if counsel had directed Heppner to use Claude, the tool “might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protections of the attorney-client privilege.” That door was left open.
The Michigan Ruling: Warner v. Gilbarco
That same day, Magistrate Judge Anthony P. Patti of the Eastern District of Michigan issued a written ruling in Warner v. Gilbarco, Inc., reaching the opposite conclusion on work product. It bears noting that Magistrate Judge Patti handles discovery disputes every day: privilege logs, work product fights, ESI protocols, protective orders. That expertise shows in the opinion.
Warner is a civil employment discrimination case. The plaintiff, Sohyon Warner, is representing herself. That is a critical distinction from Heppner. In Heppner, the defendant was a client who chose to use AI on his own, separate from his lawyers. Warner is not just the client. She is acting as her own counsel. She is the one making the strategic decisions, preparing the case, deciding what to research and how to frame it. Her use of AI is her litigation preparation. As a result, unlike in Heppner, there is no gap between the person using the tool and the person managing the case.
Warner had already generated discovery disputes, including an earlier order prohibiting the upload of confidential documents to any AI platform. This time, the defendants sought production of “all documents and information concerning [the plaintiff’s] use of third-party AI tools in connection with this lawsuit.”
Magistrate Judge Patti denied the request. The information was not discoverable under Rule 26(b)(3)(A), which protects documents “prepared in anticipation of litigation or for trial by another party or its representative.” He also found the request was not relevant, or at best not proportional to the needs of the case under Rule 26(b)(1). Two independent grounds, each sufficient on its own.
The waiver analysis is where the contrast with Heppner is sharpest. In Warner, the Defendants argued that by using ChatGPT, the Plaintiff waived work product protection. Magistrate Judge Patti rejected this, and he did so by drawing a distinction that Judge Rakoff did not make: attorney-client privilege waiver and work product waiver are not the same thing. Citing the D.C. Circuit in United States v. American Telephone & Telegraph Co., the court explained that “while the mere showing of a voluntary disclosure to a third person will generally suffice to show waiver of the attorney-client privilege, it should not suffice in itself for waiver of the work product privilege.” Work product waiver requires disclosure to an adversary or in a way likely to reach an adversary’s hands.
Next, the Warner court went further, holding that AI is not a third person in the first place. “ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background.” Defendants could not get to the waiver question because there was no “person” to disclose to. The logic follows: work product waiver is a higher bar than attorney-client waiver; even voluntary disclosure to a third person is not enough on its own; and AI is not a person. Each of those conclusions is independently fatal to Defendants’ argument.
The court agreed with Warner’s argument that Defendants’ motion “asks the Court to compel Plaintiff’s internal analysis and mental impressions, i.e., her thought process, rather than any existing document or evidence, which is not discoverable as a matter of law,” and that the request was “a fishing expedition” seeking “intrusive post-discovery production based on speculation about what might exist in Plaintiff’s internal drafting process, untethered from Rule 26 relevance.” The court noted that Defendants’ theory was “supported by no case law but only a Law360 article posing rhetorical questions.” In a footnote, the court approvingly cited Warner’s observation that “no cited case orders the production of what Defendants seek here: a litigant’s internal mental impressions reformatted through software.”
Elsewhere in the same opinion, in a different context, the court applied the same underlying principle. Citing Upjohn for the Supreme Court’s observation that “[f]orcing an attorney to disclose notes and memoranda of witnesses’ oral statements is particularly disfavored because it tends to reveal the attorney’s mental processes,” Judge Patti noted that “even attorney interview notes of fact witnesses (if any exist), which are inevitably funneled through the attorney-interviewers’ ears, minds, fingers and/or voices (if dictated), are also protected as work product.” The principle is the same whether information is funneled through an attorney’s mind and fingers or through a litigant’s thought process and a software tool. The medium does not change the analysis.
The court found that Defendants had produced no evidence that Warner had actually violated the protective order by uploading confidential documents to an AI platform, despite what the court called an “inordinate amount of questioning about Plaintiff’s use of AI” during her deposition. The court’s assessment was blunt: “Defendants’ preoccupation with Plaintiff’s use of AI needs to abate.”
AI Is Not Social Media
In this author’s opinion, the Heppner decision has a serious weakness. Judge Rakoff’s confidentiality analysis applies a framework that might make sense for social media but does not fit the way AI tools actually work.
Those of us who have worked on social media discovery issues for years have watched courts navigate the tension between the public nature of social media and the limits of discovery. Even in that context, where people are voluntarily posting information for others to see, courts have consistently pushed back on overbroad requests and fishing expeditions. The reasonable expectation of privacy in social media cases turns on what the user actually shared and with whom, not on a blanket reading of Facebook’s terms of service.
AI is a completely different animal. When someone types a prompt into Claude or ChatGPT, they are not publishing anything. They are not posting to a timeline or sharing with friends or followers. They are having a one-to-one interaction with a software tool in what is functionally a private workspace. The entire design of the product is built around individual, private use. This is not social media. AI chats are not meant to be public.
Judge Rakoff’s confidentiality finding rested heavily on Anthropic’s privacy policy, which states that user inputs may be used to train the model and may be disclosed to third parties and government authorities. From that, the court concluded that Heppner had no reasonable expectation of confidentiality. But this analysis does not account for how the technology actually works.
Even on the free consumer version of Claude, users can turn off training in their settings. When training is enabled, the data is anonymized and used to improve the model. It is not published, not searchable, not accessible to other users. Anthropic also offers enterprise and API versions that do not train on user data at all and include contractual confidentiality protections. The Heppner court did not appear to consider any of these distinctions.
This is where Judge Rakoff’s reasoning runs into a bigger problem. Every major cloud service has privacy policies that reserve similar rights. Google, Microsoft, Apple, Dropbox, and others all include language permitting disclosure to government authorities and third parties. Beyond that, federal law requires these providers to scan for child sexual abuse material (CSAM) and report findings to the National Center for Missing and Exploited Children. AI platforms have similar safety processes: automated systems flag content involving potential harm to children, threats of violence, and similar concerns, and in some cases that content can be escalated to human review. These safety and legal compliance obligations exist across the entire cloud ecosystem.
No court has ever held that these processes destroy the reasonable expectation of confidentiality for purposes of privilege. If Rakoff’s standard were applied consistently, uploading a privileged memorandum to Google Drive or sending it through Gmail would also waive privilege, because those services have the same types of terms and the same types of legal obligations. That result would upend decades of settled practice. But that is where the reasoning leads if you do not distinguish between a privacy policy’s legal boilerplate and the reality of how a service is actually used. Taken to its logical conclusion, Rakoff’s framework would make it essentially impossible to maintain privilege on any document stored or transmitted through any cloud-based service.
Judge Rakoff is one of the most respected federal judges in the country, and the Heppner opinion is not unreasonable on its face. But the confidentiality analysis would have benefited from a deeper engagement with the technology. Magistrate Judge Patti, whose daily work requires exactly the kind of granular discovery analysis at issue here, took a more grounded approach. He looked past the tool to the function: what was the person doing with it, and does forcing disclosure reveal their mental processes? The depth and quality of the analysis in each opinion tracks with each judge’s daily expertise. The judge who lives in discovery got the technology right.
What This Means for Practitioners
These two decisions are going to be cited frequently and in opposition to each other. They are not perfectly analogous. Heppner is a criminal case involving a client who acted without counsel’s direction. Warner is a civil case involving a pro se litigant who is both client and counsel, and whose AI use is inseparable from her litigation strategy. But the core tension is real. Is AI a third party to whom you are disclosing confidential information, or is it a tool through which you are processing your own work product? The answer has enormous consequences.
It is worth noting that the strongest ruling protecting AI work product so far came in defense of a pro se plaintiff against a major employment defense firm. This was not a sophisticated enterprise AI setup with contractual protections. It was a person using ChatGPT to help prepare her own case. And the court still said no. This is her thought process. You cannot have it.
Until this area of law settles, and it will take more than two opinions for that to happen, practitioners should take the following steps:
1. Know your platform. There is a meaningful difference between consumer AI with training enabled, consumer AI with training disabled, and enterprise AI with contractual confidentiality. If you are using AI for anything involving client information or litigation strategy, you should know exactly which version you are using, and its data practices.
2. Use enterprise tools when privilege matters. If you are working on matters where privilege is at stake, use an enterprise-grade AI tool that does not train on your data and provides contractual confidentiality protections. This is the simplest way to avoid the problem Judge Rakoff identified.
3. Talk to your clients about AI. Ask clients what AI tools they are using. Find out whether training is on or off. Give them clear, written instructions about what is and is not appropriate for sensitive information. If you want AI-assisted work to be protected, direct it yourself and document that direction. Judge Rakoff left the door open for AI use directed by counsel to qualify under the Kovel doctrine. Do not leave that to chance.
4. Update your firm’s AI policies. If your firm does not yet have a policy addressing which AI tools may be used for client-related work and under what circumstances, these two decisions make clear that you need one. The policy should address which platforms are approved, whether training must be disabled, and what types of information may and may not be entered into AI tools.
5. Watch this space. These are early decisions. More courts will weigh in, and the distinctions between criminal and civil cases, between consumer and enterprise tools, and between attorney-directed and client-initiated AI use will be refined over time. This is an area where staying current is not optional.
Conclusion
The question of whether AI use waives privilege is not going to be answered by a single opinion, no matter how prominent the judge. What we have right now are two competing frameworks: one that treats AI platforms as third parties whose terms of service can destroy confidentiality, and another that treats them as tools, “not persons,” through which a litigant processes their own mental impressions.
In this author’s opinion, the Warner v. Gilbarco approach is more sound. Magistrate Judge Patti asked the right questions. What is the function of the tool? Does forcing disclosure reveal the litigant’s mental processes? Is there any actual basis in the law for treating AI use as a waiver? His answers were grounded in decades of work product doctrine, from Upjohn through the D.C. Circuit’s AT&T decision, applied with the precision of a judge who draws these distinctions for a living. Judge Rakoff’s approach risks treating boilerplate privacy policy language as fatal to privilege in a way that, if taken to its logical conclusion, would make cloud computing incompatible with confidentiality.
Magistrate Judge Patti was right: litigants who try to turn an opponent’s use of AI into a backdoor to their work product should not expect courts to play along. In the meantime, know your tools, use the right version for the job, talk to your clients about what they are using, and do not assume that privilege will protect you if you have not taken basic steps to protect it yourself. A sample client AI confidentiality notice is available here.