Morgan v. V2X: AI Privacy Protected, But at a Cost

A third federal court weighs in on AI confidentiality, with implications for pro se litigants and small firm attorneys

Two recent, conflicting orders from federal judges caused considerable consternation about whether a client's AI use is confidential for litigation purposes. Heppner found no confidentiality; Warner found that AI use was protected. Now we have a third order on the issue, this one from Magistrate Judge Maritza Dominguez Braswell in Colorado. The case is Morgan v. V2X, Inc., an employment discrimination lawsuit, and like its predecessors, it is one every attorney should read.

What the Court Held

The plaintiff in Morgan is pro se and was using AI to help litigate his case. The defendant wanted to know which AI tools he was using to handle confidential data and wanted the court to restrict his AI use going forward. The court agreed on both counts.

On the work product question, the court held that Rule 26(b)(3) protects a pro se litigant’s AI use because the material is prepared in anticipation of litigation and may reflect mental impressions, conclusions, opinions, and legal theories. The court distinguished Heppner as a criminal matter and followed Warner, which involved a civil pro se plaintiff in similar circumstances. Critically, the court rejected the idea that submitting information to an AI platform automatically waives work product protection. Using an AI tool is not disclosure to an adversary.

On the tool identification question, the court ordered the plaintiff to name the AI tools he was using. The plaintiff argued that even this revealed protected strategy. The court disagreed. Naming a tool is not the same as revealing how it was used. I agree. There is nothing particularly confidential about whether someone is using ChatGPT, or Word for that matter.

Why Heppner Got It Wrong

Heppner’s central problem was that the judge focused on the privacy policy of the specific AI tool and concluded there was no reasonable expectation of privacy in consumer-level platforms. At the time, I was concerned that reasoning would undermine privacy expectations across all cloud-based tools.

The Morgan court addressed this issue directly. Nearly all electronic interactions pass through third-party systems. Google hosts millions of accounts and has access to the emails and documents stored in them. Our phones and smart devices collect data constantly. If routing information through a third party eliminates all privacy expectations, then a Gmail account carries no confidentiality. That cannot be right. I don’t think it is what the judge in Heppner intended, but it is the logical conclusion of that opinion.

The Morgan court also made a point that Heppner ignored. AI is different from a search engine or, as I noted in my earlier post, social media. Modern AI platforms are designed to engage, simulate empathy, and invite candid disclosure. People share sensitive information with chatbots in ways they never would with a search bar. When people post on social media, they know they are sharing publicly. But conversations with chatbots? People expect those to be private. This distinction matters when analyzing whether AI chats should be accorded privilege.

We can only hope more courts follow Warner and Morgan. I would also encourage courts to extend these protections beyond pro se litigants and recognize the intensely private nature of what people share with AI generally.

The Protective Order Problem

Here is where Morgan gets interesting and worrisome. The court amended the protective order to bar any party from inputting confidential information into an AI platform unless the provider contractually prohibits training on inputs, prohibits third-party disclosure beyond service delivery, and allows the user to delete confidential information on request. The party using the tool must retain written documentation of those protections. The practical effect is that mainstream consumer tools are off limits for confidential discovery materials. The court acknowledged this disadvantages pro se litigants but stopped short of offering a solution.

In my opinion, the court made a mistake. The inequality between firms that can afford their own or enterprise-level AI tools and those that cannot is substantial. The increase in speed and efficiency that proper AI provides is extraordinary, and allowing one side to use AI but not the other creates serious unfairness, especially when concerns about confidentiality can be addressed through proper anonymization.

The same restriction burdens pro se individuals and solo and small-firm attorneys alike. If this protective order language appears in your case, a standard ChatGPT account, Claude subscription, or Gemini workspace is off limits for confidential materials. Big Law has enterprise agreements with data isolation, training prohibitions, and deletion rights built in. A solo practitioner is unlikely to have any of that. The restriction tracks firm size and financial resources as much as it tracks represented versus unrepresented status.

The court also never addressed two practical middle-ground options. First, several AI platforms offer business or enterprise tiers with stronger privacy protections. Whether those tiers satisfy the court’s specific contractual requirements is something practitioners need to verify before relying on them. A paid subscription is not enough. Read your provider’s terms and keep documentation. Second, anonymizing confidential information before entering it into a non-compliant platform could be a workable solution for those who cannot afford enterprise tools. The court never discussed it. Future litigants will test it.

Three Questions to Ask Now About Your Firm’s AI Use

If you are litigating any matter with a protective order, ask yourself:

  1. Does your AI provider contractually prohibit training on your inputs?
  2. Does it prohibit third-party disclosure beyond service delivery?
  3. Can you delete confidential information on demand, and do you have that in writing?

If you cannot answer yes to all three, keep confidential materials out of that platform. This is not new: whenever I write or lecture on AI, I tell lawyers to anonymize any confidential information before entering it into a system that does not provide the proper level of protection. What is new is the notion that anonymization might not be good enough, and that a court might bar even anonymized data from non-enterprise tools. Perhaps the judge in Morgan will revisit the issue and explain whether confidential data can be anonymized well enough to permit the use of tools that lack the required contractual protections.
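For readers who have never seen anonymization in action, here is a minimal sketch of the idea in Python. The names and the placeholder map are hypothetical, and a real redaction workflow would be far more rigorous, but it illustrates the basic move: strip identifiers before any text leaves your machine.

```python
import re

# Hypothetical placeholder map for a single matter. In practice this
# would be built from the case file and updated as new names appear.
REPLACEMENTS = {
    "Jane Doe": "[PLAINTIFF]",
    "Acme Corp.": "[DEFENDANT]",
    "jane.doe@example.com": "[EMAIL-1]",
}

def anonymize(text: str, replacements: dict[str, str]) -> str:
    """Swap known identifiers for neutral placeholders before text is
    pasted into an AI tool. Longer strings are replaced first so a full
    name is caught before a shorter overlapping entry would match."""
    for real in sorted(replacements, key=len, reverse=True):
        text = re.sub(re.escape(real), replacements[real], text)
    return text

sample = "Jane Doe emailed jane.doe@example.com about her claims against Acme Corp."
print(anonymize(sample, REPLACEMENTS))
# Prints: [PLAINTIFF] emailed [EMAIL-1] about her claims against [DEFENDANT].
```

Even a simple pass like this changes the confidentiality analysis, because what reaches the platform no longer identifies the client. Whether a court applying the Morgan protective order would accept that remains, as noted above, an open question.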
