Fortis v. Krafton: AI, Privilege & Clients Ignoring Legal Advice

The Delaware Court of Chancery issued an opinion on March 16 that deserves attention for reasons that have nothing to do with hallucinated citations. Fortis Advisors, LLC v. Krafton, Inc. is a contract case, but AI plays a substantial role in it because of how a company’s CEO used the technology. That AI use is what interests me.

The Facts*

Krafton, a South Korean gaming company, paid $500 million to acquire Unknown Worlds Entertainment, the studio behind the game Subnautica. The deal included up to $250 million in contingent earnout payments tied to revenue. It also included a provision that the studio’s founders and CEO retained operational control and could only be fired for cause.

When the next Subnautica title looked like it would be a hit, Krafton’s CEO, Changhan Kim, realized the earnout was going to trigger. Based on the litigation, it seems that he did not want to pay. His own Head of Corporate Development, Hyeri Maria Park, warned him in Slack that firing the founders “with cause” would not eliminate the earnout obligation and would expose Krafton to “lawsuit and reputation risk.” Krafton’s legal department agreed. Allegedly, Kim decided to ignore Park and his lawyers. Instead, he turned to ChatGPT.

What ChatGPT Suggested

The opinion documents the chatbot’s output. It produced a “Response Strategy to ‘No-Deal’ Scenario” with recommendations that included “preemptive framing” to control the fan narrative, “securing control points” like locking down publishing rights, preparation of “systematic materials for legal defense,” and a “two-handed strategy” that paired hardball and softball approaches.

Kim followed most of ChatGPT’s suggestions. For example, he posted false messages on the studio’s website and locked the studio out of its Steam publishing platform. He also fired all three Key Employees, citing a single reason: game readiness. Then, in litigation, Krafton dropped that reason in favor of new post hoc justifications. The Court of Chancery called the new justifications pretextual and found that Kim acted in bad faith. It ordered specific performance, reinstated Gill as CEO with full operational authority, declined to return Cleveland and McGuire to their prior roles, and equitably extended the earnout testing period by 258 days.

Two Issues Worth Noting

The first issue is that Kim ignored the people he actually hired to advise him. Park told him what the contract said. The legal department told him what the risks were. Kim ignored advice from humans and asked a chatbot instead. That chatbot produced a corporate warfare plan that violated the contract Krafton had negotiated. This issue, clients ignoring their lawyers and using AI instead, has become a common problem, one that is likely to cause the client substantial harm and could lead to sanctions against the client and, potentially, their attorneys.

The second is that none of Kim’s discussions with the bot were treated as privileged in the instant case. The Court of Chancery does not discuss privilege in its opinion at all; it simply treats the chats as evidence without comment.

This includes:

  • ChatGPT strategy memos
  • Slack threads where Park warned him about his intended course of action
  • The exchanges where Kim complained that the contract was one “under which we can only be dragged around.”

All of this information came into evidence. In a footnote, the court also noted that Kim admitted at trial to deleting “specific, relevant ChatGPT logs.” This is an issue that will be explored in phase two of the case. I suspect that phase two could easily lead to an adverse inference against Kim due to spoliation of evidence.

The Familiar Pattern

Privilege over a client’s AI use is a developing area. Fortis fits with Heppner, which explicitly found no privilege for a client’s use of AI. In that case, the court held that a criminal defendant’s use of a consumer AI tool, without attorney direction, did not create attorney-client privilege, nor did it fall under work-product protection. Work-product protection was found in two cases, Warner and Morgan, in which the plaintiffs were acting pro se.

Another case in which a client chose to ignore their attorney led to a lawsuit against OpenAI for practicing law without a license: Nippon Life Insurance Company of America v. OpenAI Foundation. In Nippon, a woman used ChatGPT to try to reopen a settled disability case. She ignored her attorney’s advice and cost Nippon a substantial amount of money defending against an inappropriate claim. This is becoming a common problem: some people treat AI like a confidential advisor that can replace legal advice from an attorney. That is a serious mistake. AI is not an attorney; it does not understand context, frequently misses nuances of the law, and will ignore or misstate important issues. In Nippon, the AI insisted that the woman could reopen her case even though it had been dismissed with prejudice under the settlement agreement.

The Takeaway

When a client turns to a chatbot to work around their own lawyer’s advice, the prompts and the outputs do not stay between the client and the AI. Further, if the advice is mistaken, as it was in Fortis, it can lead to costly disputes. If a client is using AI to strategize around your advice, there are two problems.

  1. They are ignoring you.
  2. Their chatbot communications are likely to be discoverable.

It has become critical to discuss the risks of AI use with all clients. This conversation should occur at the start of representation; if the client is someone you have represented since before the proliferation of AI, it would be wise to have the conversation now. I also recommend including written and verbal warnings about AI use with your engagement agreement. Such warnings should note that AI is frequently incorrect and that there is no privilege when a client uses AI, especially outside the direction of counsel. You might also mention that you will need to charge your full rate to review the AI’s recommendations, which could quickly become expensive and exhaust the client’s retainer.

*The facts are as alleged in the court’s opinion.
