Over the past several weeks, I have been writing about each of the major AIs. I asked each of them the same thing:
I am writing a blog post for attorneys on what they should know about you, that is [name of ai], so they can be aware of what is safe and what is not safe for them to enter. Could you please write a first draft for me?
I then took each of those drafts and used them as a guide to write my own work, checking each of the relevant privacy policies. The results were interesting, especially the differences. This post examines both the differences and the similarities.
The Setup
I ran the prompt across:
- Duck.ai
- Microsoft Copilot on the web, not logged in
- Microsoft Copilot in an enterprise environment (in this case Word)
- ChatGPT not logged in
- Perplexity
- Gemini
- Claude
- Grok
The underlying purpose was not to rank the AIs; it was to see how they described themselves.
What Was Consistent Across Almost All of the AIs
Each AI responded very differently to the prompt, but there were consistencies among them.
Never Enter Confidential Client Information Outside Enterprise Versions
Nearly every tool emphasized some version of this rule. Do not enter confidential information whose disclosure would violate Model Rule 1.6. This includes client names, identifying facts, or privileged communications. Also, do not enter work product tied to a client matter. This is the same advice I would give, and it aligns with the various ethics opinions on the subject of AI. The key is that attorneys remain responsible for protecting client confidentiality. Our responsibility to protect our clients’ data does not disappear simply because an AI informs us that it is private.
Verify Everything
Many of the tools warned about hallucinations, especially fake cases, incorrect citations, and false holdings. They also warned about answers that sound confident but are wrong. This is excellent advice because quite a few lawyers have already been sanctioned for including hallucinated cases or hallucinated holdings in court filings. The consistent message is that you can use AI to draft, but you should not trust it when it comes to facts, including case citations.
Artificial Intelligence Is a Tool, Not a Lawyer
Almost all of the AIs warned that they do not exercise professional judgment and do not provide legal advice. They also warned that they are not a substitute for proper research and are not a replacement for attorneys.
That framing is helpful because it follows existing professional responsibility requirements. That is, lawyers must supervise AI, and they are accountable for the results, not the AI. This framing is also helpful because it relieves the anxiety so many lawyers have over whether artificial intelligence will replace them. It won’t, not any time soon. What it will do is differentiate between lawyers who use it for efficiency and lawyers who don’t.
The Most Important Divide: Consumer Versus Enterprise
Another clear difference I noted is between consumer and enterprise versions of the tools. A non-enterprise version of an AI has a very different risk profile than an enterprise version such as Microsoft’s or Google’s tenant-governed environment. Several of the tools made this explicit. Some did not.
Additional issues include:
- How permissions are enforced
- Who controls access
- Where the data lives
- What retention rules apply
- Whether the firm has contractual protections
- Whether there is administrative oversight
The enterprise Copilot draft in particular reflected this reality. It was very clear that embedded AI is powerful precisely because it can see internal documents; that is also what makes it dangerous if misused. Access and risk travel together in the AI world.
The Tools That Refused to Overclaim
Some tools sounded overly confident, which should raise concerns about accuracy. For example, when a tool claims “never” or “always,” that is worrisome. Here Copilot stood out again. Copilot’s web draft did not make specific claims about data storage, retention, or training. Instead, it pointed readers to Microsoft’s published privacy statement and declined to summarize or guarantee.
The Marketing Voice Problem
Several responses leaned heavily into privacy language.
- Privacy first
- Anonymized
- Not linked to identity
- Not used for training
- Deleted after X days
In some cases, the language is grounded in real policies. In others, it seemed more like marketing. In the end, the problem is not that the tools were incorrect; it is that the overly confident language can lull lawyers into skipping a detailed ethics analysis.
For example, even if a tool does not use your data for training, deletes it quickly, and does not track you, risks remain. There is still the issue of informed consent with clients, and the fact that even the most confidential non-enterprise versions will scan for certain terms, such as child abuse, and flag them for review by a human, barring a contractual agreement that states otherwise.
Patterns by Tool
Each tool described itself in different ways:
- Duck.ai emphasized privacy posture and anonymity but revealed little.
- Copilot on the web was cautious and pushed readers to Microsoft’s privacy statement.
- Enterprise Copilot emphasized governance, permissions, and supervision.
- ChatGPT focused heavily on confidentiality boundaries and hallucination risk.
- Perplexity leaned into its research and citation model.
- Gemini was very explicit about the difference between free and Workspace accounts.
- Claude was the most ethics-focused in tone. It read like it was written for lawyers.
- Grok was the most specific about features and settings.
How Attorneys Should Use AI “Self-Reporting”
Here is the practical guidance.
- First, treat AI self-descriptions as a starting point, not proof.
- Second, default to not entering confidential client information in any external AI system.
- Third, if a tool claims special privacy features, verify them in current documentation, confirm how they apply to your specific account type, and watch for changes in policies.
- Fourth, environment matters. A careful prompt in the wrong environment is still risky.
- Fifth, never outsource your ethical obligations to AI.
A Simple Rule Before You Paste Anything
Before entering information into any AI tool, ask yourself:
- Would this identify a client or matter?
- Would disclosure to a third party be unauthorized?
- Would a client be surprised or upset if you used an AI tool for whatever you entered?
- Would you be comfortable explaining the workflow to a disciplinary board?
If there is any hesitation, the answer is simple: do not enter confidential information into the AI.
The Real Takeaway
This exercise reinforced something I see every day in practice. The risk is not that AI exists. The risk is that lawyers will treat convenience as a substitute for judgment and mistake confidence for accuracy. Do not treat AI as a replacement for attorneys. It isn’t.