Your AI Conversations Might Be Sold. Here’s What You Need to Know

If you use AI chatbots and have installed certain free browser extensions, your AI conversations might be sitting in a commercial database right now, available for purchase. I know. That’s not what you wanted to read today. But this is exactly the kind of thing you need to know about, especially if you’re a lawyer, a healthcare worker, or anyone who has ever typed something sensitive into an AI chatbot.

What’s Actually Happening

The Register reported this week on research by Lee Dryburgh, an AI visibility expert, who documented how data brokers are selling access to AI chatbot transcripts. This includes conversations containing deeply sensitive personal information. We’re talking about medical diagnoses, HIV lab results, immigration status, domestic violence disclosures, children’s conversations, clinical notes with real patient data, and legal matters with identifying information.

The mechanism is browser extensions. You install something that promises to give you a free VPN, block ads, or speed up your browser. It sounds helpful. It may even work as advertised. But buried in that extension, likely in a privacy policy you never read, is permission to intercept your browser traffic. That includes every prompt you type into an AI chatbot and every response you receive back.

That data gets captured, stored in a searchable database, and sold to paying customers. The companies doing this call the data “anonymized” and claim it’s obtained with consent. But Dryburgh’s research found conversations containing real names, dates of birth, medical record numbers, and diagnosis codes. And we’ve known for years that so-called anonymized data can be re-identified, a task that AI has made considerably easier.

When Something Is Free, You Are the Product

We’ve all heard this concept, usually applied to social media. You don’t pay for Facebook because your attention and your data are what’s being monetized. The same principle applies here, and it applies in a dangerous way.

  • Free ad blockers have to make money
  • Free VPN extensions have to fund their servers
  • Free productivity tools must justify their existence to investors

With any free product, you should ask yourself: what are they getting in return? Most often, the answer is your data.

The troubling reality is that many browser extensions provide almost no information about what they actually collect. Their privacy policies, assuming they exist, are written to be as vague and permissive as possible. Most people install extensions without a second thought, grant whatever permissions are requested, and never look back. That’s exactly what these companies are counting on.

Why This Is Especially Serious for Lawyers and Healthcare Workers

Two findings in Dryburgh’s research should seriously concern every attorney and healthcare provider.

First: healthcare workers are pasting real patient data into AI chatbots. Patient names, dates of birth, medical record numbers, diagnosis codes, all of it ending up in a commercial database. This isn’t a hypothetical HIPAA risk; according to Dryburgh, it is documented behavior.

Second: corporate information is flowing into these systems constantly. People are copying internal documents, drafts, client matters, and confidential communications into AI tools to get help with rewrites and summaries. For lawyers, that’s a privilege and confidentiality problem. For anyone, it’s a data security problem.

The research also identified conversations from undocumented immigrants and asylum seekers who asked AI chatbots questions about their legal status. Given the current political environment, having that information sitting in a commercial database accessible to paying customers creates risks that are genuinely frightening.

What You Should Do Right Now

Here’s my practical guidance:

  • Audit your browser extensions today. Open your browser’s extension manager and look at everything installed. Ask yourself: do I know what this does? Do I know who made it? Do I know what data it collects?
  • Remove anything you don’t recognize or don’t actively use. If you can’t identify why it’s there, it shouldn’t be there.
  • Be deeply skeptical of free extensions. If it’s free, research how the company makes money before you install it. Look for a privacy policy and actually read the data collection section.
  • Never paste client information, patient data, or confidential matter details into a consumer AI chatbot. This holds regardless of which browser extensions you do or don’t have installed. It’s a separate issue from the extension problem, but the research makes clear that people are sharing confidential information this way, and the consequences are serious.
  • If you’re using AI tools for work involving sensitive information, use enterprise versions with appropriate data handling agreements rather than consumer tools. If you must use a consumer tool, anonymize the data so thoroughly that it can never be connected to an individual, which means omitting quite a bit, including names, birthdates, and location information. Enterprise-level tools, which tend to offer stronger privacy protections, remain the better option.
  • Every organization with confidentiality obligations should adopt a written AI policy, specify which tools are approved, and train staff on both the policy and the tools.
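For readers comfortable with a little code, the audit step above can be partially automated. Here is a minimal, illustrative sketch that checks a Chrome-style extension manifest (the `manifest.json` file every extension ships with) for permissions broad enough to intercept browser traffic. The list of "broad" permissions is my own illustrative selection, not an exhaustive or authoritative one, and this is no substitute for removing extensions you don’t recognize.

```python
import json

# Permissions that can let an extension observe or modify traffic to any
# site, including AI chatbot prompts and responses. Illustrative list,
# not exhaustive.
BROAD_PERMISSIONS = {"webRequest", "webRequestBlocking", "<all_urls>", "tabs", "debugger"}

def risky_permissions(manifest: dict) -> set:
    """Return any broad permissions a Chrome-style manifest requests."""
    requested = set(manifest.get("permissions", []))
    # Manifest V3 moved host patterns like <all_urls> to host_permissions.
    requested |= set(manifest.get("host_permissions", []))
    return requested & BROAD_PERMISSIONS

# Example: a hypothetical "free VPN" extension's manifest.
sample = json.loads("""{
    "name": "Free VPN Booster",
    "permissions": ["storage", "webRequest"],
    "host_permissions": ["<all_urls>"]
}""")
print(sorted(risky_permissions(sample)))  # flags webRequest and <all_urls>
```

If the output is non-empty, the extension can, in principle, see everything you type into a chatbot; whether it actually forwards that data elsewhere is exactly what the privacy policy should, and often doesn’t, tell you.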
