Artificial intelligence tools like Grok, developed by xAI, raise important questions about privacy and content moderation, especially for attorneys who must protect client confidentiality under rules like ABA Model Rule 1.6. Lawyers often ask whether their conversations are truly private and how the system handles sensitive or prohibited topics. The short answer? Grok takes a more privacy-respecting approach than many competitors on data use, but like all AI systems, it has automated safeguards (and human oversight where needed) for illegal or harmful content. The lawyer's favorite phrase applies: "it depends" on how you access Grok and what you discuss.
Privacy Basics for Grok Users
Grok’s privacy policy (available at x.ai/legal/privacy-policy) emphasizes transparency and user control. Key points include:
- Conversations are private by default. Chats are not public, not shared with other users, and not visible outside your session or account.
- No automatic use for training. xAI defaults to not using your conversations for training Grok. You must explicitly opt in via Settings > Data > “Improve the Model” if you want to contribute. This applies equally to free and paid users.
- Private or Temporary Chat mode. For extra caution, start a “Private Chat” (look for the ghost icon). These interactions are never eligible for training and are deleted from systems within 30 days.
- Data retention and deletion. xAI keeps data only as long as needed for service, security, or legal reasons. You can delete individual chats or your entire history at any time. Deleted items are removed within 30 days (with exceptions for legal holds).
- No advertising or selling data. Your inputs aren’t used for ads or sold to third parties.
- Access via X (formerly Twitter). If you use Grok on X, X’s separate privacy policy applies, and there may be different opt-out settings for sharing posts/interactions with xAI.
In short, Grok prioritizes user control. Your chats stay private unless you choose otherwise.
Content Moderation and Prohibited Topics
xAI’s Acceptable Use Policy (at x.ai/legal/acceptable-use-policy) prohibits illegal, harmful, or abusive activities, including child sexual abuse material (CSAM), exploitation of minors, non-consensual intimate imagery, and violating privacy rights.
Does Grok "look for and manually review" certain topics, such as discussions of child pornography or sexual abuse?
- Automated safeguards first. Like most AI systems, Grok uses classifiers and filters to detect and block prohibited content. Prompts or outputs involving illegal material (e.g., CSAM) trigger refusals, warnings, or blocks.
- No routine manual review of private chats. Normal conversations aren’t manually read by humans. Privacy is maintained unless there’s a safety or legal flag.
- Flagged content may involve review. If automated tools detect potential violations of law or policy (e.g., child exploitation), the content may be reviewed, often by safety teams, for enforcement. This could lead to account actions, reporting to authorities (as required by law), or retention for legal purposes. xAI states it uses tools to ensure compliance with terms and may report illegal activity.
- This is where attorneys must be especially careful. If you work in practice areas that might trigger content review, share only information that is not confidential and cannot cause harm to clients.
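The automated-first, human-second flow described above can be sketched as a simple tiered pipeline. This is a hypothetical illustration only: xAI's actual classifiers, score thresholds, and review process are not public, and real systems use trained ML models rather than the toy keyword check shown here.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"                      # content passes through untouched
    BLOCK = "block"                      # refused outright, no human needed
    FLAG_FOR_REVIEW = "flag_for_review"  # routed to a safety team

@dataclass
class ModerationResult:
    action: Action
    reason: str = ""

# Hypothetical thresholds; real values are not disclosed.
BLOCK_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.6

def classify(text: str) -> float:
    """Stand-in for an ML classifier returning a policy-violation
    score in [0, 1]. A production system would use a trained model."""
    prohibited = ("csam", "exploitation of minors")
    return 1.0 if any(term in text.lower() for term in prohibited) else 0.0

def moderate(text: str) -> ModerationResult:
    """Automated tier decides most cases; only borderline scores
    escalate to human review, mirroring the flow described above."""
    score = classify(text)
    if score >= BLOCK_THRESHOLD:
        return ModerationResult(Action.BLOCK, "prohibited content")
    if score >= REVIEW_THRESHOLD:
        return ModerationResult(Action.FLAG_FOR_REVIEW, "possible policy violation")
    return ModerationResult(Action.ALLOW)
```

The key point for attorneys is the escalation step: ordinary prompts never reach a human, but anything the automated tier flags may be read, retained, and, where the law requires, reported.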
Recent Safeguard Failures
Recent incidents (early 2026) highlighted lapses where image generation bypassed filters, allowing inappropriate content involving minors. xAI acknowledged these as safeguard failures, tightened filters, and emphasized that CSAM is illegal and prohibited. Such cases underscore that no system is perfect, but prohibited uses remain strictly off-limits.
For lawyers: Avoid entering confidential client information; Grok itself warns against sharing sensitive data. If discussing hypothetical ethics scenarios involving abuse, frame them carefully to avoid triggering filters. Real illegal content will be blocked and potentially reviewed or reported.