Preserving Privilege and Confidentiality in the Age of AI
When lawyers use ChatGPT or Gemini for client work, they may be inadvertently waiving privilege. Understanding the difference between public and enterprise AI is critical.

Mark Feldner
Co-Founder & CEO, Crimson
The rapid adoption of AI tools has created a significant risk that many law firms are only beginning to grapple with: the use of public AI tools like ChatGPT, Gemini, and Claude for legal work. While these tools are impressive, using them for client work can compromise privilege and confidentiality in ways that may be difficult or impossible to remedy.
The Risk of Public AI Tools
When you input information into a public AI tool, you need to understand what happens to that data:
Training data: Many public AI tools use user inputs to train and improve their models. This means confidential client information could become part of the model's knowledge base and potentially surface in responses to other users.
Data retention: Even tools that don't use inputs for training may retain them for various periods. Retained data could be subject to subpoena, exposed through a security breach, or otherwise disclosed.
Third-party access: Public tools typically involve data flowing through multiple servers and potentially multiple jurisdictions, creating additional exposure points.
Privilege Implications
Legal privilege exists to encourage candid communication between lawyers and clients. But privilege can be waived through disclosure to third parties – and uploading privileged information to a public AI tool may constitute just such a disclosure.
Consider this scenario: A lawyer uploads a draft advice to ChatGPT to help refine the language. That advice is now potentially:
- Part of OpenAI's training data
- Stored on servers outside the firm's control
- Accessible to OpenAI employees for quality assurance
- Subject to US legal process
Has privilege been waived? The law is still developing, but the risk is real and significant.
Confidentiality Breaches
Even setting aside privilege, lawyers have contractual and ethical obligations to protect client confidentiality. Using public AI tools for client work likely violates:
- Professional conduct rules: Lawyers must take reasonable steps to protect client information
- Engagement letter terms: Most engagement letters include confidentiality provisions
- Client information security requirements: Many clients impose specific data handling requirements
A confidentiality breach can result in professional discipline, client claims, and reputational damage.
How Enterprise AI Differs
Enterprise legal AI platforms like Crimson are built differently from public tools:
Data Isolation
Enterprise platforms maintain strict separation between clients and matters. Your data is yours alone – it's not pooled with other firms' information or used to improve the service for other customers.
No Training on Client Data
Reputable enterprise legal AI vendors never use client data to train their models. This is a fundamental principle, not a checkbox feature.
Security Certifications
Enterprise platforms undergo rigorous security audits (such as SOC 2 Type II) that verify their data handling practices. Consumer tools are generally not designed to meet these standards.
Data Residency
Enterprise platforms typically offer data residency options, allowing you to ensure client data remains within required jurisdictions.
Audit Trails
Enterprise systems maintain detailed logs of who accessed what information and when – essential for compliance and in case of disputes.
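To make this concrete, here is a minimal sketch of the kind of record an audit trail might capture for each event. The field names and structure are illustrative assumptions, not any particular vendor's schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditEvent:
    """One illustrative audit-trail record: who did what, to which matter, when."""
    user: str       # authenticated user identity
    action: str     # e.g. "viewed_document", "ran_query", "exported_file"
    matter_id: str  # client/matter the data belongs to
    resource: str   # the specific document or dataset touched
    timestamp: str  # UTC timestamp, recorded server-side

def log_event(user: str, action: str, matter_id: str, resource: str) -> str:
    """Serialise an event as an append-only JSON line for later review."""
    event = AuditEvent(
        user=user,
        action=action,
        matter_id=matter_id,
        resource=resource,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(event))

# Example: record that a fee earner opened a privileged draft.
print(log_event("a.smith", "viewed_document", "M-2024-0183", "draft_advice_v3.docx"))
```

In practice such records live in an append-only store, so the trail itself can't be quietly edited after the fact.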
Practical Steps for Firms
To manage AI-related risks to privilege and confidentiality:
Establish Clear Policies
Create explicit guidelines on which AI tools may be used for client work, and for what purposes. Many firms now prohibit the use of public AI tools entirely for client matters.
Provide Approved Alternatives
If you ban public tools without providing alternatives, lawyers will work around the rules. Implementing an approved enterprise platform gives lawyers a compliant way to benefit from AI.
Training and Awareness
Ensure all lawyers understand the risks. Many don't realise that pasting text into ChatGPT could compromise confidentiality.
Technical Controls
Consider technical measures to prevent data leakage, such as blocking access to public AI tools from firm networks or implementing data loss prevention software.
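As a rough illustration of the idea, the sketch below checks outbound web requests against a blocklist of public AI domains. The domain list and function name are hypothetical placeholders; a real deployment would enforce this at a secure web gateway or through a commercial DLP product rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of public AI endpoints a firm might restrict.
BLOCKED_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def is_request_allowed(url: str) -> bool:
    """Return False if the request targets a blocked public AI service."""
    host = (urlparse(url).hostname or "").lower()
    # Match the domain itself and any subdomain of it.
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

# Example: a proxy or browser extension could consult this check.
print(is_request_allowed("https://chatgpt.com/"))        # False - blocked
print(is_request_allowed("https://example-court.gov/"))  # True - allowed
```

Domain blocking is a blunt instrument on its own; pairing it with DLP rules that flag client or matter identifiers in outbound text catches the cases a blocklist misses.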
Regular Review
AI capabilities and risks are evolving rapidly. Review your policies regularly to ensure they remain appropriate.
The Bottom Line
AI offers tremendous benefits for legal work, but those benefits must not come at the cost of privilege and confidentiality. The distinction between public consumer tools and enterprise legal platforms matters deeply. Understanding it is essential to adopting AI responsibly while protecting your clients and your practice.
When evaluating AI tools for your firm, start with security and confidentiality. Any productivity benefit is worthless if the tool compromises client confidences or exposes your firm to liability.

Mark Feldner
Co-Founder & CEO at Crimson. Former litigator at Clifford Chance, WilmerHale, and Willkie Farr & Gallagher with 8 years of experience in complex disputes.