The Confidentiality Problem
Attorney-client privilege is the cornerstone of legal practice. When a lawyer pastes client details into ChatGPT — names, case numbers, financial records, medical histories — that information is transmitted to a third-party server. Under most bar association rules, this constitutes a disclosure that may waive privilege.
Unlike a conversation with a paralegal or co-counsel, data sent to an AI provider is processed, stored, and potentially used for model training. OpenAI, Anthropic, and Google all retain conversation data for varying periods, and their employees may review it for safety purposes. This fundamentally changes the confidentiality equation.
What Bar Associations Say
Legal ethics bodies around the world have started issuing guidance on AI use in law:
- ABA Formal Opinion 512 (2024) — The American Bar Association confirmed that lawyers have a duty of competence when using AI tools, including understanding how client data is processed and stored.
- State bar guidance — California, Florida, New York, and other state bars have issued ethics opinions requiring lawyers to assess AI tools for confidentiality risks before using them with client data.
- EU legal profession rules — European bar associations have emphasized that GDPR obligations apply to lawyer-client data shared with AI services, adding a regulatory layer on top of professional ethics.
- Polish Bar Council (NRA) — Has warned that sharing client data with cloud-based AI tools may violate the professional secrecy obligation (tajemnica adwokacka), which in Polish law is nearly absolute.
The consensus is clear: lawyers can use AI, but they must take active steps to protect client information.
Real Cases: When Lawyers Got It Wrong
The Mata v. Avianca incident
In 2023, New York attorney Steven Schwartz used ChatGPT to research case law for a federal court filing. The AI generated fabricated case citations that did not exist. The court sanctioned Schwartz and his firm. While this case is primarily about hallucinations, it also exposed that the lawyer had submitted client case details to a third-party AI service without safeguards.
Law firm data exposure
Multiple law firms have reported internal incidents where associates pasted client contracts, settlement terms, or witness statements into AI chatbots. In one documented case, a junior lawyer pasted an entire merger agreement — including confidential financial terms — into ChatGPT to get a summary. The firm discovered the breach during an internal audit weeks later.
Court filing leaks
Several courts now require lawyers to disclose whether AI was used in preparing filings. Judges in the Southern District of New York, the Northern District of Texas, and courts in the UK have all implemented such requirements. This creates an additional risk: if you used AI with client data and did not protect that data, the disclosure requirement may expose the breach.
Why "Just Don't Use It" Is Not the Answer
Some firms have banned AI chatbots entirely. But this approach has significant drawbacks:
- Competitive disadvantage. Firms that use AI effectively can draft documents faster, research more thoroughly, and deliver better value to clients. A blanket ban puts a firm behind its competitors.
- Shadow AI. When firms ban tools without providing alternatives, lawyers use them anyway — on personal devices, without oversight. This creates even greater risk because there is no institutional control or audit trail.
- Client expectations. Corporate clients increasingly expect their law firms to use AI tools for efficiency. Some RFPs now specifically ask about AI capabilities.
The practical solution is not to avoid AI but to use it safely — with proper data protection in place.
Practical Steps for Law Firms
- Establish an AI policy. Define which tools are approved, what types of data can be submitted, and what protections are required. Make this part of onboarding for new associates.
- Anonymize before submitting. Replace client names, case numbers, dates, financial figures, and any identifying information with placeholders before pasting into AI tools. This preserves the analytical value while stripping out the details that create confidentiality risk.
- Use enterprise-grade tools. ChatGPT Enterprise, Claude Enterprise, and similar products offer contractual guarantees that data will not be used for training. However, they still involve third-party storage.
- Automate the anonymization process. Manual redaction is slow and error-prone. A single missed name or case number can compromise privilege. Automated tools that detect and mask PII before it leaves the browser are faster and more reliable.
- Verify AI outputs. Always check citations, legal reasoning, and factual claims. AI models hallucinate, and submitting fabricated citations to a court can have career-ending consequences.
- Document your process. Keep records of what AI tools you use, what data protection measures are in place, and how you verify outputs. This protects you if a client or ethics board questions your practices.
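To make the anonymization step concrete, here is a minimal sketch in Python. The patterns and the `mask` helper are illustrative assumptions, not part of any specific tool; a production system would use NER models and jurisdiction-specific formats (court case numbers, bar IDs, IBANs) rather than a few hard-coded regexes.

```python
import re

# Illustrative patterns only -- a real tool needs name detection (NER),
# not a fixed list, plus many more formats.
PATTERNS = {
    "CASE_NO": re.compile(r"\b\d{2}-cv-\d{4,5}\b"),
    "AMOUNT": re.compile(r"\$[\d,]+(?:\.\d{2})?"),
    "NAME": re.compile(r"\b(?:Jane Doe|Acme Corp)\b"),  # stand-in for a name detector
}

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace each detected value with a numbered placeholder and
    return the masked text plus the mapping needed to restore it."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        # dict.fromkeys dedupes matches while preserving order
        for i, value in enumerate(dict.fromkeys(pattern.findall(text)), 1):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = value
            text = text.replace(value, placeholder)
    return text, mapping

masked, mapping = mask("Jane Doe seeks $1,250,000 in case 23-cv-04567.")
# masked == "[NAME_1] seeks [AMOUNT_1] in case [CASE_NO_1]."
```

Only the masked text is ever pasted into the chatbot; the mapping stays on the lawyer's machine.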
How Private Prompt Helps Legal Professionals
Private Prompt was designed with exactly this use case in mind. The extension automatically detects sensitive data in your prompts — client names, case numbers, financial amounts, addresses, phone numbers — and replaces them with anonymous placeholders before the text reaches any AI provider.
All processing happens locally in your browser. No client data is transmitted to any external server. When the AI responds, the extension restores the original values so you see the full context. This means you get the benefit of AI assistance without handing confidential client data to a third party.
For law firms, this is the difference between a usable AI policy and a paper ban that everyone ignores.
Protect Client Privilege When Using AI
Private Prompt anonymizes client data automatically before it reaches any AI chatbot. Attorney-client privilege stays intact.
Learn More About Private Prompt