The Confidentiality Problem

Attorney-client privilege is the cornerstone of legal practice. When a lawyer pastes client details into ChatGPT — names, case numbers, financial records, medical histories — that information is transmitted to a third-party server. Under most bar association rules, this constitutes a disclosure that may waive privilege.

Unlike a conversation with a paralegal or co-counsel, data sent to an AI provider is processed, stored, and potentially used for model training. OpenAI, Anthropic, and Google all retain conversation data for varying periods, and their employees may review it for safety purposes. This fundamentally changes the confidentiality equation.

What Bar Associations Say

Legal ethics bodies around the world have begun issuing guidance on AI use in law.

The consensus is clear: lawyers can use AI, but they must take active steps to protect client information.

Real Cases: When Lawyers Got It Wrong

The Mata v. Avianca incident

In 2023, New York attorney Steven Schwartz used ChatGPT to research case law for a federal court filing. The AI generated citations to cases that did not exist, and the court sanctioned Schwartz and his firm. While the case is best known as a cautionary tale about hallucinations, it also showed that the lawyer had submitted case details to a third-party AI service without any safeguards.

Law firm data exposure

Multiple law firms have reported internal incidents where associates pasted client contracts, settlement terms, or witness statements into AI chatbots. In one documented case, a junior lawyer pasted an entire merger agreement — including confidential financial terms — into ChatGPT to get a summary. The firm discovered the breach during an internal audit weeks later.

Court filing leaks

Several courts now require lawyers to disclose whether AI was used in preparing filings. Judges in the Southern District of New York, the Northern District of Texas, and courts in the UK have all implemented such requirements. This creates an additional risk: if you used AI with client data and did not protect that data, the disclosure requirement may expose the breach.

Why "Just Don't Use It" Is Not the Answer

Some firms have banned AI chatbots entirely. But this approach has significant drawbacks: bans tend to be ignored in practice, pushing usage onto personal accounts the firm cannot monitor, and they forfeit genuine productivity gains to competitors who adopt AI safely.

The practical solution is not to avoid AI but to use it safely — with proper data protection in place.

Practical Steps for Law Firms

  1. Establish an AI policy. Define which tools are approved, what types of data can be submitted, and what protections are required. Make this part of onboarding for new associates.
  2. Anonymize before submitting. Replace client names, case numbers, dates, financial figures, and any identifying information with placeholders before pasting into AI tools. This preserves the analytical value of the text while keeping identifying details off third-party servers.
  3. Use enterprise-grade tools. ChatGPT Enterprise, Claude for Business, and similar products offer contractual guarantees that data will not be used for training. However, they still involve third-party storage.
  4. Automate the anonymization process. Manual redaction is slow and error-prone. A single missed name or case number can compromise privilege. Automated tools that detect and mask PII before it leaves the browser are faster and more reliable.
  5. Verify AI outputs. Always check citations, legal reasoning, and factual claims. AI models hallucinate, and submitting fabricated citations to a court has career-ending consequences.
  6. Document your process. Keep records of what AI tools you use, what data protection measures are in place, and how you verify outputs. This protects you if a client or ethics board questions your practices.
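The placeholder substitution described in steps 2 and 4 can be sketched roughly as follows. This is an illustrative assumption, not any specific tool's implementation: the patterns, labels, and function names are hypothetical, and real redaction needs far broader coverage (personal names, addresses, free-text identifiers).

```typescript
// Illustrative sketch of placeholder-based anonymization.
// Patterns and names here are assumptions for demonstration only.

type MaskResult = { masked: string; map: Map<string, string> };

// A few simple regex patterns for common identifier types.
const PATTERNS: Array<[string, RegExp]> = [
  ["CASE", /\b\d{2}-cv-\d{4,5}\b/g],        // e.g. federal docket numbers
  ["AMOUNT", /\$\d[\d,]*(?:\.\d{2})?/g],    // dollar figures
  ["PHONE", /\b\d{3}-\d{3}-\d{4}\b/g],      // US-style phone numbers
];

function mask(text: string): MaskResult {
  const map = new Map<string, string>();
  let masked = text;
  for (const [label, re] of PATTERNS) {
    let i = 0;
    masked = masked.replace(re, (match) => {
      const placeholder = `[${label}_${++i}]`;
      map.set(placeholder, match); // remember original for later restore
      return placeholder;
    });
  }
  return { masked, map };
}

const { masked, map } = mask(
  "Client settled case 23-cv-01234 for $1,250,000; call 212-555-0147."
);
// masked: "Client settled case [CASE_1] for [AMOUNT_1]; call [PHONE_1]."
```

Keeping the placeholder-to-original map local (never sent anywhere) is what lets the process be reversed on your own machine.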

How Private Prompt Helps Legal Professionals

Private Prompt was designed with exactly this use case in mind. The extension automatically detects sensitive data in your prompts — client names, case numbers, financial amounts, addresses, phone numbers — and replaces them with anonymous placeholders before the text reaches any AI provider.

All processing happens locally in your browser. No client data is transmitted to any external server. When the AI responds, the extension restores the original values so you see the full context. You get the benefit of AI assistance without identifiable client data ever reaching the provider.
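The restore step of this round trip can be sketched like so. This is a simplified illustration with hypothetical names, not Private Prompt's actual code; it assumes a placeholder-to-original map was built locally during masking.

```typescript
// Illustrative sketch of restoring originals in an AI response.
// `map` is the placeholder -> original-value table built during masking.

function restore(response: string, map: Map<string, string>): string {
  let restored = response;
  for (const [placeholder, original] of map) {
    // split/join replaces every occurrence without regex-escaping issues
    restored = restored.split(placeholder).join(original);
  }
  return restored;
}

// Hypothetical example: the AI replies using placeholders; the originals
// are swapped back in locally before the answer is displayed.
const map = new Map([
  ["[CLIENT_1]", "Jane Doe"],
  ["[CASE_1]", "23-cv-01234"],
]);
const reply = "Summary: [CLIENT_1] is the plaintiff in [CASE_1].";
// restore(reply, map) -> "Summary: Jane Doe is the plaintiff in 23-cv-01234."
```

Because both the map and the restore step live in the browser, the AI provider only ever sees placeholders.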

For law firms, this is the difference between a usable AI policy and a paper ban that everyone ignores.

Protect Client Privilege When Using AI

Private Prompt anonymizes client data automatically before it reaches any AI chatbot. Attorney-client privilege stays intact.

Learn More About Private Prompt