AI and Privilege After United States v. Heppner
A recent flawed ruling on privilege threatens the access to legal services that AI tools can provide.
Does a defendant who used an AI translator retain attorney-client privilege?
Not according to a recent decision from a judge in the Southern District of New York. On Feb. 17, Judge Jed Rakoff issued a written opinion in United States v. Heppner. This first-of-its-kind ruling found that documents created by a criminal defendant using Claude are not protected by attorney-client privilege or the work product doctrine.
While the ruling is correct in its conclusion, Judge Rakoff’s reasoning is problematic and goes beyond what was needed to resolve the case. Because he rested his analysis of a defendant’s right to prepare a defense on a company’s terms of service, Rakoff’s ruling has implications for how AI tools influence the accessibility of legal services.
Here, we examine what the court decided, where it went wrong, and what future courts should do instead.
Judge Rakoff’s Decision in Heppner
The litigation arose after Bradley Heppner, a financial services executive charged with securities fraud, used Claude to prepare for his defense after receiving a grand jury subpoena and retaining counsel. The tool created reports that “outlined defense strategy” and helped him develop arguments based on the facts and the law that the government might use. He later shared those reports with his defense team (who had not directed him to use Claude).
When the FBI raided his home, agents seized the documents. His lawyers tried to reclaim the documents by asserting that they were legally privileged. The government argued that the documents were not protected by either attorney-client privilege or the work product doctrine. In his decision, Judge Rakoff agreed, analyzing each of the two related doctrines in turn.
Attorney-client privilege and work product doctrine are both meant to ensure that people can prepare for proceedings without fear that interactions with lawyers will be used against them. The former protects confidential communications between a client and a lawyer made for the purpose of obtaining legal advice. Rakoff holds, correctly, that there is no attorney-client relationship because Claude expressly disclaims both the provision of legal advice and any fiduciary obligation. Nor does it seem likely an attorney-client relationship could exist between a person and a piece of software (despite calls from OpenAI’s Sam Altman urging lawmakers to codify such a relationship). Under existing doctrine, this alone is enough to find that Heppner’s documents are not protected by privilege.
Work product doctrine protects materials prepared in anticipation of litigation, typically by or at the direction of counsel. The doctrine is meant to safeguard the adversarial process, with legal representatives advocating for clients before a neutral party. In fact, a magistrate judge in the Eastern District of Michigan—on the same day that U.S. v. Heppner was decided—held that a self-represented litigant’s exchanges with ChatGPT were protected by this doctrine. But because Heppner’s counsel conceded they did not direct him to use Claude, the documents did not reflect counsel’s legal strategy at the time they were created. This is also sufficient to conclude that the documents are not protected from disclosure under the work product doctrine.
Both holdings are sound applications of existing law. For example, consider the following two scenarios, neither of which is AI-specific.
First, if Heppner had Googled “securities fraud defense strategies” or “how to respond to a grand jury subpoena,” compiled the results, and later handed them to his lawyer, these search results would not be privileged. Had Heppner written the same material on a legal pad and the FBI seized it from his desk before he handed it to his lawyer, the result would likely be the same. You do not create privilege by researching your legal situation through a public service even if you later share the printout with counsel.
Second, imagine if Heppner had consulted a knowledgeable, non-lawyer friend. If he told this friend about his legal situation and received their thoughts in return, there would again be no attorney-client privilege because that friend is not a lawyer and owes no legal duty of confidentiality.
Together, these non-AI analogs illustrate why courts are unlikely to protect unmediated conversations with AI chatbots.
Where Judge Rakoff Went Wrong
Although he reached the right conclusion, Judge Rakoff decided more than was necessary to resolve this case. Attorney-client privilege requires a communication between a client and his or her attorney that is confidential and made for the purpose of obtaining legal advice. When Rakoff concluded that Heppner was not using the AI tool to communicate with his lawyer, he could and should have ended his analysis there.
Instead, he went on to rule that Heppner’s use of Claude also defeated the confidentiality of his communications. According to Rakoff, because Anthropic’s privacy policy permits data collection, training, and disclosure to “governmental regulatory authorities,” Heppner did not have a reasonable expectation of confidentiality, so he waived privilege when he shared information with Claude. Rakoff treated Anthropic’s privacy policy as dispositive, elevating its formal terms over the spirit of the law of privilege.
Courts already have a framework for evaluating whether sharing documents with a third-party service eliminates confidentiality. For example, when a law firm or company stores privileged documents in the cloud, as long as a cloud-based platform is facilitating legal services and the vendor has taken reasonable steps to maintain confidentiality, use of that platform does not waive privilege.
American Bar Association (ABA) Formal Opinion 477R reflects this as part of its professional responsibility guidance. The ethics opinion instructs lawyers that they may transmit client information electronically, so long as they take reasonable steps to safeguard it—including conducting due diligence on technology vendors and, where appropriate, discussing security risks with clients. More recently, the ABA extended that framework in Formal Opinion 512 to the lawyer’s use of generative AI tools.
Because Google Workspace, Microsoft 365, and many cloud-based legal technology providers have terms similar to Anthropic’s privacy policy, Rakoff’s decision risks sweeping in any document a client prepared using cloud-based software or that was stored online. And by hinging the confidentiality determination on the contents of a company’s privacy policy, the decision is in tension with existing precedent.
In similar cases, courts have focused on user expectations of confidentiality rather than technical features or policy documents that a user may not know about. For example, a landmark Supreme Court of New Jersey ruling explained that an employee who accessed her personal email on a work computer did not waive privilege, notwithstanding the employer’s policy on monitoring internet traffic. In an analogous context, the Second Circuit ruled in 2024 that Google’s terms of service, which advise users of what the company may review, “did not extinguish [defendant’s] reasonable expectation of privacy in his emails as against the government.”
AI tools should be treated the same way as other software tools. If uploading privileged documents to a third-party platform does not automatically eliminate confidentiality, then neither should uploading them to a platform that uses AI or an AI provider as long as the same protective infrastructure is in place. Lawyers’ and clients’ reasonable expectations of confidentiality would not vary between these services, so the law should not either.
The court may have reasoned that AI’s interactive nature distinguishes it from other software providers. There is some textual evidence for this in the opinion, but the opinion is self-contradictory on the point. When Rakoff addresses the professional-relationship element, he treats Claude as a software tool. Yet the framing becomes more interpersonal in the analysis of confidentiality: According to Rakoff, Heppner “communicated with” Claude, “shared” information with it, and disclosed his secrets to “a third-party.”
But elsewhere, Rakoff draws on the language of the Second Circuit to entertain the possibility that Claude could function as a Kovel agent—a non-lawyer professional like an accountant retained by counsel. This treats dialogue with a chatbot as more like a disclosure to a third party than an upload to a cloud service.
Distinguishing between dialogue with a chatbot and documents uploaded to a cloud service ignores the underlying technical reality of these services. When a user uploads a document to a third-party legal technology platform or analyzes that document using an AI tool on the platform, that information can be protected by the same technological measures (and might even pass through the exact same system if the third-party platform is the one hosting the AI model). One potential point of difference with AI tools is that, in some cases, the data is used to train and improve the service. But whether documents being “memorized” in an AI model’s weights during training counts as a disclosure for privilege purposes is an open question that parallels ongoing debates in copyright law. Regardless, a decision that rests the distinction between confidential and nonconfidential communications on the form of the interaction is adopting a fiction that does not reflect the underlying technical reality.
Why This Matters
The practical consequences of the decision become clear when you consider that clients are increasingly using AI to engage with legal matters. A non-English-speaking defendant might use an AI translator to communicate with counsel. Or a client might use AI to organize and summarize her records before a meeting with her lawyer so that their limited time together is productive. After meeting with counsel, a client might use AI to think through the implications of what her lawyer told her—testing arguments, working through factual timelines, and preparing questions for the next session. Moreover, many document and email platforms increasingly incorporate AI tools, so clients may inadvertently share sensitive information with those services when they draft emails or documents to share with their counsel.
Some of these uses might be protected by the work product doctrine if counsel directed them. And self-represented litigants who use AI tools, as one court recently held, are likely protected by the work product doctrine. But the work product doctrine covers only materials prepared in anticipation of litigation or for trial. For the much broader range of situations in which clients consult lawyers—understanding a contract, preparing for a regulatory matter, or organizing documents for a tax question—work product is not available. Attorney-client privilege is the only protection, and it is the one Rakoff’s confidentiality analysis undermines.
Because clients who can afford human intermediaries to facilitate communications with counsel—translators, accountants, consultants—retain privilege, Rakoff’s holding exacerbates the resource divide and entrenches reliance on costly professional intermediaries. Jonah Perlin recently suggested that the best approach for balancing confidential information and the benefits of AI tools was for clients to “avoid using those tools themselves and instead hire lawyers who can use the tools in ways that are more likely to protect their confidential information.”
But the costs that clients have to bear to pay for human equivalents are substantial. Human interpreters for legal settings charge $50–$150 per hour. Legal translation of filings and litigation documents runs $0.20–$0.40 per word. Law firms bill paralegal time for tasks like document organization, research, and case preparation at $100–$200 per hour. And forensic accountants retained for litigation support charge $300–$500 per hour.
AI tools, by contrast, can help clients communicate with their lawyers—translating documents, organizing records, summarizing financial information—and are available for a fraction of those costs.
In other words, those who cannot pay for their lawyers to use AI tools risk losing the benefits. And if they happen to use those tools in the ordinary course, they lose protection, not because of anything they did wrong, but because of an AI provider’s privacy policy. The effect is to tie the scope of confidentiality to a client’s resources and terms of service that few users read and no user negotiates.
What Future Courts Should Do
Rakoff’s first two holdings—that Claude is not a lawyer and that Heppner acted without counsel’s direction—were likely correct as a matter of law. Future courts facing privilege claims about interactions with AI tools would be wise to follow that analysis and stop there.
When courts do need to reach the confidentiality question, however, it should not turn on the AI provider’s terms of service. It should turn on the particular facts of what the defendant did and why: whether the materials were prepared for counsel, whether they reveal privileged communications, and whether the defendant took reasonable steps to maintain their confidentiality. These are fact-specific questions that fit within existing doctrinal frameworks and do not require courts to engage with thorny questions about what AI is or how it is being deployed.
This case also raises broader policy questions about data use and privilege. One concerns the terms of service and the conditions under which AI companies may use or retain user data, which we think is best addressed by consumer protection law and further regulation.
Another question concerns the role of formal guidance from bar associations. Both ABA opinions on the appropriate use of software services focus on the lawyer as the regulated entity. But because it was the client’s use of AI that waived privilege, Heppner falls outside the scope of those opinions. Going forward, the ABA and other bar associations should issue new guidance on whether and how attorneys can direct clients to use AI tools without forfeiting privilege, and lawyers’ obligations should include advising clients on how to use these tools. The alternative, allowing the confidentiality question to be resolved case by case through terms-of-service analysis, is a poor way to develop coherent policy.
Generative AI is a new technology and, like it or not, lawyers and clients are increasingly using it. Under Judge Rakoff’s decision, the scope of a client’s right to communicate with her counsel and prepare a defense in confidence turns on whether she can afford a human intermediary. Courts should not allow the novelty of AI to produce such a result.
