
Apr 9, 2026
What a Federal Court Just Decided About AI and Your Secrets
The Digital Confessional
Reading Time: 7 Minutes
Category: AI for Legal, AI in Education, Future of Work
The digital confessional has no walls, and it is always listening.
In early November 2025, federal agents arrived at Bradley Heppner's home, armed with a search warrant and an indictment for fraud. They seized his electronic devices, as is standard practice in white-collar criminal investigations. But what they found on those devices was far from standard.
Buried in Heppner's files were thirty-one documents generated not by Heppner himself but by Claude, the artificial intelligence platform developed by Anthropic. After learning he was the target of a grand jury investigation, Heppner had turned to the chatbot. Without telling his lawyer, he fed Claude confidential information he had gathered from his legal counsel and asked the machine to outline his defense strategy, anticipating what he might argue regarding the facts and the law. He then shared Claude's outputs with his attorney to help shape his defense.
When the government discovered these files, Heppner's legal team immediately claimed they were protected by attorney-client privilege and the work-product doctrine. They argued that Heppner had used the AI solely for the purpose of communicating with his counsel.
On February 17, 2026, Judge Jed S. Rakoff of the Southern District of New York issued a ruling that sent shockwaves through the legal and corporate worlds. In a decision of first impression nationwide, the court ruled that Heppner's conversations with the AI were not protected. The government was granted full access to the thirty-one documents.
The ruling immediately triggered a flurry of sensational headlines and viral social media posts warning that "using AI waives your legal privilege." But the truth is far more nuanced and far more important for anyone who uses generative AI in their professional life. The Heppner decision does not outlaw AI in legal strategy. Instead, it exposes a fundamental misunderstanding of what artificial intelligence actually is and forces us to reckon with the boundaries of the digital confessional.
The Anatomy of a Privilege Failure
To understand why Heppner lost his claim of privilege, we must first understand what the privilege is designed to protect. Attorney-client privilege is the oldest of the privileges for confidential communications known to the common law. Its purpose is not to hide evidence, but to encourage full and frank communication between attorneys and their clients, ensuring that legal advice is based on complete information.
For the privilege to apply, there must be a confidential communication between a client and an attorney, made for the purpose of obtaining legal advice. According to Judge Rakoff, Heppner's use of Claude failed on nearly every front.
First, the court noted the obvious: Claude is not an attorney. Therefore, a conversation with Claude cannot be an attorney-client communication.
Second, the communications were not confidential. When Heppner typed his legal strategy into Claude, he was interacting with a public, consumer-tier AI platform. The court noted that Anthropic's privacy policy explicitly allows the company to collect user data, train its models on that data, and disclose information to third parties, including government authorities, in connection with litigation. By handing his secrets to a platform with such terms, Heppner destroyed any reasonable expectation of confidentiality.
Third, Heppner could not have been seeking legal advice from Claude, because Claude's own programming disclaims the ability to provide formal legal advice. Heppner was acting on his own initiative, not at his lawyer's direction.
The court also dismissed Heppner's claim under the work-product doctrine, which protects materials prepared in anticipation of litigation. Because Heppner acted alone, without his lawyer's instructions, the documents did not reflect his legal counsel's mental impressions or strategy.
The Illusion of the Private Machine
The Heppner case reveals a profound psychological vulnerability in how we interact with artificial intelligence. We have been conditioned by decades of using word processors and search engines to view our screens as private spaces. When we type into a blank document, we feel we are thinking out loud.
Generative AI platforms mimic this intimacy. They use conversational interfaces, adopt helpful personas, and respond with human-like cadence. It is incredibly easy to treat a chatbot as a sentient sounding board, a digital confidant that exists only in the space between the user and the screen.
But this is an illusion. As the New York State Bar Association noted in its analysis of the case, "Both the inputted information and the AI-generated responses are just as discoverable as a Google search." When you type a prompt into a public AI tool, you are not talking to yourself. You are transmitting data to a third-party corporate server, governed by terms of service that prioritize data harvesting over user privacy.
This distinction is what separates a protected thought process from a catastrophic waiver of privilege. If Heppner had typed his defense strategy into a Microsoft Word document saved on his local hard drive, it would likely have been protected as a draft communication to his lawyer. But because he used a public AI platform that actively ingests user data, he effectively broadcast his defense strategy to a third party.
The Kovel Doctrine and the Enterprise Exception
If the Heppner ruling seems to spell the end of AI in legal matters, a closer reading reveals a very different reality. The decision is not a blanket ban on artificial intelligence. It is a highly specific ruling based on how the tool was used, who directed its use, and the platform's terms of service.
In his opinion, Judge Rakoff left a crucial door open. He noted that if Heppner's attorney had directed him to use Claude, the AI might arguably have functioned as a highly trained professional acting as a lawyer's agent. This references the Kovel doctrine, established in United States v. Kovel, a 1961 Second Circuit decision holding that communications with a non-lawyer accountant could be privileged if the accountant was hired to help the lawyer understand complex financial information. If an attorney directs the use of an AI tool to facilitate legal representation, the communications may remain protected.
Furthermore, the legal community is already drawing a sharp distinction between public consumer AI tools and closed enterprise systems. Public platforms like the free versions of ChatGPT or Claude often reserve the right to train on user data. Enterprise AI systems, however, are governed by strict confidentiality agreements. They do not train on user inputs, and the data remains siloed within the organization's secure environment.
As legal analysts have pointed out, the Heppner court might have ruled very differently if the defendant had used a closed enterprise system that kept the information confidential and inaccessible to the public. The failure was not the use of artificial intelligence; the failure was the use of a public platform with an extractive privacy policy.
The Civil Counterweight: Warner v. Gilbarco
To fully grasp the current legal landscape, we must examine a second case decided on the same day as the Heppner oral ruling. In Warner v. Gilbarco, Inc., a federal magistrate judge in Michigan faced a different scenario. A plaintiff representing herself in an employment discrimination lawsuit used ChatGPT to help draft her legal filings. The corporate defendants demanded access to all of her AI prompts and outputs, arguing that her use of the tool waived any work-product protection.
The Michigan court firmly rejected the demand. The judge ruled that generative AI programs are "tools, not persons," and that entering litigation materials into an AI platform does not constitute disclosure to an adversary. The court recognized that forcing the plaintiff to hand over her AI prompts would expose her internal mental impressions and thought processes, effectively nullifying work-product protection in the modern drafting environment.
The Warner decision serves as a vital counterweight to Heppner. It confirms that when the foundational elements of legal protection are met, the mere involvement of an AI tool does not automatically destroy those protections.
The Architecture of Digital Discretion
The convergence of these two rulings provides a clear roadmap for professionals navigating the AI era. The law is not rejecting artificial intelligence; it is demanding that we treat it with the same rigorous discretion we apply to any other third-party service.
For executives, lawyers, and everyday users, the lessons are immediate and practical. First, we must abandon the illusion that public AI chatbots are private spaces. Sensitive information, whether trade secrets, unannounced mergers, employee grievances, or legal strategies, should never be fed into a consumer-tier AI platform.
Second, organizations must invest in secure, closed-environment enterprise AI systems that contractually guarantee data privacy and prohibit model training on user inputs. The terms of service are no longer just a legal formality; they are the architectural boundaries of your confidentiality.
Finally, we must recognize that artificial intelligence is fundamentally altering the chain of custody for human thought. The Heppner case is a stark reminder that our most private intellectual labor is increasingly mediated by corporate algorithms. As we integrate these powerful tools into our professional lives, we must ensure we are using the machine rather than allowing it to expose us. The digital confessional has no walls, and it is always listening.
References
[1] Guo, E. X. (2026, March 23). United States v. Heppner. Harvard Law Review Blog.