When Bradley Heppner was indicted for securities fraud last fall, he used Anthropic’s Claude to help him prepare documents related to his case. He expected those conversations to stay confidential. A federal judge had different ideas.
On February 17, Manhattan U.S. District Judge Jed Rakoff ordered Heppner — the former chair of bankrupt financial firm GWG Holdings, accused of defrauding investors of more than $150 million — to hand over 31 documents generated through his exchanges with Claude. Heppner had argued the chats deserved attorney-client protection. Rakoff wasn’t buying it. “Because Claude is not an attorney,” he wrote, “that alone disposes of Heppner’s claim of privilege.”
The ruling rippled quickly through the legal industry: more than a dozen major law firms have since issued client advisories. The message is the same old warning, dressed in new clothes: don’t tell anyone but your lawyer about your case. That now explicitly includes AI.
The reasoning is straightforward, even if the implications aren’t. Attorney-client privilege requires a communication between a client and an attorney. An AI chatbot doesn’t qualify. Claude isn’t a lawyer. And unlike an attorney, these platforms have privacy policies that explicitly allow them to share user data with third parties — including, Rakoff noted, “governmental regulatory authorities.”
That makes AI chats essentially fair game in litigation. Prosecutors and opposing counsel can demand them. Courts can order them disclosed. The more sensitive the topic you discussed with your AI assistant, the more valuable that data could be to someone trying to build a case against you.
There’s a wrinkle. On the same day as Rakoff’s ruling, a federal magistrate judge in Michigan reached the opposite conclusion — treating a self-represented woman’s ChatGPT conversations as personal work product that didn’t need to be disclosed. Two judges, same day, opposite results. That split signals that courts haven’t fully sorted out where AI communications fit within existing privilege doctrine, and the question won’t stay open for long.
Law firms aren’t waiting for the courts to catch up. Their guidance is practical: use closed corporate AI systems when possible, explicitly note in prompts that work is being done “at the direction of counsel,” and treat anything you type into a public chatbot as potentially discoverable.
None of this stops people from using AI for legally sensitive work. It just means the conversations aren’t private — which, if you read the terms of service, was always the case.