Here's my question:
If the attorney-client privilege, and more importantly, the work product doctrine don't apply here, would they also not apply to direct conversations between an attorney and an AI?
It seems to me that the court would need to apply some twisted logic to claim that those protections apply to an attorney, but not to a petitioner or respondent.
1. "Conversation" is purely anthropomorphism. It's software input and output. If the client makes an Excel spreadsheet about the cost-benefit of ripping people off, it's not work product.
But the lawyer's draft damages analysis in Excel has always been protected.
2. If we're going to buy the "conversation" conceit, lawyers talking to consulting experts have always had a lot more work product protection than testifying experts.
The lawyer talking to Claude feels like talking to a consulting expert, especially since Claude can't have independent knowledge of facts that would allow it to testify.
The ruling explicitly overrules Shih, thus making exactly that argument:
> Shih, of course, is not binding on this Court, and this Court respectfully disagrees with its holding. As relevant here, the court in Shih principally concluded that the work product doctrine is not limited to materials prepared by or at the direction of an attorney. Id. But that conclusion undermines the policy animating the work product doctrine, which, as one of the cases cited in Shih explains, is "to preserve a zone of privacy in which a lawyer can prepare and develop legal theories and strategy 'with an eye toward litigation.'"
Does that imply that materials produced by the client in conversation with the attorney (e.g. attorney says to client "Ok write here in your own words what happened so I can understand your perspective") are not privileged?
Or would those presumably exist under the umbrella of privacy because they're relevant to the lawyer preparing and developing their legal strategy?
Attorney admitted in NY here. It's fascinating that Judge Rakoff likely would have come to the opposite conclusion if the Claude chat was at the attorney's request or suggestion. I am surprised the court placed so much reliance on the Terms of Service, which are probably not so different than those of Outlook, Gmail, etc., say, yet nobody disputes that attorney-client emails remain privileged notwithstanding the Terms of Service of those providers. At least I have never seen anyone argue in NY that privilege is waived by emailing. And unlike sending an email to another person, chatting with Claude is a solo conversation more like organizing one's notes, which if in contemplation of obtaining legal advice seems privileged to me. I think this is a very close question and am not sure it would come out the same way in other courts or on even slightly different facts. Very interesting legal question.
How is this not effectively a ban on representing yourself in court? The lawyers and judge are going to be using AI. But the layman isn't allowed to use it?
It's no different than if you ask a friend (who is not your lawyer) for advice. You can ask anything you want; it only gets the special protection if it is actually your lawyer.
If lawyers use it, they may have the ability to claim work product exemption, although this itself is going to be dependent on a lot more factors I can't analyze.
This is really the question. Conversely, why would an attorney get to have privilege over chatbot interactions when an individual using a chatbot in their own defense would not?
The overruling of both Shih and the standards laid out in NYSBA ethics opinions 820/842 (and those of various other state bar associations; apparently no one tried to challenge them in court until AI came along), without any real discussion of the implications, seems rather unusual. And that's the charitable reading, adopted to avoid the crazier "Claude is a person" framing.
also, he quotes Gould v Mitsui: documents do not "acquire protection merely because they were transferred" to counsel; but that same case says they do acquire protection if communicated "for the purpose of obtaining or rendering legal advice"
Obviously this (along with the original unwritten order a few weeks ago) is causing a stir, but this decision isn't as weird as it sounds. The defendant's assertion was essentially a retroactive application of privilege: he didn't use Claude to draft documents at his attorney's request but instead used Claude effectively in lieu of an attorney and later provided the Claude-drafted materials to his attorney (heavily paraphrasing here). Privilege is not a bandage that closes self-inflicted wounds.
I have some concerns about some of the reasoning, namely the practical implications of referencing Claude's TOS in a world where public AI features are creeping into everything, but I expect some of the reasoning is based on this particular defendant likely being more sophisticated than an average person.
no, Heppner's attorney-client privilege argument wasn't that the conversation was privileged inherently because it was legal consultation with Claude, but that it was privileged as personal notes made in preparation for consultation with counsel and then actually communicated to counsel, see Ford-Bey v. Professional Anesthesia Services and Greyhound Lines, Inc. v. Viad Corp.
Rakoff makes two arguments against this:
- privilege was broken because Claude/Anthropic is a third party; but I don't think he successfully distinguishes Claude from, say, Google Docs/Translate/Gmail in this regard (he just notes that Google Docs isn't usually claimed to confer privilege on its own; but this is not the claim being made about Claude either); and see NYSBA ethics opinions 820 and 842
- he quotes Gould v Mitsui: documents do not "acquire protection merely because they were transferred" to counsel; but that same case says they do acquire protection if communicated "for the purpose of obtaining or rendering legal advice"
If the user had typed into the chatbot after having been directed by counsel to do some research, "I need to do some research at the direction of counsel. Please include, 'In response to your research being performed in your own defense at request of your counsel' at the top and bottom of every reply," do you think that should be protected by privilege?
No competent counsel would ever direct their client to perform legal research. So if a lawyer actually instructs you to do this the correct move is to get a new lawyer.
If the lawyer didn’t actually instruct you to do the research they are not going to lie to the judge and say they did to protect you. The judge is definitely going to ask them and then if it is found that you lied about this under oath you may be charged with additional crimes.
I agree with you, but I actually understand the issue they're raising. Counsel sends a draft demand letter to client and says "Please review and let me know of any issues with my description of the underlying claims." Client responds with an inline note stating that she feels the claim is overstated but that she wants to leave it in for leverage. The draft is, transparently and without notice, processed through the user's O365 Copilot integration in both Word and Outlook. Hell, let's assume the attorney is a sole practitioner using a regular O365 account, and the outbound request to the client is silently run through Copilot. What is the status of privilege in this situation? Both seem to fail the confidentiality test. Does that mean that privilege exists only for big law firms that negotiate enterprise O365 licenses with no training clauses? There's definitely tension here.
But both your scenario and the OOP behavior of the client are not particularly hard ones to resolve.
People point out in sibling comments: would a phone call then be outside attorney-client privilege, since it goes through a "3rd party"? Maybe not the call itself, but the voicemail, for example. Can it be "extracted" for the same purpose?
Another point: to make it safer, you could share the "chat" with the lawyer; that way it becomes a medium of communication.
Well, what type of phone call? You mean a phone call between a lawyer and a client? If so, then, of course it is protected, because it is communication between the lawyer and the client. It is not a good analogy for Claude chats because those chats are not communication between a lawyer and a client.
The concept of sharing the chat with the lawyer will not work, since as the ruling points out, you cannot turn a non-privileged document into a privileged one by sharing it with your lawyer after the fact.
I don't think it's communication at all. Instead, I think it's a kind of lookup. Dealing with an LLM is searching a database. You are looking up legal texts in order to prepare legal arguments.
I think the principled way of treating this is that it's privileged for the purpose of preparing legal arguments, but not privileged in general. I think this can be supported using the existing law.
Presumably a lawyer's Google searches with terms like "what article is X" etc. are privileged too, since they are used for preparing legal arguments. That it uses AI doesn't suddenly make it communication.
> It is not a good analogy for Claude chats because those chats are not communication between a lawyer and a client.
How is it not? I get that a chatbot is not a person with rights. And NAL.
But for all intents and purposes, it is a communication about legal advice. The way a lot of people use it is legal advice. They will continue to use it that way.
So for the law to then turn around and say that it's evidence that will be used against them is kind of messed up. It means confidentiality of your case is bought by paying a lawyer for legal protection, not because you actually need their advice over a chatbot's.
It's not a communication if only one human person participates in the conversation. That's just enhanced note-taking and generating. I don't agree with the notion that talking to an LLM is disclosure to a third party, because an LLM is neither a natural person nor even an artificial person recognized at law, like a corporation, trust, LLC, etc.
It's not a communication with a lawyer, though. Asking a guy on the street if it's illegal to sell the meth you have in your pocket is not privileged communication, and he could definitely testify about that after you got arrested!
The law has a concept of a "carrier" [1], and has the ability to judge whether or not the carrier in question is responsible for what it is carrying.
I'm not making a blanket statement that that means everything is a carrier, because a good chunk of the page I linked is devoted to endless legal nuances and I defer the details of the concept to those who know better. I'm just saying that the law has a well-established concept for this sort of situation, such that it is not the case that just because a third party is involved instantly all protections dissolve. If you really want to dig into the details, that's something an AI that hits the web and digests things would be pretty good at, as long as you're not planning on legal action based on that. Sometimes the hardest part of learning about something is just finding the term for it that lets you dig in.
> another point to make it safer would be sharing the "chat" with the lawyer, this way it becomes media of communication.
This guy made the same argument, but as the court detailed, this is a misunderstanding of attorney-client privilege. Sharing an unprivileged conversation with your lawyer doesn't make it privileged. A phone call to your lawyer is privileged, but a phone call to your cousin Jimbo about what you should tell your lawyer is not.
The Lavabit case years ago was quite scandalous, things have only gotten worse. There should have been much harsher limits on what companies can be compelled to do.
Although a good lawyer can appeal a board order. What the courts will say is unknown, but there are real constitutional questions about ordering everything.
There is a legal distinction between document retention, which is what OpenAI was ordered to do, and re-architecting a logless provider to generate documents.
I highly recommend everyone actually read the opinion. It's such a thorough legal takedown of Heppner, you'll learn how the law works and why it doesn't apply to a lot of the made up cases in this thread:
TLDR:
- Claude told him IANAL
- Claude's privacy policy says Anthropic "may disclose personal data to third parties in connection with claims, disputes, or litigation"
- The work product doctrine does not apply in the same way to plaintiffs
- The lawyers did not direct him to use Claude (i.e. the lawyers did not direct him to do research for the case using a specific tool)
My takeaway is that, as is, I should not do any work without a VPN or in plaintext. Everything else was up for grabs even before this case.
Is a VPN really going to help here? I guess if you can figure out a way to pay Claude anonymously. But if you are charged with a crime and your computer is seized, and there is some way to discover your Claude account from the contents of your computer, then you will be up a creek either way.
My takeaway is: don't do crime, and if you must do crime, don't use AI in the commission of a crime, in a similar way as it is unwise for criminals to keep recordings of their own phone conversations or what have you (a surprisingly common habit for criminals!).
That's a great takeaway, but may not be practically achievable in the world where
> The average professional in this country wakes up in the morning, goes to work, comes home, eats dinner, and then goes to sleep, unaware that he or she has likely committed several federal crimes that day.
I don’t think very many people charged with federal crimes are actually just innocent bystanders. So even if we grant that people are technically committing three felonies a day (which I don’t) I think the admonition can simply be read “don’t do crimes that a federal prosecutor might actually charge you with.”
I once saw a talk given by a lawyer on exactly this topic. It was a long time ago, unfortunately I won't be able to find it. Anyway, the takeaway is that there are plenty of Federal laws that are written in such a way that there is incredible room for interpretation by prosecutors. Vagueness and overbroad language to the point that indeed they can come up with some kind of crime pretty much any time they want to.
On the other hand, that kind of thing would not only be enough to bring a case. They use that kind of power to enhance their case against people they know are real criminals. Of course, the more the Justice Department becomes captured by bad actors, the less this applies.
Yes, but he's still using it to prepare his legal arguments and to understand the law.
The reason attorney-client communication is privileged is so that people won't interfere in people's preparation of their case, not because the lawyer is magic. The principled thing is for the courts to apply principles like this based on the principle.
According to the ruling’s citations, the purpose of the privilege is to provide protection for the mind of the advocate. If you’re not the advocate and you’re not talking to the advocate the privilege doesn’t apply. Should-bes in this case are imponderable to me but that appears to be what-is.
There is no way that this state of things survives long-term. Rationally, it's really no different than any other tool involved in production of your work product.
They’d have to pass a Senate bill modifying copyright and granting corporate-nonperson status with legal rights to hosted, certified by the bar, registered and renewed AIs only. Otherwise the work that’s markov’d as ‘legal advice’ has no origination of record from a legally-recognized entity and therefore can’t be affirmed to be legal advice (legal advice is not public domain, or else protections would be drastically weakened; and, provided by A to B test fails: no such entity A), and anyone could claim the entirety of their email as protected from discovery by ‘cc’ing AI’ for legal advice on every email for a vacation responder reply emitted by a self-hosted trepanned agent (a corrupted lawyer can still give protected legal advice).
Or, they’d have to assert that content generated by AI on behalf of a user is protected — there’s no way to tell whether it’s legal advice so it all must be treated as such (can’t trust the AI to judge this, given how hallucinatory they are in legal filings!) — at which point AI companies would be refused the right to harvest your AI conversations for further training and profit-extraction (which would subject them to prosecution for, of all things, illegal wiretap under §2511(1)(e)(i), if not others). Google would never allow that to happen, seeing as how that’s literally their entire business.
I fully expect someone to set up the equivalent of HIPAA for legal advice AIs and for that to be found acceptable for instances hosted in protected enclaves, but the big four’s main products aren’t likely to qualify for that until they solve hallucinations and earn back judges’ trust.
(I am not your lawyer, this is not legal advice. Ironically, I wouldn’t have to say this if it was AI writing. Heh.)
It's not "no attorney-client privilege for AI chats" in general.
But a situation where the same would also apply if, instead of going to an chat bot, the person had gone to a random 3rd party non-attorney related person.
As in:
- the documents were not communication between the defendant and their attorney, but between the defendant and the AI
- the AI is not an attorney
- the attorney didn't instruct the defendant to use the AI; the court found the defendant did not communicate with the AI for the purpose of seeking legal counsel
- the communications with the AI (provider) were not confidential, because (a) it's an arbitrary 3rd party and (b) the TOS explicitly excludes usage for legal cases
Still, this isn't a nothingburger, as some of the things the court pointed out could become highly problematic in other contexts. Like the insistence that attorney privilege is fundamentally built on a trusting human relationship, instead of simply a trusting relationship. Or that AI isn't just part of facilitating communication, like a spell checker, a word processor, a voicemail box, or a legal book you look things up in: all potentially 3rd parties, none by themselves communication with a human, but all part of facilitating the communication.
"Judge Rakoff issued an oral ruling that neither the attorney-client privilege nor the work product doctrine protected the AI-generated documents.[12] The decision rests on traditional principles of privilege.
The attorney-client privilege protects (1) communications, (2) among only privileged parties, (3) made for the purpose of providing or obtaining legal advice.[13] Importantly, the protection of the attorney-client privilege is lost if the communication is shared outside of the privileged parties.[14] The party claiming privilege has the burden of showing that confidentiality was maintained.[15] Judge Rakoff stated that the attorney-client privilege did not apply because the communications were shared with a third-party tool that did not maintain confidentiality.[16]
Second, Judge Rakoff held that the work product doctrine did not protect the documents.[17] The work product doctrine protects (1) legal work product, (2) discussing legal strategy, (3) prepared by or at the direction of legal counsel, (4) in anticipation of litigation.[18] Judge Rakoff rejected Heppner's arguments that the work product doctrine could apply because the AI-generated reports did not reflect the legal strategy of Heppner's legal counsel, although they contained theories generated by the client and Claude.[19] Since neither Heppner nor the AI tool is legal counsel, and Heppner was not working at the direction of his legal counsel, the materials were not protected by the work product doctrine. Judge Rakoff noted that the AI tool's disclaimer that users have no expectation of confidentiality also undermined the work product doctrine claim.[20]
[12] Transcript of Pretrial Conference at 6, United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 10, 2026).
[13] See United States v. Mejia, 655 F.3d 126, 132 (2d Cir. 2011).
[14] See In re Six Grand Jury Witnesses, 979 F.2d 939, 943 (2d Cir. 1992).
[15] See In re Grand Jury Subpoenas Dated Mar. 19, 2002 and Aug. 2, 2002, 318 F.3d 379, 384 (2d Cir. 2003).
[16] Tr. at 3, Heppner, No. 25-cr-00503-JSR.
[17] Id. at 6.
[18] See In re Grand Jury Subpoenas, 318 F.3d at 383.
1. No attorney was involved. An AI tool is not a lawyer. It has no law license, owes no duty of loyalty, cannot form an attorney-client relationship, and is not bound by confidentiality obligations or professional responsibility rules. Discussing legal matters with an AI platform is legally no different from talking through your case with a friend.
2. Not for the purpose of obtaining legal advice. Anthropic's own public materials state that Claude follows the principle of choosing the "response that least gives the impression of giving specific legal advice." The tool explicitly disclaims providing legal services. You cannot claim you used a tool for legal advice when the tool itself says it does not provide it. Claude's terms were specifically highlighted by the government, which directly undermined the claim that Heppner was seeking legal advice from the tool.
3. Not confidential. This is the finding with the broadest implications. Anthropic's policy expressly states that user prompts and outputs may be disclosed to "governmental regulatory authorities" and used to train the AI model. Judge Rakoff found there was simply no reasonable expectation of confidentiality. As he put it, the tool "contains a provision that any information inputted is not confidential." This is not unique to Claude. OpenAI's privacy policy contains comparable provisions permitting data use for model training and disclosure in response to legal process.
And the distinction between free and paid plans matters less than many assume. Both Anthropic and OpenAI use conversations from free and individual paid plans (Claude Free, Pro, and Max; ChatGPT Free, Plus, and Pro) for model training by default. Users can opt out, but opting out of training does not eliminate the platforms' rights to disclose data to government authorities or in response to legal process. Only enterprise-tier agreements (ChatGPT Enterprise and Business; Claude's commercial and government plans) exclude user data from training by default and offer contractual confidentiality protections. A $20-per-month subscription does not buy you privilege.
4. Pre-existing documents cannot be retroactively cloaked in privilege. The AI-generated documents were created by Heppner before he transmitted them to counsel. Sending these unprivileged materials to his lawyers after the fact did not retroactively make them privileged.
Implications for waiver of privilege
Heppner fed information he had received from his attorneys into Claude. The government argued, and Judge Rakoff agreed, that sharing privileged communications with a third-party AI platform may constitute a waiver of the privilege over the original attorney-client communications themselves. The privilege belongs to the client, but so does the responsibility to maintain it."
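Purely as an illustration (and not legal advice): the two multi-element tests summarized in the article quoted above are conjunctive checklists, so they can be sketched as boolean functions. The element names below are my own shorthand for the elements the article lists, not the court's language:

```python
# Hedged sketch of the conjunctive tests described in the quoted summary.
# Every element must hold; failing any one defeats the protection.

def attorney_client_privilege(is_communication: bool,
                              only_privileged_parties: bool,
                              for_legal_advice: bool) -> bool:
    """All three elements must hold. Per the ruling, sharing with a
    third-party tool that disclaims confidentiality defeats the
    'only privileged parties' element."""
    return is_communication and only_privileged_parties and for_legal_advice


def work_product_protected(is_legal_work_product: bool,
                           discusses_legal_strategy: bool,
                           by_or_at_direction_of_counsel: bool,
                           in_anticipation_of_litigation: bool) -> bool:
    """All four elements must hold. Per the ruling, a client's solo
    chatbot session fails the 'by or at direction of counsel' element."""
    return (is_legal_work_product and discusses_legal_strategy
            and by_or_at_direction_of_counsel
            and in_anticipation_of_litigation)


# Heppner's Claude chats, on the court's findings: shared with a third
# party, not for the purpose of obtaining legal advice, and not prepared
# at counsel's direction.
print(attorney_client_privilege(True, False, False))   # False
print(work_product_protected(True, True, False, True)) # False
```

The conjunctive structure is why the thread's "what if just one element changes" hypotheticals (a lawyer directing the research, an enterprise plan with confidentiality terms) could plausibly flip the outcome.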
"Privacy policies, including the one on Claude's website, openly inform users how their data is used. However, very few users actually read the fine print on these privacy policies, or even know these policies exist in the first place. It would probably surprise most people to learn that Claude's privacy policy explicitly gives its parent company, Anthropic, the right to disclose a user's data to third parties in connection with legal disputes and litigation."
Definitely not, unless you are acting as your own advocate. Self-hosting does not offer any form of protection. Just like notes you write yourself on your PC, a self-hosted chat could be used as evidence against you.
Heppner's argument was dumb but it opens a field of interesting questions. If I use a document processor (like Google Docs) to compose a message to my attorney, which message itself would be privileged, but I use some sidebar feature of Google Docs/Gemini to clean up a sentence that I thought was clunky, and elsewhere I have, for whatever reason, enabled features that permit Google to use inputs and outputs to train or refine their models, has that destroyed the privilege?
The brief linked above[0] was easy to read. IANAL but in it the author seems to say that online tools fail to meet the confidentiality "test" and explains the ruling in clear language.
Well, hrrm. I thought the document I linked stated fairly clearly that something like Google Docs wasn't safe, regardless of spell-checking feature use. However, following my link again, it seems the referenced post has been edited or something, because when I went to grab a quote I found myself in a different document than I remembered.
I think, in hindsight, I was effectively remarking that these two claims (yours and the court's) don't seem to live in the same world. It's not your responsibility to resolve my confusion, and different parts of the court system can issue edicts that contradict each other, to be resolved later at higher levels of the system. Sorry I didn't respond within the full context you wrote.
Yes, you lose the privilege if your attorney-client communications are not intended to be confidential. If you agree to share those communications with a third party, you don't intend them to be confidential.
But that communication is clearly intended to be confidential. Also isn't having one attorney on a multi-party communication marked confidential sufficient to create privilege?
When I worked at two different FAANG companies, both legal orientation sessions taught this specific scenario as an example of something that's not attorney-client privileged.
If you email your lawyer to ask legal questions, that's privileged communication.
If you just cc a lawyer on a thread while you talk to other people, adding the lawyer doesn't make the conversation privileged or protected.
That is an erosion of the social contract from the early days of SaaS.
The law in the US is based on the expectation of privacy. If companies and the US government repeatedly egregiously share private data in violation of terms of service and the law, then what expectation is there?
25 years ago, I'd say "Checking the 'do not train on my data' button in an Anthropic account would pretty clearly create an expectation of privacy." These days? OpenAI had to send all such data to the New York Times, the government has been illegally wiretapping the whole planet for decades, the US CLOUD Act exists, and companies retroactively change terms of service all the time.
Heck, Meta has been secretly capturing lewd bedroom videos and paying people to watch them, and it barely made the news, just like the allegations the WhatsApp content moderation team made where they claimed they have access to WhatsApp E2EE content (what other content could they be moderating?!?)
What constitutes "sharing with a third party", though? Using a third-party email service like Outlook or Gmail? Using a third-party docs service like Google Docs?
It doesn't seem right that google docs would be privileged, but if you use the fancy spellcheck button, it no longer is.
The onus is on the companies to make this clear, if they aren't willing to tell users the dangers of using their own tools that kinda tells you everything you need to know (they don't care about their customers, only $$$).
Be upset at Google for not taking privacy seriously, they never have and never will.
Right, exactly. It is also too much to expect that if a user enabled the "personalization" button in the Gemini app, for unrelated reasons, they now can't expect to compose a privileged email to their counsel. It's a minefield.
Well, at Google people get legal advice from in house lawyers via Gmail. Are they not sharing that with at least some of the Gmail team (who could read the email)?
Gmail users (correctly and reasonably) do not expect the "gmail team" to read their emails, except using glass-breaking incident response privileges that leave audit trails and trigger review. Users expect that email is private. Anyway, both Google's privacy policy and American jurisprudence segregate things like emails, voice calls, and video calls into a separate "communications" category, while Google's privacy policy treats Google Docs as "other content you create", even though the difference seems immaterial if you know how these systems work.
Google originally declared that they read all emails. That was semi-changed with the Workspace rollout but there is nothing preventing them from reverting to the old policy. They already do it anyway for reminders extracted from email.
Nope. The wiretapping laws precedent is known as ‘minimization’; when a legal tap is obtained of your phone lines, the expectation is that every effort will be taken not to tap attorney-client calls, lest your entire evidence packet get thrown out for failure to do so. That precedent is not automatically transitive to AI just because one thinks it ought to be; telephone lines between human beings are protected both by extensive case law and also actual law; neither yet applies between one human and a third-party corporation offering an AI, especially when at least one major AI is contractually declared in shrinkwrap to be ‘for entertainment use only’.
This is a pretty terrible decision and inconsistent with all sorts of other standards. If I did legal research in Google Docs, it'd be covered. If I went to a law library and took notes, it'd be covered, etc.
Here's my question: If the attorney-client privilege, and more importantly, the work product doctrine don't apply here, would they also not apply to direct conversations between an attorney and an AI?
It seems to me that the court would need to apply some twisted logic to claim that those protections apply to an attorney, but not to a petitioner or respondent.
1. "Conversation" is purely anthropomorphism. It's software input and output. If the client makes an excel spreadsheet about the cost benefit of ripping off people, it's not work-product.
But the lawyer's draft damages analysis in excel has always been protected.
2. If we're going to buy the "conversation" conceit, lawyers talking to consulting experts have always had a lot more work product protection than testifying experts.
The lawyer talking to Claude feels like talking to a consulting expert, especially since Claude can't have independent knowledge of facts that would allow it to testify.
The ruling explicitly overrules Shih, thus making exactly that argument:
> Shih, of course, is not binding on this Court, and this Court respectfully disagrees with its holding. As relevant here, the court in Shih principally concluded that the work product doctrine is not limited to materials prepared by or at the direction of an attorney. Id. But that conclusion undermines the policy animating the work product doctrine, which, as one of the cases cited in Shih explains, is "to preserve a zone of privacy in which a lawyer can prepare and develop legal theories and strategy 'with an eye toward litigation.'"
Does that imply that materials produced by the client in conversation with the attorney (e.g. attorney says to client "Ok write here in your own words what happened so I can understand your perspective") are not privileged?
Or would those presumably exist under the umbrella of privacy because they're relevant to the lawyer preparing and developing their legal strategy?
Attorney admitted in NY here. It's fascinating that Judge Rakoff likely would have come to the opposite conclusion if the Claude chat was at the attorney's request or suggestion. I am surprised the court placed so much reliance on the Terms of Service, which are probably not so different than those of Outlook, Gmail, etc., say, yet nobody disputes that attorney-client emails remain privileged notwithstanding the Terms of Service of those providers. At least I have never seen anyone argue in NY that privilege is waived by emailing. And unlike sending an email to another person, chatting with Claude is a solo conversation more like organizing one's notes, which if in contemplation of obtaining legal advice seems privileged to me. I think this is a very close question and am not sure it would come out the same way in other courts or on even slightly different facts. Very interesting legal question.
How is this not effectively a ban on representing yourself in court? The lawyers and judge are going to be using AI. But the layman isn't allowed to use it?
It's no different than if you ask a friend (who is not your lawyer) for advice. You can ask anything you want; it only gets the special protection if it is actually your lawyer.
I think this means that if lawyers use it, they have also lost confidentiality. That could be a significant issue in a big case.
[Edit: Or maybe not, legally. But they have definitely lost confidentiality in the "corporate secrets" sense, and that may still matter.]
If lawyers use it, they may have the ability to claim work product exemption, although this itself is going to be dependent on a lot more factors I can't analyze.
This is really the question. Conversely, why would an attorney get privilege over chatbot interactions when an individual using a chatbot in their own defense would not?
The overruling of both Shih and the standards laid out in NYSBA ethics opinions 820/842 (and those of various other state bar associations; apparently no one tried to challenge them in court until AI), without any real discussion of the implications, seems rather unusual. And that's a rather charitable reading, adopted to avoid the crazier "Claude is a person" framing.
also, he quotes Gould v Mitsui: documents do not "acquire protection merely because they were transferred" to counsel; but that same case says they do acquire protection if communicated "for the purpose of obtaining or rendering legal advice"
Obviously this (along with the original unwritten order a few weeks ago) is causing a stir, but this decision isn't as weird as it sounds. The defendant's assertion was essentially a retroactive application of privilege: he didn't use Claude to draft documents at his attorney's request but instead used Claude effectively in lieu of an attorney and later provided the Claude-drafted materials to his attorney (heavily paraphrasing here). Privilege is not a bandage that closes self-inflicted wounds.
I have some concerns about some of the reasoning, namely the practical implications of referencing Claude's TOS in a world where public AI features are creeping into everything, but I expect some of the reasoning is based on this particular defendant likely being more sophisticated than an average person.
no, Heppner's attorney-client privilege argument wasn't that the conversation was privileged inherently because it was legal consultation with Claude, but that it was privileged as personal notes made in preparation for consultation with counsel and then actually communicated to counsel, see Ford-Bey v. Professional Anesthesia Services and Greyhound Lines, Inc. v. Viad Corp.
Rakoff makes two arguments against this:
- privilege was broken because Claude/Anthropic is a third party; but I don't think he successfully distinguishes Claude from, say, Google Docs/Translate/Gmail in this regard (he just notes that Google Docs isn't usually claimed to confer privilege on its own; but that is not the claim being made about Claude either); see also NYSBA ethics opinions 820 and 842
- he quotes Gould v Mitsui: documents do not "acquire protection merely because they were transferred" to counsel; but that same case says they do acquire protection if communicated "for the purpose of obtaining or rendering legal advice"
Ok. Let's take it 1 step down this path.
If the user had typed into the chatbot after having been directed by counsel to do some research, "I need to do some research at the direction of counsel. Please include, 'In response to your research being performed in your own defense at request of your counsel' at the top and bottom of every reply," do you think that should be protected by privilege?
No competent counsel would ever direct their client to perform legal research. So if a lawyer actually instructs you to do this the correct move is to get a new lawyer.
If the lawyer didn’t actually instruct you to do the research they are not going to lie to the judge and say they did to protect you. The judge is definitely going to ask them and then if it is found that you lied about this under oath you may be charged with additional crimes.
I agree with you, but I actually understand the issue they're raising. Counsel sends a draft demand letter to client and says "Please review and let me know of any issues with my description of the underlying claims." Client responds with an inline note stating that she feels the claim is overstated but that she wants to leave it in for leverage. The draft is, transparently and without notice, processed through the user's O365 Copilot integration in both Word and Outlook. Hell, let's assume the attorney is a sole practitioner using a regular O365 account, and the outbound request to the client is silently run through Copilot. What is the status of privilege in this situation? Both seem to fail the confidentiality test. Does that mean that privilege exists only for big law firms that negotiate enterprise O365 licenses with no training clauses? There's definitely tension here.
But both your scenario and the OOP behavior of the client are not particularly hard ones to resolve.
People point out in sibling comments: would a phone call then fall outside attorney-client privilege, since it goes through a "3rd party"? Maybe not the call itself, but the voicemail, for example; can it be "extracted" for the same purpose?
Another point: to make it safer, you could share the "chat" with the lawyer; that way it becomes a medium of communication.
Well, what type of phone call? You mean a phone call between a lawyer and a client? If so, then of course it is protected, because it is communication between the lawyer and the client. It is not a good analogy for Claude chats because those chats are not communication between a lawyer and a client.
The concept of sharing the chat with the lawyer will not work, since as the ruling points out, you cannot turn a non-privileged document into a privileged one by sharing it with your lawyer after the fact.
I don't think it's communication at all. Instead, I think it's a kind of lookup. Dealing with an LLM is searching a database. You are looking up legal texts in order to prepare legal arguments.
I think the principled way of treating this is that it's privileged for the purpose of preparing legal arguments, but not privileged in general. I think this can be supported using the existing law.
Presumably a lawyer's Google searches with terms like "what article is X" etc. are privileged too, since they are used for preparing legal arguments. That it uses AI doesn't suddenly make it communication.
> It is not a good analogy for Claude chats because those chats are not communication between a lawyer and a client.
How is it not? I get that a chatbot is not a person with rights. And IANAL.
But for all intents and purposes, it is a communication about legal advice. The way a lot of people use it is legal advice. They will continue to use it that way.
So for the law to then turn around and say that it's evidence that will be used against them is kind of messed up. It means confidentiality of your case is bought by paying a lawyer for legal protection, not because you actually need their advice over a chatbot's.
It's not a communication if only one human person participates in the conversation. That's just enhanced note-taking and generating. I don't agree with the notion that talking to an LLM is disclosure to a third party, because an LLM is neither a natural person nor even an artificial person recognized at law like a corporation, trust, LLC, etc.
It's not a communication with a lawyer, though. Asking a guy on the street if it's illegal to sell the meth you have in your pocket is not privileged communication, and he could definitely testify about that after you got arrested!
Because as you correctly point out the chatbot is not an attorney. Thus no attorney client privilege.
Government declines to make its own ability to build a case, and to use what you do against you, any more difficult. More at 11.
The law has a concept of a "carrier" [1], and has the ability to judge whether or not the carrier in question is responsible for what it is carrying.
I'm not making a blanket statement that that means everything is a carrier, because a good chunk of the page I linked is devoted to endless legal nuances and I defer the details of the concept to those who know better. I'm just saying that the law has a well-established concept for this sort of situation, such that it is not the case that just because a third party is involved instantly all protections dissolve. If you really want to dig into the details, that's something an AI that hits the web and digests things would be pretty good at, as long as you're not planning on legal action based on that. Sometimes the hardest part of learning about something is just finding the term for it that lets you dig in.
[1]: https://en.wikipedia.org/wiki/Common_carrier
> another point to make it safer would be sharing the "chat" with the lawyer, this way it becomes media of communication.
This guy made the same argument, but as the court detailed, this is a misunderstanding of attorney-client privilege. Sharing an unprivileged conversation with your lawyer doesn't make it privileged. A phone call to your lawyer is privileged, but a phone call to your cousin Jimbo about what you should tell your lawyer is not.
Are there any model providers that don't log chats? It seems like a good market opening.
I wonder if anybody has gone all the way and made a darknet LLM service with no logs served only over TOR with XMR payments.
None that operate legally will be able to avoid logging chats when ordered to do so.
For example, OpenAI was required by a US federal judge to log all chats and make them discoverable to lawyers representing The New York Times last year. https://www.businessinsider.com/openai-new-york-times-copyri...
Additionally the company can be gagged by a court from disclosing that the chats are being logged, at least in the USA and the UK.
The Lavabit case years ago was quite scandalous, things have only gotten worse. There should have been much harsher limits on what companies can be compelled to do.
Although a good lawyer can appeal a broad order. What the courts will say is unknown, but there are real constitutional questions about ordering that everything be logged.
There is a legal distinction between document retention, which is what OpenAI was ordered to do, and re-architecting a logless provider so that it generates documents.
strongwall.ai is logless and supports anonymous payments including physical cash.
So use local or Chinese models instead? Got it.
Related:
https://news.ycombinator.com/item?id=47778308 AI ruling prompts warnings from US lawyers: Your chats could be used against you (reuters.com)
~3 hours ago, 43+ comments
https://news.ycombinator.com/item?id=47555642 Be careful: chatting with AI about your case is discoverable (harvardlawreview.org)
~18 days ago, 13 comments
I highly recommend everyone actually read the opinion. It's such a thorough legal takedown of Heppner that you'll learn how the law works and why it doesn't apply to a lot of the made-up cases in this thread:
TLDR:
- Claude told him IANAL
- Claude's privacy policy says Anthropic "may disclose personal data to third parties in connection with claims, disputes, or litigation"
- The work product doctrine does not apply in the same way to plaintiffs
- Lawyers did not direct him to use Claude (i.e. the lawyers did not direct him to do research for the case using a specific tool)
My takeaway is that, as is, I should not do any work without a VPN or in plaintext. Everything else was up for grabs even before this case.
Is a VPN really going to help here? I guess if you can figure out a way to pay Claude anonymously. But if you are charged with a crime and your computer is seized, and there is some way to discover your Claude account from the contents of your computer, then you will be up a creek either way.
My takeaway is: don't do crime, and if you must do crime, don't use AI in the commission of that crime, just as it is unwise for criminals to keep recordings of their own phone conversations or what have you (a surprisingly common habit among criminals!).
That's a great takeaway, but may not be practically achievable in the world where
> The average professional in this country wakes up in the morning, goes to work, comes home, eats dinner, and then goes to sleep, unaware that he or she has likely committed several federal crimes that day.
-- https://www.amazon.com/Three-Felonies-Day-Target-Innocent/dp...
I don’t think very many people charged with federal crimes are actually just innocent bystanders. So even if we grant that people are technically committing three felonies a day (which I don’t) I think the admonition can simply be read “don’t do crimes that a federal prosecutor might actually charge you with.”
That claim by the way, is totally unsubstantiated, and the cases have very questionable applicability to the "average professional".
I once saw a talk given by a lawyer on exactly this topic. It was a long time ago, unfortunately I won't be able to find it. Anyway, the takeaway is that there are plenty of Federal laws that are written in such a way that there is incredible room for interpretation by prosecutors. Vagueness and overbroad language to the point that indeed they can come up with some kind of crime pretty much any time they want to.
On the other hand, that kind of thing alone would not usually be enough to bring a case. They use that kind of power to enhance their case against people they know are real criminals. Of course, the more the Justice Department becomes captured by bad actors, the less this applies.
Yes, but he's still using it to prepare his legal arguments and to understand the law.
The reason attorney-client communication is privileged is so that nobody can interfere with people's preparation of their case, not because the lawyer is magic. The principled thing is for the courts to apply the doctrine in light of that underlying purpose.
According to the ruling’s citations, the purpose of the privilege is to provide protection for the mind of the advocate. If you’re not the advocate and you’re not talking to the advocate the privilege doesn’t apply. Should-bes in this case are imponderable to me but that appears to be what-is.
Use Kovel.
https://law.resource.org/pub/us/case/reporter/F2/296/296.F2d...
What if you pay a lawyer whose entire function is to type your questions into Claude?
There is no way that this state of things survives long-term. Rationally, it's really no different than any other tool involved in production of your work product.
FWIW not all cases have gone the same way, so there is likely to be a higher reckoning on this in multiple countries: https://fingfx.thomsonreuters.com/gfx/legaldocs/mypmyjwdzpr/...
> “Plaintiff, as a pro se litigant, has a right to assert work product protection over such material.”
This just argues that attorneys have this protection, which is true. Typical plaintiffs do not have the same level of protection.
They’d have to pass a Senate bill modifying copyright and granting corporate-nonperson status with legal rights to hosted, certified by the bar, registered and renewed AIs only. Otherwise the work that’s markov’d as ‘legal advice’ has no origination of record from a legally-recognized entity and therefore can’t be affirmed to be legal advice (legal advice is not public domain, or else protections would be drastically weakened; and, provided by A to B test fails: no such entity A), and anyone could claim the entirety of their email as protected from discovery by ‘cc’ing AI’ for legal advice on every email for a vacation responder reply emitted by a self-hosted trepanned agent (a corrupted lawyer can still give protected legal advice).
Or, they’d have to assert that content generated by AI on behalf of a user is protected — there’s no way to tell whether it’s legal advice so it all must be treated as such (can’t trust the AI to judge this, given how hallucinatory they are in legal filings!) — at which point AI companies would be refused the right to harvest your AI conversations for further training and profit-extraction (which would subject them to prosecution for, of all things, illegal wiretap under §2511(1)(e)(i) if not others). Google would never allow that to happen, seeing as how that’s literally their entire business.
I fully expect someone to set up the equivalent of HIPAA for legal advice AIs and for that to be found acceptable for instances hosted in protected enclaves, but the big four’s main products aren’t likely to qualify for that until they solve hallucinations and earn back judges’ trust.
(I am not your lawyer, this is not legal advice. Ironically, I wouldn’t have to say this if it was AI writing. Heh.)
Previously: https://news.ycombinator.com/item?id=47555642
The headline is a bit misleading.
It's not "no attorney-client privilege for AI chats" in general.
But a situation where the same would also apply if, instead of going to a chatbot, the person had gone to a random third party unrelated to their attorney.
As in:
- the documents were not communication between the defendant and their attorney, but between the defendant and the AI
- the AI is not an attorney
- the attorney didn't instruct the defendant to use the AI / the court found the defendant did not communicate with the AI for the purpose of seeking legal counsel
- the communications with the AI (provider) were not confidential because (a) it's an arbitrary third party and (b) the TOS explicitly excludes usage for legal cases
Still, this isn't a nothing burger, as some of the things the court pointed out can become highly problematic in other contexts. Like the insistence that attorney privilege is fundamentally built on a trusting human relationship, rather than just a trusting relationship. Or that AI isn't merely part of facilitating communication, like a spell checker, a word processor, a voicemail box, or a legal book you look things up in: all potentially third parties, none of them by themselves communication with a human, but all part of facilitating the communication.
"Judge Rakoff issued an oral ruling that neither the attorney-client privilege nor the work product doctrine protected the AI-generated documents.[12] The decision rests on traditional principles of privilege.
The attorney-client privilege protects (1) communications, (2) among only privileged parties, (3) made for the purpose of providing or obtaining legal advice.[13] Importantly, the protection of the attorney-client privilege is lost if the communication is shared outside of the privileged parties.[14] The party claiming privilege has the burden of showing that confidentiality was maintained.[15] Judge Rakoff stated that the attorney-client privilege did not apply because the communications were shared with a third-party tool that did not maintain confidentiality.[16]
Second, Judge Rakoff held that the work product doctrine did not protect the documents.[17] The work product doctrine protects (1) legal work product, (2) discussing legal strategy, (3) prepared by or at the direction of legal counsel, (4) in anticipation of litigation.[18] Judge Rakoff rejected Heppner's arguments that the work product doctrine could apply because the AI-generated reports did not reflect the legal strategy of Heppner's legal counsel, although they contained theories generated by the client and Claude.[19] Since neither Heppner nor the AI tool are legal counsel, and Heppner was not working at the direction of Heppner's legal counsel, the materials were not protected by the work product doctrine. Judge Rakoff noted that the AI tool's disclaimer that users have no expectation of confidentiality also undermined the work product doctrine claim.[20]
[12] Transcript of Pretrial Conference at 6, United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 10, 2026).
[13] See United States v. Mejia, 655 F.3d 126, 132 (2d Cir. 2011).
[14] See In re Six Grand Jury Witnesses, 979 F.2d 939, 943 (2d Cir. 1992).
[15] See In re Grand Jury Subpoenas Dated Mar. 19, 2002 and Aug. 2, 2002, 318 F.3d 379, 384 (2d Cir. 2003).
[16] Tr. at 3, Heppner, No. 25-cr-00503-JSR.
[17] Id. at 6.
[18] See In re Grand Jury Subpoenas, 318 F.3d at 383.
[19] Tr. at 5, Heppner, No. 25-cr-00503-JSR.
[20] Id. at 6."
https://www.debevoise.com/-/media/files/insights/publication...
"Reasons Privilege Failed
1
No attorney was involved. An AI tool is not a lawyer. It has no law license, owes no duty of loyalty, cannot form an attorney-client relationship, and is not bound by confidentiality obligations or professional responsibility rules. Discussing legal matters with an AI platform is legally no different from talking through your case with a friend.
2
Not for the purpose of obtaining legal advice. Anthropic's own public materials state that Claude follows the principle of choosing the "response that least gives the impression of giving specific legal advice." The tool explicitly disclaims providing legal services. You cannot claim you used a tool for legal advice when the tool itself says it does not provide it. Claude's terms were specifically highlighted by the government, which directly undermined the claim that Heppner was seeking legal advice from the tool.
3
Not confidential. This is the finding with the broadest implications. Anthropic's policy expressly states that user prompts and outputs may be disclosed to "governmental regulatory authorities" and used to train the AI model. Judge Rakoff found there was simply no reasonable expectation of confidentiality. As he put it, the tool "contains a provision that any information inputted is not confidential." This is not unique to Claude. OpenAI's privacy policy contains comparable provisions permitting data use for model training and disclosure in response to legal process.
And the distinction between free and paid plans matters less than many assume. Both Anthropic and OpenAI use conversations from free and individual paid plans (Claude Free, Pro, and Max; ChatGPT Free, Plus, and Pro) for model training by default. Users can opt out, but opting out of training does not eliminate the platforms' rights to disclose data to government authorities or in response to legal process. Only enterprise-tier agreements (ChatGPT Enterprise and Business; Claude's commercial and government plans) exclude user data from training by default and offer contractual confidentiality protections. A $20-per-month subscription does not buy you privilege.
4
Pre-existing documents cannot be retroactively cloaked in privilege. The AI-generated documents were created by Heppner before he transmitted them to counsel. Sending these unprivileged materials to his lawyers after the fact did not retroactively make them privileged.
Implications for waiver of privilege
Heppner fed information he had received from his attorneys into Claude. The government argued, and Judge Rakoff agreed, that sharing privileged communications with a third-party AI platform may constitute a waiver of the privilege over the original attorney-client communications themselves. The privilege belongs to the client, but so does the responsibility to maintain it."
https://natlawreview.com/article/your-ai-conversations-are-n...
"Privacy policies, including the one on Claude's website, openly inform users how their data is used. However, very few users actually read the fine print on these privacy policies, or even know these policies exist in the first place. It would probably surprise most people to learn that Claude's privacy policy explicitly gives its parent company, Anthropic, the right to disclose a user's data to third parties in connection with legal disputes and litigation."
https://nysba.org/loose-ai-prompts-sink-ships-how-heppner-sh...
I'm not surprised at all. Corporate LLM chats are saved, used as training corpus, and are definite target for discovery.
Running your own LLM on your own hardware is how you can do this without getting hit with discovery.
Also, you want to run an LLM that's abliterated and larger. And if you connect to the internet, USE A VPN.
I'm guessing a self-hosted chat remains privileged?
Definitely not, unless you are acting as your own advocate. Self-hosting does not offer any form of protection. Just like notes you write yourself on your PC, a self-hosted chat could be used as evidence against you.
Heppner's argument was dumb but it opens a field of interesting questions. If I use a document processor (like Google Docs) to compose a message to my attorney, which message itself would be privileged, but I use some sidebar feature of Google Docs/Gemini to clean up a sentence that I thought was clunky, and elsewhere I have, for whatever reason, enabled features that permit Google to use inputs and outputs to train or refine their models, has that destroyed the privilege?
The brief linked above[0] was easy to read. IANAL but in it the author seems to say that online tools fail to meet the confidentiality "test" and explains the ruling in clear language.
[0] https://news.ycombinator.com/item?id=47779377
I don't know why you think I did not read it. My remark is an application of the 3-point test in the decision to another system.
Well, hrrm. I thought that because the document I linked seemed to state clearly that something like Google Docs wasn't safe, regardless of spell-checking feature use. However, following my link again, it seems that the referenced post has been edited or something, because when I went to grab a quote, I found myself in a different document than I remember.
I think in hindsight I was remarking, effectively, that these two claims (yours and the court's) don't seem to live in the same world. It's not your responsibility to resolve my confusion, and different parts of the court system can issue edicts that contradict each other, to be resolved later at higher levels of the system, so... Sorry I didn't respond within the full context you wrote.
Yes, you lose the privilege if your attorney-client communications are not intended to be confidential. If you agree to share those communications with a third party, you don't intend them to be confidential.
But that communication is clearly intended to be confidential. Also isn't having one attorney on a multi-party communication marked confidential sufficient to create privilege?
When I worked at two different FAANG companies, both legal orientation sessions taught this specific scenario as an example of something that's not attorney-client privileged.
If you email your lawyer to ask legal questions, that's privileged communication.
If you just cc a lawyer on a thread while you talk to other people, adding the lawyer doesn't make the conversation privileged or protected.
That is an erosion of the social contract from the early days of SaaS.
The law in the US is based on the expectation of privacy. If companies and the US government repeatedly egregiously share private data in violation of terms of service and the law, then what expectation is there?
25 years ago, I'd say "Checking the 'do not train on my data' button in an Anthropic account would pretty clearly create an expectation of privacy." These days? OpenAI had to send all such data to the New York Times, the government has been illegally wiretapping the whole planet for decades, the US CLOUD Act exists, and companies retroactively change terms of service all the time.
Heck, Meta has been secretly capturing lewd bedroom videos and paying people to watch them, and it barely made the news, just like the allegations the WhatsApp content moderation team made where they claimed they have access to WhatsApp E2EE content (what other content could they be moderating?!?)
What constitutes "Sharing with a third party" though? Using a 3rd party email service like outlook or gmail? Using a third party docs service like google docs?
It doesn't seem right that google docs would be privileged, but if you use the fancy spellcheck button, it no longer is.
The onus is on the companies to make this clear, if they aren't willing to tell users the dangers of using their own tools that kinda tells you everything you need to know (they don't care about their customers, only $$$).
Be upset at Google for not taking privacy seriously, they never have and never will.
Right, exactly. It is also too much to expect that if a user enabled the "personalization" button in the Gemini app, for unrelated reasons, they now can't expect to compose a privileged email to their counsel. It's a minefield.
Well, at Google people get legal advice from in house lawyers via Gmail. Are they not sharing that with at least some of the Gmail team (who could read the email)?
Gmail users (correctly and reasonably) do not expect the "gmail team" to read their emails, except using glass-breaking incident response privileges that leave audit trails and trigger review. Users expect that email is private. Anyway, both Google's privacy policy and American jurisprudence segregate things like emails, voice calls, and video calls into a separate "communications" category, while Google's privacy policy treats Google Docs as "other content you create", even though the difference seems immaterial if you know how these systems work.
Google originally declared that they read all emails. That was semi-changed with the Workspace rollout but there is nothing preventing them from reverting to the old policy. They already do it anyway for reminders extracted from email.
No normal person believes that systems delivering and classifying messages amounts to "gmail team reads my emails".
Right so calling my attorney is the same since I'm sharing the call with the phone company.
Nope. The wiretap-law precedent is known as ‘minimization’: when a legal tap is obtained on your phone lines, the expectation is that every effort will be taken not to tap attorney-client calls, lest your entire evidence packet get thrown out for failure to do so. That precedent is not automatically transitive to AI just because one thinks it ought to be; telephone lines between human beings are protected both by extensive case law and by actual statute; neither yet applies between one human and a third-party corporation offering an AI, especially when at least one major AI is contractually declared in shrinkwrap to be ‘for entertainment use only’.
> maybe not the call itself but the voicemail for example. can it be "extracted"?
> another point to make it safer would be sharing the "chat" with the lawyer, this way it becomes media of communication
I don't think that hot take will survive much contact with the near future, at least not without a good deal of controversy.
What about email?
This is a pretty terrible decision and inconsistent with all sorts of other standards. If I did legal research in Google Docs, it'd be covered. If I went to a law library and took notes, it'd be covered, etc.
Chatting with Claude strikes me as fundamentally different from writing your own notes.
tl;dr Don’t be arrogant, get an attorney so you can enjoy attorney-client privileges. An LLM isn’t an attorney.