Three Courts, No Consensus: The Evolving Privilege Landscape for GenAI-Generated Legal Materials
Quick Read Summary
Three recent federal court decisions have created significant uncertainty about whether materials created using generative AI tools are protected by attorney-client privilege or the work product doctrine. In United States v. Heppner, the court held that a party’s use of a consumer AI platform without attorney direction vitiated any privilege protection. However, in Warner v. Gilbarco, Inc. and Morgan v. V2X Inc., courts reached more protective conclusions, holding that AI tools are instruments, not third parties, and that using them does not automatically waive work product protection. These conflicting rulings highlight immediate risks for corporate entities and employees who use AI to research legal issues or prepare litigation-related materials.
Despite divergent conclusions in some respects, these cases taken together suggest that entities and employees should consider:
- Using enterprise AI platforms with contractual confidentiality protections when handling privileged or sensitive information; consumer-grade tools present heightened risk.
- Ensuring attorney involvement and documentation for any AI-assisted legal work to strengthen privilege claims.
- Adopting clear internal policies governing AI use in legal, regulatory, and litigation contexts, and training employees accordingly.
- Anticipating discovery requests targeting AI use and being prepared to assert work product protection where appropriate.
What Happened?
In February, we published an alert analyzing the Southern District of New York’s decision in United States v. Heppner, in which Judge Jed Rakoff held that a criminal defendant’s exchanges with the consumer version of the generative AI tool Claude were protected by neither the attorney-client privilege nor the work product doctrine under the circumstances presented there. That ruling drew national attention and prompted urgent questions about the risks of using GenAI tools in connection with legal matters. Since then, two additional federal court decisions have weighed in on the same core questions and reached markedly different conclusions. This follow-up alert summarizes those developments, explains what the emerging split means for employers, and provides updated practical guidance.
Developing Case Law: A Three-Decision Snapshot
United States v. Heppner (S.D.N.Y., February 17, 2026)
As we described in our prior alert, the defendant used a consumer AI platform, without counsel’s direction, to research legal issues and draft defense reports after receiving a grand jury subpoena, later sharing these materials with his attorneys. The court denied privilege because the platform was not an attorney, its privacy policy permitted disclosure to third parties (including government authorities), and the defendant therefore had no reasonable expectation of confidentiality. The court also rejected the application of the work product doctrine, finding that the materials were not prepared by or at counsel’s direction and did not reflect counsel’s mental impressions.
Warner v. Gilbarco, Inc. (E.D. Mich., February 10, 2026)
One week before Heppner, a magistrate judge in the Eastern District of Michigan reached a different conclusion. In Warner v. Gilbarco, an employment discrimination case, defendants moved to compel a pro se plaintiff to produce materials reflecting her AI platform use in the litigation. The court denied the motion, holding that the plaintiff’s AI-assisted materials were protected work product prepared in anticipation of litigation. Critically, the court rejected the waiver argument, reasoning that “ChatGPT (and other generative AI programs) are tools, not persons.” Waiver requires disclosure to an adversary or in a manner likely to reach one, neither of which occurs when using an AI platform. The court characterized the motion as “a fishing expedition” unsupported by case law.
Morgan v. V2X Inc. (D. Colo., March 30, 2026)
The most recent decision, Morgan v. V2X Inc. (D. Colo.), involved a pro se plaintiff in an employment discrimination case. The defendant sought to amend the protective order with AI-specific restrictions and compel disclosure of which AI platform Morgan used. Morgan argued his tool selection was protected work product. The court granted the motion in part.
The court held that FRCP 26(b)(3) protects work product prepared by pro se litigants. It distinguished Heppner as a criminal case not governed by the Federal Rules of Civil Procedure and noted the absence of a party-attorney gap when a pro se litigant serves as both.
On confidentiality, the Morgan court rejected Heppner’s reasoning, holding that an intermediary’s access to data does not automatically waive protections. However, the court held that the identity of the AI tool was not protected and ordered its disclosure. It also crafted an AI-specific protective order: no party may input confidential information into an AI platform unless the provider contractually prohibits using inputs for training or disclosing them to third parties.
The Decisions Mean No Uniform Rule for Corporate Entities and Employers, but Key Patterns Are Emerging
These three rulings confirm there is no consensus on privilege and work product protections for GenAI materials. Heppner treated consumer AI as a third party whose terms destroyed confidentiality; Warner and Morgan viewed AI tools as ordinary drafting instruments. Outcomes turned on specific facts: who used the tool, under whose direction, in what proceeding, and under what terms of service.
Though the facts and circumstances of each case differ, certain patterns are emerging. All three courts applied traditional doctrines without AI-specific modifications and agreed that AI use alone does not automatically waive work product protection. An emerging disagreement is whether attorney direction is required: Heppner says yes for a represented party’s unilateral AI use; Warner and Morgan say no for pro se litigants. Neither civil court addressed what happens when a represented party uses AI without counsel’s direction, a significant open question.
Updated Practical Takeaways
In light of these developments, corporate entities and employers should consider:
- Defaulting to enterprise GenAI platforms. When privileged or sensitive information is involved, use only enterprise AI solutions that contractually prohibit training on user inputs and restrict third-party disclosure. The consumer-versus-enterprise distinction was central to Heppner’s reasoning and may be outcome-determinative.
- Ensuring counsel directs and documents AI-assisted legal work. The Heppner court suggested the outcome might have differed had counsel directed the AI use. Any GenAI use related to litigation or legal advice should be initiated and supervised by counsel, with that direction memorialized in writing.
- Adopting or updating internal AI use policies. Policies should specify approved AI tools, prohibit inputting privileged or confidential information into consumer platforms, and require employees to consult legal counsel before using AI to analyze legal exposure.
- Training employees broadly, not just lawyers. As Heppner illustrates, privilege risk arises from employee conduct, not just attorney conduct. Employees involved in litigation, investigations, or regulatory matters need training on how AI use affects privilege.
- Reviewing and updating privilege log practices. Logs should articulate the basis for any privilege or work product assertion involving AI-generated materials, confirm counsel’s involvement, and document that the tool was used with an expectation of confidentiality.
- Understanding that AI tool selection is now a litigation decision. Under Morgan, the identity of the AI tool may need to be disclosed and is not itself protected work product. Whether a platform is enterprise or consumer-grade, and its terms of service regarding confidentiality, may determine whether privilege attaches. If Morgan’s protective order provisions become standard, vendors will need to provide contractual assurances that they do not use inputs for training, do not disclose them to third parties, and can delete data on request.
- Anticipating discovery requests targeting AI use. Opposing counsel may seek discovery of GenAI prompts, outputs, and activity logs. While Warner and Morgan suggest such requests may be denied, Heppner shows courts may order production where privilege elements are unsatisfied. Be prepared to respond and, where appropriate, seek protective orders.
Looking Ahead
The rapid pace of these decisions – three rulings in less than two months – signals that courts will continue to confront these questions with increasing frequency as GenAI becomes embedded in legal workflows. At least for now, courts are not creating new privilege rules for AI; they are applying longstanding principles to new technology, and the outcomes depend heavily on the facts.
Whether future courts align with Heppner’s more restrictive approach or with the more protective reasoning of Warner and Morgan may depend on additional factors that have not yet been fully tested, including the use of enterprise platforms with contractual confidentiality protections, the role of attorney direction in AI-assisted workflows, and the application of these principles to represented parties in civil litigation (a scenario none of the three decisions squarely addresses).
Until a consensus emerges, employers should consider erring on the side of caution by assuming that GenAI interactions may be discoverable, involving counsel early, and building a record that supports privilege at every step.
Questions?
If you have questions about how these rulings may affect your organization’s use of GenAI tools, your litigation strategy, or your employment practices, please contact your Mintz client service team.