Under English law, legal privilege protects certain communications from being disclosed to third parties. There are two main types:
- Legal advice privilege: confidential communications between a lawyer and client for the purposes of giving or receiving legal advice.
- Litigation privilege: confidential communications with third parties, where the dominant purpose is obtaining information or advice for use in existing or ‘reasonably contemplated’ litigation.
Confidentiality is a crucial element for both forms of legal privilege. If confidentiality is lost, privilege cannot arise. This protection is critical in disputes, where an opposing party is not entitled to see privileged material, even if it is relevant.
Legal privilege can be waived, whether by choice or by accident, and once waived the protection may be lost. The previously confidential communications then become ‘disclosable’ and can be requested by third parties, such as an opponent, a court or a regulator. They may then be relied on against you.
As AI becomes mainstream, the question arises as to whether privilege (in either form) attaches to advice generated by AI. Recent US and UK decisions have begun to clarify the position.
The US warning: US v Heppner[1]
In February 2026, the United States District Court for the Southern District of New York considered whether material created using a public AI platform (in this case, Claude) could attract “attorney-client privilege” or was protected by the “work product doctrine”.
The defendant had used Claude to generate reports setting out his defence strategy. He did so independently, without the involvement or direction of legal counsel. The claimant sought disclosure of relevant documents.
The Court rejected the defendant’s privilege claim, and while not binding in England and Wales, the Court’s findings provide important insight:
- Claude was not Heppner’s lawyer, lacking the “trusting human relationship” with “a licensed professional who owes fiduciary duties and is subject to discipline.”
- Heppner’s communications with Claude were not confidential. Claude’s terms allowed for data input and output to be retained, used for “training” purposes, and for disclosure to third parties, including “governmental regulatory authorities”.
- Only material and advice prepared by, or at the direction of, qualified lawyers is protected by legal advice privilege, and Claude’s terms specifically disclaimed the provision of legal advice.
- The “work product” doctrine requires attorney involvement: a client’s independent research, even where litigation is reasonably anticipated, would not be protected absent direction by an attorney. While the “work product” doctrine is similar to litigation privilege in England and Wales, it does not mirror it exactly.
The position under English law
The first English authority to touch on privilege in the AI context came from the Upper Tribunal in Munir v Secretary of State for the Home Department,[2] a decision primarily concerned with lawyers’ reliance on AI “hallucinations”, but which also addressed the privilege risks associated with uploading confidential material to AI platforms.
The Tribunal observed that inputting client material into an open AI tool (such as ChatGPT) effectively places that information into the public domain. This breaches confidentiality and prevents privilege from arising. It added that any regulated professional who does so should notify their regulator and consider consulting the Information Commissioner’s Office.
The Tribunal also noted that closed-source AI platforms that do not make data public do not carry the same confidentiality risks.
Where the risk arises in practice
Using a public AI platform to draft, analyse or summarise information carries a significant risk to confidentiality. Once confidentiality is lost at the point of creation, privilege cannot later arise, even if the material is subsequently provided to lawyers or becomes part of a dispute process.
A closed-source AI platform may reduce that risk because data is not made publicly available. However, even where confidentiality is preserved, the issue remains that AI is not a lawyer. As seen in Heppner, documents created by individuals using AI tools without any involvement or direction from a lawyer are unlikely to attract either legal advice privilege or litigation privilege. While this is not binding in England and Wales, the reasoning aligns with the structure of legal privilege under English law.
This means that if AI is used to obtain “legal advice” in circumstances where litigation is not contemplated, the resulting material may be disclosable. Where litigation is contemplated, the position is not yet clear from the English courts but, based on Heppner, privilege may also not apply to any material unless a lawyer has directed that it is to be produced by the client.
We are also increasingly receiving draft documents that clients have produced using AI platforms. While later communications with us are privileged, the original draft may not be, particularly if generated on a public AI platform. In those circumstances, privilege may not be available over the original draft, regardless of how it is later used.
What you can do now
To minimise the risk of losing confidentiality or legal privilege when using AI in your business, consider taking the following steps:
- As a minimum, use secure, closed-source AI platforms to avoid any waiver of confidentiality, which is a key pillar of legal privilege.
- Where the purpose is legal advice, involve your lawyer directly.
- Put clear guidance in place for employees on when AI tools can be used, what types of information can be entered, and when to escalate to legal departments.
- If a document has been produced on a public platform, assume the initial draft may not be privileged. Discuss with your legal team how to handle earlier versions before circulating them more widely.
Conclusion: weigh up the risks before using AI
AI offers enormous benefits in speed and efficiency, but those advantages must be weighed carefully against the risks that AI-generated material could later become disclosable. That assessment needs to take place before entering a prompt or uploading a document, especially where the AI platform is public. It is not a one-off decision; it needs to be repeated every time the use of an AI tool is considered.
We are already seeing disputes over the existence of AI-generated documents, and requests for searches of AI platforms alongside traditional sources of disclosure. This is likely to become routine, just as disclosure expanded from paper files to emails, and then to text messages and WhatsApp. The trend is moving in one direction. For that reason, ensuring that teams understand the privilege and confidentiality risks associated with open-source AI tools should now be a priority.
If you would like to know more, or want advice in respect of the issues raised in this article, please contact Abigail Brown and Charlotte Bolton in our Commercial & Regulatory Disputes team.
[1] 1_25-cr-503-27-memorandum.pdf
[2] Munir, R (On the Application Of) v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026] UKUT 81 (IAC) (17 November 2025)