Why AI Exclusions Are Appearing in Cyber and E&O Policies

Key Takeaways

  • Cyber and professional indemnity wordings were not drafted with autonomous AI in mind. Insurers are now clarifying that in writing.
  • The first AI exclusion clauses appeared in London market wordings in 2024 and are now spreading to European retail lines.
  • The exclusion language typically references artificial intelligence broadly and can remove cover for generative and agent-based activity alike.
  • A renewal that includes an AI exclusion should be read alongside an inventory of the AI workloads running in the business.
  • Dedicated AI agent liability products are the corrective instrument. They do not overlap with cyber or E&O; they address a separate class of loss.

One of the clearer signals that the European insurance market is about to reposition around autonomous AI is not what insurers are starting to sell. It is what they are starting to refuse to cover. Cyber and professional indemnity wordings across the London market and the European retail lines are being amended in 2025 and 2026 to exclude losses arising from the development, training, deployment, or operation of artificial intelligence systems. The exclusion language is new, the scope is broad, and the consequences for risk managers are immediate.

This piece explains why the exclusions are appearing, what they actually say, how to read a renewal that includes one, and what the corrective instrument looks like.

Why the exclusions exist

The short answer is that the underwriting models behind cyber and professional indemnity policies were not built for autonomous AI. Cyber cover assumes that harm is caused by an unauthorised third party breaking into a system. The insured perimeter is the network boundary. Professional indemnity cover assumes that harm is caused by the negligent act of a professional. The insured person is a named practitioner or firm. Technology errors and omissions cover assumes that harm is caused by software supplied to a customer and that the insured relationship is a contract.

An AI agent, operating inside the insured's own business and taking action on its own authority, fits none of those mental models. The insurer cannot readily decide whether a loss was caused by a cyber event, a professional act, or a software defect, because the agent is all three and none. Without a clear cause-of-loss framework, an insurer cannot price the risk. Without pricing, an insurer has two options. The first is to withdraw from the risk silently by loading premium. The second is to address the gap explicitly by excluding it. In 2025, the second path became the common one.

What the exclusion clauses actually say

There is no single AI exclusion wording. Clauses range from a single sentence to half a page. A representative structure reads:

"This policy does not cover any claim or loss arising directly or indirectly from, or in connection with, the development, training, deployment, operation, or output of any artificial intelligence system, including any generative AI model, agent, or automated decision-making system."

Some wordings reference the EU AI Act directly and use its definition of an AI system. Some reference the OECD definition. Some draft their own working definition that covers only generative AI. Some name particular classes of agent such as automated trading systems, autonomous procurement agents, or customer-facing chat agents. A few include carve-backs for legacy rules-based automation so that existing process automation is preserved.

The practical question for a buyer is whether the exclusion reaches everything the business is already doing with AI, or only part of it. The answer almost always depends on two definitions. The first is the definition of an AI system in the policy. The second is the definition of the insured event or activity that was previously covered. The intersection of those two definitions is the space where the exclusion bites.
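The intersection test described above can be sketched as a set operation. This is a minimal illustration only: the activity labels and both definition sets are invented for the example, not drawn from any real policy wording.

```python
# Hypothetical sketch: the exclusion bites where the policy's definition of an
# "AI system" overlaps the activities the old wording responded to. All
# category names below are illustrative placeholders.

ai_system_definition = {           # activities caught by the AI definition
    "generative_content",
    "autonomous_trading",
    "automated_credit_decisioning",
    "rules_based_workflow",        # may be carved back in some wordings
}

previously_covered = {             # activities covered under the prior wording
    "automated_credit_decisioning",
    "autonomous_trading",
    "ransomware_on_infrastructure",
}

# The intersection is the space where cover is actually removed.
exclusion_bite = ai_system_definition & previously_covered
print(sorted(exclusion_bite))
# ['automated_credit_decisioning', 'autonomous_trading']
```

Anything in the old cover but outside the AI definition, such as a classical ransomware event, is untouched by the exclusion in this toy model.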

The most common exclusion patterns in 2026

The broad carve-out

The simplest pattern is a broad carve-out that removes any claim arising from AI activity in the business. It is fast to draft and easy for an underwriter to defend. It is also blunt. A broad carve-out will usually remove cover for losses that the buyer would have expected under the old wording, such as a data leakage event where the intermediate cause is an agent but the ultimate cause is a classical cyber breach. Disputes between insured and insurer over where the loss attaches are predictable.

The output exclusion

A narrower pattern focuses on the output of an AI system. It removes cover for losses arising from reliance on, publication of, or action taken in response to AI-generated content. It leaves in place cover for classical events that happen around the AI system, such as a ransomware attack on the infrastructure that runs the agent. It is more surgical but it requires the claims team to distinguish output from infrastructure at the moment of loss, which is not always possible.

The autonomous action exclusion

A still narrower pattern targets autonomous action specifically. It excludes losses arising from transactions, decisions, or commitments executed by an AI agent without human approval. This pattern is closer to the new AI agent liability products in its mental model. It recognises that autonomous action is the genuinely new risk and tries to push it out of cyber and E&O so that it can be priced as its own class.

The activity-based exclusion

Some insurers, particularly in the London market, have drafted exclusions tied to specific agent activities: automated credit decisioning, automated trading, automated contract formation, automated medical triage. Each of these is listed and excluded. The wording is longer and harder to draft, but it is more predictable in claims, because the question during a loss is whether a specific excluded activity was in play.
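The claims predictability of the activity-based pattern comes from the fact that it reduces the coverage question to a membership test. A minimal sketch, with activity names echoing the list above but invented for illustration:

```python
# Illustrative only: an activity-based exclusion turns the claims question
# into a lookup against a fixed list. No real policy schedule is reproduced.

EXCLUDED_ACTIVITIES = {
    "automated_credit_decisioning",
    "automated_trading",
    "automated_contract_formation",
    "automated_medical_triage",
}

def is_excluded(loss_activity: str) -> bool:
    """Return True if the loss arose from a listed excluded activity."""
    return loss_activity in EXCLUDED_ACTIVITIES

print(is_excluded("automated_trading"))     # True
print(is_excluded("ransomware_response"))   # False
```

Contrast this with the broad carve-out, where the equivalent function would have to decide what counts as "arising from AI", a question with no clean membership test.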

How to read a renewal that includes an AI exclusion

The first step is to request the exact exclusion wording from the broker and to read it alongside the internal inventory of AI workloads in the business. If there is no internal inventory, the exclusion should be a prompt to build one. It is difficult to argue the scope of an AI exclusion with an underwriter if the buyer cannot describe what the business is doing with AI in the first place.

The second step is to ask the broker three questions in writing. What was covered under the prior wording that is not covered under the new wording? Which specific business activities are moved outside cover by the exclusion? What premium reduction, if any, is associated with the exclusion? The answers should be on file before the renewal is signed.

The third step is to map the excluded activities against the available replacements. A dedicated AI agent liability policy is one route. A specialist hallucination loss product is another. A first-party autonomous action cover is a third. None of these will be perfect in 2026 because the market is still forming, but the alternative of carrying the exposure uninsured is worse.

Why a dedicated AI agent liability policy is the corrective step

The new class of AI agent liability cover, drafted from the AIUC-1 reference standard and the Munich Re aiSure and Armilla wordings, does not try to stretch existing cyber or professional indemnity language. It treats AI agent activity as a separate class of risk with its own perils. Hallucination-driven financial loss. Data leakage by an agent. Intellectual property infringement in agent output. Regulatory penalty indemnity where permitted. Autonomous action liability when an agent executes a decision within its authorised scope.

Those categories are the subject of a companion article, What AI Agent Insurance Will Actually Cover. Read together with this piece, the picture is that the exclusions in cyber and professional indemnity are not an attempt to abandon the risk. They are the first step of a repositioning. Old wordings are being cleaned up so that new wordings can be written on their own terms.

What European operators should do now

The enforcement of the EU AI Act on 2 August 2026 and the application of the revised Product Liability Directive on 9 December 2026 are both feeding the urgency. An operator that is already running AI agents in production and is walking into a 2026 renewal should expect at least one exclusion clause to be added to at least one of its policies. The right response is to register for the pre-launch coverage queue, to document the agents in scope, and to begin a certification file in parallel so that by the time underwriters are writing cover in Q3, the organisation is ready to submit rather than ready to start.

The Agentic Liability Monitor tracks new exclusion wordings as they are filed with supervisors or circulated in the London market. The agentliability.eu sister property addresses the underlying liability regime. Together they form the reading stack for any European buyer preparing for the autumn.

Frequently Asked Questions

Why are insurers adding AI exclusions to cyber and E&O policies?

Because the risk profile of autonomous AI activity does not match the assumptions that underpin cyber and professional indemnity wordings. Insurers are clarifying the perimeter of existing policies so that AI-specific products can be priced on their own terms.

What does an AI exclusion clause usually say?

A typical exclusion carves out loss or liability arising directly or indirectly from the development, training, deployment, or operation of an artificial intelligence system. Some wordings reference the EU AI Act definitions, others refer to generative AI specifically, and some name particular classes of agent.

How should a buyer read a renewal that includes an AI exclusion?

Read the exclusion against the definition of artificial intelligence in the policy and against the inventory of AI workloads in the business. If the exclusion removes cover for losses the business is already carrying, a dedicated AI agent liability policy is the corrective step.

Does an AI exclusion remove cover for classical cyber events?

Usually not, but the drafting matters. A broad carve-out may inadvertently capture losses that are better described as classical cyber events where an agent is merely incidental. Buyers should insist on clarification in writing and, where possible, narrow the language.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council (the Artificial Intelligence Act), article 3 on definitions.
  2. Directive (EU) 2024/2853 on liability for defective products, repealing Directive 85/374/EEC.
  3. AIUC-1, the first published AI insurance standard, AI Underwriting Company, 2025.
  4. Munich Re aiSure product documentation, 2024 to 2025.
  5. Armilla AI policy form, version 2.
  6. Lloyd's Market Association model exclusion bulletins, 2024 to 2026.
  7. OECD definition of an AI system, revised 2023.