Coverage Frameworks · 4 May 2026

PI, cyber, or dedicated AI liability: a coverage decision guide for European enterprises.

Most organisations deploying AI agents in Europe have professional indemnity and cyber insurance. Most assume that coverage. The assumption is often wrong. This guide maps the coverage triggers, the gaps, and the decision framework that legal and compliance teams need to determine which policy structure is appropriate for each AI deployment.

Key takeaways

  • Professional indemnity responds to losses from professional advice. Cyber responds to losses from security events and data breaches. Neither responds reliably to the core AI agent risk: autonomous consequential actions that cause financial harm without involving advice or a breach.
  • The gap between PI and cyber is the primary exposure for agentic AI. Dedicated AI liability policies from Armilla, Counterpart, and Munich Re aiSure are designed for exactly this gap.
  • AI exclusions are being added to existing PI and cyber policies at renewal. Organisations that have not reviewed their policy endorsements since 2024 may have lost coverage they believed they had.
  • The decision framework has three questions: does the agent output function as professional advice, does the agent loss involve a security event, and does the agent take consequential autonomous actions? The answers determine which policy should respond.
  • Coordination between three policies (PI, cyber, and dedicated AI) requires a written coverage coordination structure that specifies which policy responds first and how defence costs are shared.

Why the existing coverage architecture is insufficient

Professional indemnity and cyber insurance were both developed for a world where losses arose from identifiable human decisions: a professional giving negligent advice, or a criminal compromising a computer system. The policies reflect the causation models of their era. Neither was designed for a loss that arises from an AI agent autonomously making a decision, taking an action, and causing harm to a third party with no human decision in the chain.

The gap is structural, not incidental. PI policies define covered professional services. Cyber policies define covered loss events. An AI agent operating outside those definitions, taking actions a human would not characterise as professional services and causing losses not connected to a security event, produces a loss that neither policy was designed to catch. The Lloyd's Market Association recognised this gap in its 2024 AI liability guidance, which prompted the LMA5566 model endorsement framework and the development of standalone AI coverage forms.

The situation is compounded by AI exclusions. Since 2023, major insurers have been adding AI-specific exclusions to both PI and cyber policies at renewal. These exclusions range from excluding losses arising from AI-generated outputs to excluding systems operating autonomously without human oversight. Organisations that renewed their policies in 2024 or 2025 without specifically reviewing the AI-related endorsements may find that coverage they believed they had has been removed.

Professional indemnity: when it responds and when it does not

A professional indemnity policy responds when a professional, defined by the policy as a person or firm providing specific professional services, makes an error or omission in the course of those services that causes a third party to suffer a financial loss. The coverage is tied to the character of the act: advice, analysis, design, or another professional service.

Where an AI agent is used as a tool in delivering professional services, and the AI's output forms part of the professional's work product that a client relies on, the PI coverage question turns on whether the output is treated as the professional's work. A law firm whose AI agent drafts a contract that contains a negligent error, which the supervising solicitor reviews and signs off on, may find PI coverage responds because the work product is attributable to the firm's professional services. A financial adviser whose AI agent generates investment recommendations that are reviewed by an authorised person before delivery is in similar territory.

The PI coverage position becomes difficult when the AI agent operates autonomously, without a human review step before the output reaches the client. In that scenario, the insurer may argue that the loss did not arise from professional services but from an automated system operating outside the scope of the professional services definition. Multiple PI coverage disputes following AI-related losses in the US and UK market in 2024 and 2025 have turned on precisely this question of whether autonomous AI output constitutes professional services.

The additional complication is that PI policies are renewing with AI exclusion clauses that exclude losses arising from systems that operate without adequate human oversight. For the EU AI Act obligations on human oversight under Article 26(2), which require deployers to assign competent oversight persons with authority to intervene, see the Article 14 human oversight analysis. An operator that meets the EU AI Act's human oversight standard is better positioned to resist an insurer's argument that the AI was operating without adequate oversight.

Cyber insurance: when it responds and when it does not

A cyber policy responds to losses arising from specific triggering events, which typically include: unauthorised access to or use of computer systems, data breach events, ransomware and extortion, network interruption, and the resulting first-party and third-party costs. The policy's design assumes a security event as the cause of loss.

Where an AI agent causes a data breach, including by processing personal data in an unauthorised way or by an agent being compromised and exfiltrating data, the cyber policy is the correct primary coverage and is likely to respond. Where an AI agent causes financial loss through erroneous actions that do not involve a security event, the cyber policy's triggers are not met and coverage will be denied or disputed.

The intersection between AI agent liability and cyber coverage is complicated by the prompt injection attack risk. A prompt injection attack involves a malicious actor embedding instructions in content that an AI agent processes, causing the agent to take actions it would not otherwise take. If a prompt injection attack causes an AI agent to transfer funds, modify records, or exfiltrate data, the resulting loss may be covered by the cyber policy's computer fraud or social engineering coverage, depending on the specific policy wording. But the analysis requires careful review of whether the agent's action constitutes a "computer crime" under the policy definition.

European cyber insurers are increasingly distinguishing between AI-related losses that originate from a cyber incident and those that originate from an AI error. The former is typically within scope. The latter is increasingly excluded or at best ambiguous. Organisations relying on cyber coverage for AI agent losses should obtain a specific written position from their insurer on both categories.

The gap: what neither policy covers

The primary uninsured exposure for an organisation running AI agents in 2026 is the autonomous consequential action loss: a loss caused by an agent taking an action within its authorised scope that nevertheless produces harmful results, without any element of professional advice or security breach. Examples include an AI agent that issues a refund exceeding its authorised limit due to a reasoning error; an agent that sends a communication to a customer that constitutes a binding contractual commitment the company did not intend; an agent that makes an employment-related recommendation that a manager acts on and that later gives rise to a discrimination claim.

These losses share a structure. The agent operated within its scope definition. No human made the specific decision that caused the loss. The loss is not a data breach. It is not professional advice. It is the consequence of an autonomous decision by a system that the deployer is responsible for.

The regulatory dimension makes the gap more serious. Under the revised Product Liability Directive (Directive (EU) 2024/2853), applicable from 9 December 2026, AI software causing harm may give rise to strict product liability claims without proof of fault. Under the EU AI Act, market surveillance authority investigations following an incident involving a high-risk AI system may produce regulatory costs even where the deployer was not negligent. Neither PI nor cyber is designed to cover these regulatory costs or strict liability exposures.

For a detailed analysis of how the Product Liability Directive and the EU AI Act create overlapping exposure, see the double exposure analysis on the sister site.

Dedicated AI liability: what it covers

The dedicated AI liability market is developing coverage categories specifically designed for the gap. Armilla, operating as a Lloyd's coverholder with capacity from Lloyd's syndicates, offers coverage that includes autonomous action liability, hallucination liability, and performance shortfall coverage. Munich Re aiSure uses a parametric structure with defined performance benchmarks and payout triggers. Counterpart's affirmative AI coverage covers first-party and third-party losses from AI errors including autonomous agent actions.

The categories that dedicated policies commonly address are: third-party financial loss arising from autonomous agent actions; costs of defending regulatory investigations under the EU AI Act, the revised Product Liability Directive, or sector regulations; costs of notifying affected persons following an AI-related harm event; and reputational recovery costs following an AI incident that attracts public attention. Some policies include IP infringement coverage for AI-generated outputs that infringe third-party copyright, addressing a category that PI and cyber typically exclude.

The AIUC-1 standard, published by the AI Underwriting Company, provides the most developed technical specification for what AI insurers evaluate in a submission. It covers twelve risk categories including confabulation, data privacy, human-AI configuration, and value chain traceability, and it maps these onto underwriting triggers. European insurers writing AI coverage have adopted elements of the AIUC-1 framework even where they do not formally require AIUC assessment as a condition of coverage.

The coverage decision framework

For a legal or compliance team mapping coverage against each AI agent deployment, the decision framework has four questions.

First: does the agent's output function as professional advice that a client relies on? If yes, the primary coverage structure is PI, supplemented with an explicit AI endorsement confirming that the AI output within the firm's supervised professional workflow is covered. If the agent produces autonomous outputs without human review before client delivery, the PI position is uncertain and should be confirmed with the insurer.

Second: does the agent's primary risk involve handling sensitive data, including personal data regulated under GDPR, financial data, or health data? If yes, the cyber policy should be evaluated for both data breach coverage and for whether the AI-specific exclusions in the current endorsements preserve or remove coverage for AI-driven data losses.

Third: does the agent take consequential autonomous actions that could cause financial harm to the deployer or to third parties independently of any professional advice or security event? If yes, dedicated AI liability coverage is necessary. Neither PI nor cyber is reliably positioned to respond to this risk.

Fourth: is the agent deployed in a regulated sector (financial services, healthcare, employment decisions, essential services) where regulatory investigation costs and penalty defence are a realistic exposure? If yes, confirm that at least one of the three policies explicitly covers regulatory defence costs under EU AI Act enforcement, EIOPA supervisory action, or sector regulator investigation. This coverage is not standard in PI or cyber and may need to be obtained in the dedicated AI policy or as a specific endorsement.

For the documentation that enables an insurer to evaluate each of these questions, see preparing an AI agent for underwriting review in Europe. For the certification pathway that strengthens the submission for all three coverage types, see agentcertified.eu.

Policy coordination

Operating three policies in parallel for AI agent risk creates coordination questions that should be resolved before an incident, not during one. The critical questions are: which policy responds first, how are defence costs allocated between policies, and how are losses that trigger multiple policies to be adjusted.

A written coverage coordination agreement is the appropriate mechanism. The agreement specifies a priority order for each category of AI agent loss, assigns primary and excess status to each policy for each loss category, and establishes a claims coordination process that prevents the deployer from being caught between competing insurers at the moment they need coverage most. The agreement should be reviewed annually as policy terms change and as the AI agent deployments evolve.

The market for AI agent liability coverage in Europe is developing quickly. The Agentic Liability Monitor on this site tracks active carrier positions, policy term developments, and regulatory movements that affect coverage. Register at the Monitor page for regular briefings as the market evolves. For pre-launch registration for coverage when the Agent Insured platform opens, see the waitlist.

Frequently Asked Questions

Does professional indemnity insurance cover AI agent errors?

Most PI policies written before 2024 do not explicitly address AI agent errors. Coverage depends on whether the AI output is characterised as professional advice. Autonomous AI outputs without human review steps are the most difficult to bring within PI coverage. Obtain a written coverage opinion from your broker for each deployment.

Are AI agent losses covered by cyber insurance?

Cyber policies respond to losses from security events and data breaches. AI agent errors causing financial loss without a security event typically fall outside the coverage triggers. AI exclusions are being added at renewal. Review current endorsements before assuming coverage.

What does dedicated AI liability insurance cover that PI and cyber do not?

Dedicated AI policies cover the gap: autonomous agent actions causing financial harm, hallucination liability, regulatory defence costs under EU AI Act enforcement, and performance shortfall coverage. Products from Armilla, Counterpart, and Munich Re aiSure address these categories specifically.

Should an organisation buy dedicated AI coverage if it already has PI and cyber?

For organisations running consequential autonomous AI agents, yes. The gap between PI and cyber is the primary exposure for agentic AI. The question is how to coordinate the three policies, which requires a written coverage coordination agreement reviewed annually.

References

  1. Lloyd's Market Association. LMA5566 and related AI model endorsements. 2024.
  2. AI Underwriting Company. AIUC-1 standard for AI liability underwriting. 2025.
  3. Munich Re aiSure product documentation and parametric coverage structure. 2025 edition.
  4. Armilla. AI policy form, Lloyd's coverholder framework. 2025.
  5. Counterpart. Affirmative AI coverage form. 2024.
  6. Directive (EU) 2024/2853 on liability for defective products (revised Product Liability Directive), applicable from 9 December 2026.
  7. Regulation (EU) 2024/1689 (EU AI Act), Articles 14, 26, 73, 99. OJ L, 12 July 2024.
  8. European Insurance and Occupational Pensions Authority (EIOPA). Opinion on AI governance in the insurance and occupational pensions sectors. August 2025.
  9. Regulation (EU) 2016/679 (GDPR), Articles 82 and 83.
  10. British Insurance Brokers' Association (BIBA). AI guidance for member brokers. 2024.