AI agent claims triggers and coverage gaps: what activates a policy in 2026
Key Takeaways
- Five trigger categories define the emerging AI agent insurance class: hallucination loss, autonomous action liability, data leakage by agent operation, intellectual property infringement, and regulatory enforcement.
- Standard cyber and PI policies typically do not respond to AI agent incidents because the trigger model does not match: no breach occurred, no professional advised.
- Munich Re aiSure, Armilla, and AIUC-1 licensees each define triggers differently. Reading trigger definitions before submission is as important as reading the coverage grant.
- The autonomous-action-liability gap is the most common uninsured exposure for enterprise AI operators in 2026: it sits between cyber, PI, and tech E&O.
- Operators whose AI agent documentation does not define the scope of authorised action cannot obtain meaningful autonomous-action cover, because the insurer cannot determine the scope against which the trigger would operate.
Understanding what activates an AI agent insurance policy matters more than understanding what the policy covers in the abstract. Coverage is a contingent promise: it pays when a specific event occurs, measured against a specific wording, subject to specific exclusions. For AI agent liability, the trigger analysis is genuinely new ground for most risk managers. This article maps the five trigger categories across the major products in the European market, identifies where the gaps between existing cover and AI-specific cover sit, and explains how trigger design affects underwriting submissions.
What makes AI agent triggers different
Standard insurance responds to events within its trigger model. For cyber insurance, the trigger is almost universally an unauthorised access event: a threat actor breaches the perimeter, exfiltrates data, or deploys ransomware, and the policy responds. For professional indemnity, the trigger is professional negligence: a qualified person provides advice or service that falls below the required standard, causing a client loss. For technology errors and omissions cover, the trigger is a defect in software supplied to a customer, causing that customer quantifiable harm.
An autonomous AI agent fits none of these models cleanly. When an AI agent produces a false output used in an authorised workflow, no third party breached the perimeter. When it executes a transaction that causes a third-party loss, no professional gave negligent advice. When it generates output that infringes a copyright, the software did not fail in the sense that tech E&O contemplates: it produced exactly what it was designed to produce. The traditional trigger models attach to acts by humans or by external actors. AI agents are neither.
The Air Canada case provides an instructive boundary marker. In Moffatt v. Air Canada (2024 BCCRT 149), the Civil Resolution Tribunal of British Columbia held Air Canada liable for a misleading statement made by its AI chatbot about the carrier's bereavement fare policy. The question of which policy category the loss fell into was not the focus of the judgment, but the underlying facts illustrate the gap: the chatbot operated within Air Canada's own system, within its authorised scope, and produced an incorrect output that caused financial loss to a customer. A standard cyber policy would not respond: there was no breach. A PI policy would not respond: the chatbot is not a professional. A tech E&O policy for the software vendor would not respond: the software functioned as specified. The loss sat in a gap between the three.
The five AI-specific trigger categories that have emerged from the AIUC-1 standard, the Munich Re aiSure product schedules, Armilla's policy form version two, and the Lloyd's draft AI endorsement are designed to address that gap directly. They are: hallucination-driven output loss; autonomous action liability; data leakage caused by agent operation; intellectual property infringement in AI-generated output; and regulatory enforcement arising from the agent's operation.
Trigger category one: hallucination-driven output loss
The hallucination trigger covers the situation where an AI agent produces a factually incorrect, fabricated, or unsupported output, and a person or system acts on that output within an authorised workflow, causing quantifiable financial loss. The trigger has three components, all of which must be satisfied before the policy responds: the output must be verifiably wrong; it must have been used within a workflow the operator and insurer agreed was within scope; and the loss must be causally connected to the output, not to a subsequent decision made independently of it.
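To make the conjunctive structure concrete, here is a minimal sketch of how the three-part test composes. All identifiers are invented for illustration; no published policy form expresses its trigger this way.

```python
from dataclasses import dataclass


@dataclass
class OutputEvent:
    """Hypothetical record of an AI agent output and its downstream use."""
    verifiably_wrong: bool       # component 1: output shown to be false or fabricated
    workflow_class: str          # the workflow in which the output was actioned
    loss_caused_by_output: bool  # component 3: loss traces to the output itself


def hallucination_trigger(event: OutputEvent, declared_workflows: set[str]) -> bool:
    """All three components must hold; if any one fails, the policy does not respond."""
    return (
        event.verifiably_wrong
        and event.workflow_class in declared_workflows  # component 2: authorised workflow
        and event.loss_caused_by_output                 # not an independent downstream decision
    )
```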
The "authorised workflow" requirement carries significant practical weight. Under Munich Re's aiSure Schedule B, the insured is required to define, at the time of policy inception, which workflow classes the agent operates within and what the expected review cadence is for each class. Schedule B will not respond to losses arising in workflow classes that were not declared at inception. This is not a technicality: it reflects the underwriter's need to price the exposure against a defined operational perimeter. An agent used across undeclared use cases is, from the policy's perspective, uninsured for those use cases even if the premium has been paid.
The "human-in-loop review requirement" exclusion operates alongside the authorised workflow requirement. Where the policy schedule specifies that human review is required before an output is actioned, and a loss arises because that review was bypassed, Schedule B excludes the loss. The insurer is not underwriting an agent that operates without the governance controls it was told were in place.
Armilla's approach to the hallucination trigger takes a slightly different form. Its version two policy form defines the trigger as an "AI output error event" and treats the authorised workflow requirement as a condition precedent to coverage rather than an exclusion. The practical effect is similar, but the claims handling differs: an exclusion is generally something the insurer must establish applies, whereas a condition precedent is something the insured must demonstrate it has satisfied before coverage attaches at all. The distinction matters when preparing a submission, because the documentation required to demonstrate compliance differs between the two approaches.
Trigger category two: autonomous action liability
The autonomous action trigger is the most novel element of the emerging AI agent insurance class. It addresses the situation where an agent, operating within its authorised scope and without human approval, executes a transaction or decision that causes loss to a third party. The agent has not malfunctioned, no policy was violated, and no professional gave bad advice. The agent did what it was authorised to do and caused harm in the process.
Munich Re's aiSure Schedule D is the most detailed published treatment of this trigger in the current market. The Schedule D trigger activates when three conditions are met. First, the agent must have acted within its certified scope: the "autonomy envelope" document that the insured submits at inception, specifying the action classes the agent is authorised to take without human approval. Second, the action must have caused a quantifiable loss to the insured or to a third party to whom the insured owes a duty of care or a contractual obligation. Third, the loss must not have arisen from an action that fell outside the certified scope, was the result of deliberate misconfiguration by the operator, or was caused by a circumvented approval gate.
The concept of the autonomy envelope is central to how this trigger works in practice. The autonomy envelope is a written document, reviewed by the underwriter at inception, that defines the classes of action the agent is authorised to take autonomously, the financial limits within which those actions may be taken, the third parties with whom the agent may interact, and the circumstances under which the agent must escalate to human review. An agent that operates without a written autonomy envelope cannot obtain meaningful Schedule D cover, because the insurer cannot determine what scope it would be pricing against. This is not a documentation formality: it is the underwriting input that makes the trigger calculable.
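As an illustration of why the envelope is the input that makes the trigger calculable, the sketch below pairs a hypothetical envelope structure with the three-condition Schedule D test described above. Field and function names are invented; the published schedule wording governs in practice.

```python
from dataclasses import dataclass


@dataclass
class AutonomyEnvelope:
    """Illustrative structure of an autonomy envelope document."""
    action_classes: set[str]            # action classes permitted without human approval
    financial_limit_eur: float          # per-action financial limit
    permitted_counterparties: set[str]  # third parties the agent may interact with
    escalation_conditions: list[str]    # circumstances requiring escalation to human review


def schedule_d_trigger(envelope: AutonomyEnvelope, action_class: str,
                       amount_eur: float, counterparty: str,
                       loss_quantified: bool, excluded_cause: bool) -> bool:
    # Condition 1: the action fell within the certified scope.
    within_scope = (
        action_class in envelope.action_classes
        and amount_eur <= envelope.financial_limit_eur
        and counterparty in envelope.permitted_counterparties
    )
    # Condition 2: a quantifiable loss to the insured or a third party owed a duty.
    # Condition 3: no out-of-scope action, deliberate misconfiguration,
    # or circumvented approval gate behind the loss.
    return within_scope and loss_quantified and not excluded_cause
```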
In early 2026, Munich Re extended Schedule D's scope to cover procurement agents specifically, following a pattern of enterprise deployments where AI agents were issuing purchase orders within approved budgets but against incorrect suppliers or on incorrect terms. The extension addressed a gap that had become commercially significant: procurement automation is one of the highest-volume AI agent deployments in European enterprise, and the potential for within-scope, non-negligent errors causing third-party loss is correspondingly high.
For operators preparing a submission that includes autonomous action cover, the autonomy envelope document is the most important artefact. Before engaging a broker for the Q3 2026 window, the operator should have a written, current version of the autonomy envelope for each agent or agent class it intends to declare. Without it, the underwriter cannot assess the exposure and will either decline or apply significant loading to cover a scope it cannot measure.
Trigger categories three through five: data leakage, IP infringement, and regulatory enforcement
The remaining three trigger categories are, in different ways, adaptations of existing insurance concepts to the AI agent context. Each involves a distinction that is specific to AI agent operations and that practitioners from adjacent fields may initially underestimate.
Data leakage by agent operation. Standard cyber policies cover data leakage caused by an external attacker who gains unauthorised access to the insured's systems. They do not cover leakage caused by the insured's own AI agent. The distinction is not merely technical. An AI agent can leak personal data through prompt injection, where an adversarial instruction causes the agent to output information from its context window that should have remained internal. It can leak through training data exposure, where fine-tuned models reproduce sensitive content from their training set. It can leak through cross-tenant contamination in shared deployments where memory or context isolation fails. None of these are breaches in the cyber insurance sense: the perimeter was not crossed by an unauthorised party. The agent caused the leakage as a consequence of its own operation.
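A schematic way to see the boundary: the vectors above are agent-caused precisely because no unauthorised party crossed the perimeter. The sketch below encodes that distinction; all names are illustrative, not drawn from any policy form.

```python
# The three agent-caused leakage vectors named above, contrasted with the
# cyber trigger's perimeter-crossing model.
LEAKAGE_VECTORS = {
    "prompt_injection": "adversarial instruction elicits context-window contents",
    "training_data_exposure": "fine-tuned model reproduces sensitive training content",
    "cross_tenant_contamination": "memory or context isolation fails in a shared deployment",
}


def is_agent_caused(vector: str, perimeter_crossed_by_third_party: bool) -> bool:
    """Agent-caused leakage: a recognised agent vector, no unauthorised perimeter crossing."""
    return vector in LEAKAGE_VECTORS and not perimeter_crossed_by_third_party
```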
The GDPR's Article 82 creates liability for damage caused by a breach of the regulation, including damage caused by an AI agent's disclosure of personal data. That liability attaches to the controller and the processor, not to a third-party attacker. Armilla's version two policy form, and the AIUC-1 reference standard, treat agent-caused data leakage as a distinct trigger that sits outside the standard cyber trigger and responds to Article 82 claims and data subject compensation proceedings. The cover typically extends to regulatory investigations, notification costs, data subject claims, and forensic analysis of the agent's operation. It excludes leakage caused by a failure to apply a published security patch, and leakage that originates from a classical cyber event where the AI agent is the pathway rather than the cause.
Intellectual property infringement. The IP infringement trigger covers claims that AI-generated output infringes a copyright, trade mark, design right, or database right belonging to a third party. The primary exposure is for operators whose agents produce customer-facing content, code, technical documentation, or imagery. The trigger activates when the output can be shown to reproduce protected material to a degree sufficient to constitute infringement under the applicable national law. The policy does not require the infringement to be intentional: the agent's training data and generation patterns can produce infringing output without any deliberate act by the operator.
AIUC-1 addresses IP risk in section 6 of its published reference standard. The cover extends to defence costs, damages or settlement amounts, and mitigation expenses incurred to withdraw or modify the infringing output. Standard exclusions apply to deliberate reproduction of copyrighted material on explicit instruction from the operator, the use of models the operator knows to have been trained on unlicensed data, and outputs in sectors where the operator has an existing indemnification agreement with the model provider that addresses IP risk.
Regulatory enforcement. The regulatory enforcement trigger is the category most familiar in concept to European compliance professionals, though its application to AI agents involves material differences from the standard approach. The trigger covers administrative fines and penalties imposed under the AI Act (Regulation (EU) 2024/1689), the GDPR (Regulation (EU) 2016/679), and the revised Product Liability Directive (Directive (EU) 2024/2853), together with the legal costs of responding to supervisory authority proceedings. The AI Act creates fines of up to EUR 35 million or seven per cent of global annual turnover, whichever is higher, for the most serious violations. The GDPR provides for fines of up to EUR 20 million or four per cent of global annual turnover, whichever is higher.
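Because both regimes take the higher of the fixed amount and the turnover percentage as the ceiling, the caps scale with company size. A worked example:

```python
def max_fine_eur(global_turnover_eur: float) -> dict[str, float]:
    """Upper fine bounds: the higher of the fixed cap and the turnover percentage."""
    return {
        "AI Act (most serious violations)": max(35_000_000, 0.07 * global_turnover_eur),
        "GDPR (EUR 20m / 4% tier)": max(20_000_000, 0.04 * global_turnover_eur),
    }

# A firm with EUR 2 billion global turnover faces caps of EUR 140m (AI Act)
# and EUR 80m (GDPR); below EUR 500m turnover the fixed caps dominate in both regimes.
print(max_fine_eur(2_000_000_000))
```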
The admissibility of fine indemnity varies by member state. In several EU jurisdictions, insurers cannot indemnify administrative fines because doing so would undermine the deterrent effect of the penalty regime. Where fine indemnity is not admissible, the primary value of the regulatory enforcement trigger lies in defence cost coverage: the cost of legal representation, technical expert witnesses, regulatory response teams, and corrective action implementation. For most organisations facing a supervisory authority investigation, these costs will be substantial even where the proceeding ends without a fine.
The coverage gap anatomy
The three standard commercial policies that most European enterprises carry (cyber, professional indemnity, and technology errors and omissions) collectively leave the majority of the five AI trigger categories unaddressed. Understanding where the gaps sit is as important as understanding what AI agent cover provides.
Cyber insurance covers data leakage when caused by an external attacker. It does not cover agent-caused data leakage. It does not cover hallucination-driven output loss. It does not cover autonomous action liability. It does not cover IP infringement in generated output. It partially addresses regulatory enforcement, but typically only where the underlying event was a cyber breach rather than an AI Act or GDPR violation arising from the agent's operation. Net position: cyber covers one of the five AI trigger categories partially, and none of the others.
Professional indemnity covers errors and omissions by a professional in the performance of their services. An AI agent is not a professional. Where a human professional uses an AI agent as part of their advisory workflow and the agent's error contributes to a negligent output, there is an argument that PI could respond, but it attaches to the professional's conduct, not to the agent's operation as a standalone matter. Net position: PI may cover elements of hallucination-driven loss where a professional's workflow is involved, but does not address the other four categories.
Technology errors and omissions covers defects in software supplied to a customer. Where an AI agent is supplied to a customer and malfunctions in a way that causes the customer loss, tech E&O may respond. But the coverage requires the software to have failed: an agent that produces a hallucination is functioning as intended from a technical standpoint. An agent that takes an autonomous action within scope is not defective. An agent that generates infringing content is generating content in the way it was designed to. Net position: tech E&O partially addresses hallucination-driven loss where the product is sold to a customer, but does not address autonomous action liability, agent-caused data leakage, IP infringement from the operator's perspective, or regulatory enforcement arising from operator use.
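Consolidating the three net positions into a single view (the "partial" labels carry the qualifications set out in the preceding paragraphs):

```python
# Which standard policy responds to which AI trigger category, per the
# net positions above. Labels are shorthand, not policy language.
GAP_MATRIX = {
    "hallucination loss":        {"cyber": "no",      "pi": "partial", "tech_eo": "partial"},
    "autonomous action":         {"cyber": "no",      "pi": "no",      "tech_eo": "no"},
    "agent-caused data leakage": {"cyber": "no",      "pi": "no",      "tech_eo": "no"},
    "ip infringement":           {"cyber": "no",      "pi": "no",      "tech_eo": "no"},
    "regulatory enforcement":    {"cyber": "partial", "pi": "no",      "tech_eo": "no"},
}

uninsured = [trigger for trigger, cover in GAP_MATRIX.items()
             if all(response == "no" for response in cover.values())]
print(uninsured)  # the fully uncovered categories, autonomous action among them
```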
The most significant gap is the autonomous-action-liability gap. This is the exposure where an AI agent, operating within its authorised scope, takes an action that causes loss to a third party, and none of the three standard policies respond. No breach occurred, so cyber does not apply. No professional advised, so PI does not apply. The software did not fail, so tech E&O does not apply. The loss is real, the causal chain is clear, and the operator is uninsured. This gap is specifically and intentionally addressed by the autonomous action trigger in the emerging AI agent liability class. It is the strongest argument for adding AI agent cover to an existing commercial programme, not as a replacement for any of the three standard policies, but as a fourth layer that addresses the exposures they collectively miss.
The hallucination gap is a close second. Standard tech E&O may partially address it for software vendors, but most enterprise operators are not selling AI agents: they are using them. For an operator using an AI agent in its own business processes, hallucination-driven loss sits in the gap between cyber, PI, and tech E&O in the same way autonomous action liability does. The recommendation for any organisation operating AI agents in production is to treat AI agent liability cover as an additive policy layer, not a replacement. The Q3 2026 market entry is positioned exactly this way by the insurers involved.
What this means for underwriting submissions
Trigger analysis changes the content of an underwriting submission. The insurer does not simply want to know what the agent does: it wants to know, for each trigger category, whether and how the trigger could apply to the specific deployment, and what evidence exists that the governance conditions for coverage are met.
For the hallucination trigger, the submission should document: the workflow classes in which each agent operates, the review protocols in place for each class, the mechanism by which outputs are accepted into authorised workflows, and the testing or red-teaming evidence that supports the accuracy claims made for the model. The underwriter is pricing against the frequency and severity of output errors in defined workflow contexts. Without workflow-level documentation, the exposure cannot be priced and the submission will receive generic or conservative terms.
For the autonomous action trigger, the submission must include the autonomy envelope: a written, current specification of authorised action classes, financial limits, counterparty scope, and escalation conditions. The autonomy envelope is the primary underwriting input for Schedule D and its equivalents. It should be prepared before engaging a broker, not during the submission process. Operators who arrive at the underwriting review without a written autonomy envelope will be asked to produce one before terms are quoted, adding weeks to the process at a time when the market window for Q3 2026 is already constrained.
For the data leakage trigger, the submission should distinguish between the agent's data handling architecture and the organisation's wider cyber security posture. The underwriter wants to understand the specific mechanisms by which agent-caused leakage could occur, the isolation controls in place for multi-tenant deployments, and the incident response procedures specific to AI agent data events. A generic cyber security summary does not address the AI-specific leakage vectors that the policy is designed to cover.
For the IP infringement trigger, the submission should identify the content generation use cases by category, note the model provenance and any vendor indemnification arrangements in place, and describe the review process, if any, applied to generated content before customer delivery. Underwriters in this category are pricing against volume of generated content and the degree of human review applied before it enters customer-facing channels.
For the regulatory enforcement trigger, the submission should demonstrate AI Act compliance status: which AI Act risk categories apply to the agent's use case, what technical documentation has been prepared, and what conformity assessment has been conducted or is planned. For the GDPR component, the data protection impact assessments relevant to AI agent operations should be available. The underwriter is not expecting full regulatory compliance on day one of the Q3 2026 window, but it is pricing against the trajectory of the organisation's compliance programme and the likelihood of a supervisory action given the current state of documentation.
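The five documentation sets can be consolidated into a single pre-submission checklist. The sketch below is an illustrative structure, not a broker's or insurer's required format; item names paraphrase the artefacts described above.

```python
# Illustrative consolidation of the submission artefacts described above,
# keyed by trigger category.
SUBMISSION_CHECKLIST = {
    "hallucination": [
        "workflow classes per agent", "review protocols per class",
        "output acceptance mechanism", "testing and red-teaming evidence",
    ],
    "autonomous_action": [
        "written, current autonomy envelope per agent or agent class",
    ],
    "data_leakage": [
        "agent data handling architecture", "multi-tenant isolation controls",
        "AI-specific incident response procedures",
    ],
    "ip_infringement": [
        "content generation use cases by category", "model provenance",
        "vendor indemnification arrangements", "pre-delivery review process",
    ],
    "regulatory_enforcement": [
        "AI Act risk categorisation", "technical documentation status",
        "conformity assessment status", "relevant DPIAs",
    ],
}


def submission_gaps(prepared: set[str]) -> dict[str, list[str]]:
    """Artefacts still missing per trigger category, given what is already prepared."""
    return {trigger: [a for a in artefacts if a not in prepared]
            for trigger, artefacts in SUBMISSION_CHECKLIST.items()}
```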
The overriding principle is that scope precision matters more than coverage breadth in the first year of AI agent cover. An operator that can define exactly what each agent does, within exactly what limits, with exactly what governance controls, will receive more useful and more competitively priced terms than an operator that requests broad cover across undefined agent deployments. The market is new. Underwriters are calibrating their models against the submissions they receive. Precise submissions produce precise policies. Vague submissions produce wide exclusions and significant premium loading to compensate for the uncertainty the underwriter is carrying on the operator's behalf.
The coverage framework article on this site sets out the five coverage categories in more detail. The coverage section contains the platform's current view of the market window. For submission preparation against the autonomy envelope dimension, the Agent Certified autonomy envelope certification article provides the technical specification that aligns with the underwriting requirements described here. For the underlying EU liability regime, the Product Liability Directive exposure analysis at agentliability.eu sets out the Article 10 rebuttable presumption mechanism that translates into civil claim triggers for third-party loss. The underwriting submission preparation guide on this platform covers the documentation requirements in full.
Frequently Asked Questions
What is a claims trigger in AI agent insurance?
A claims trigger is the event that causes a policy to respond. In standard cyber insurance, the trigger is typically a security event: unauthorised access, data exfiltration, or ransomware. In AI agent insurance, the trigger categories are broader and distinct: a hallucination producing a harmful output used in an authorised workflow; an autonomous action taken within scope that causes a third-party loss; a data leakage event caused by the agent's own operation rather than an external attack; an intellectual property infringement in AI-generated output; and a regulatory enforcement action arising from an AI Act or GDPR breach related to the agent's operation.
Why do standard cyber and PI policies often not respond to AI agent incidents?
Standard cyber policies were written around the unauthorised-access model: a third party breaches security, causes damage, the policy responds. An AI agent error is not an unauthorised access. No one breached the system. The agent did what it was allowed to do and caused harm in the process. Standard professional indemnity policies require a professional providing advice to be at fault. An autonomous agent is not a professional. The result is a coverage gap between cyber, PI, and tech E&O that affects most organisations running AI agents on standard commercial programmes.
What does Munich Re aiSure actually cover as a first trigger?
Munich Re's aiSure product addresses hallucination-driven financial loss as a primary trigger under Schedule B and autonomous action liability under Schedule D. The trigger for Schedule B is a verifiable output error by the AI system, used within an authorised workflow, causing quantifiable financial loss to the insured or a third party to whom the insured owes a duty. The trigger for Schedule D is a transaction or decision executed autonomously within the certified scope of the agent, causing quantifiable loss, where no human approval was obtained because the policy schedule specified that human approval was not required for that action class.
What is the most common coverage gap for enterprises in 2026?
The most common gap is the autonomous-action-liability gap: the situation where an AI agent, operating within its authorised scope, takes an action that causes loss to a third party, and neither the cyber policy (no breach occurred), the PI policy (no professional advised), nor the tech E&O policy (the software functioned as intended) responds. This gap is specifically addressed by the emerging AI agent liability class. Enterprises that have not yet added this cover are uninsured for a growing category of operational risk.
References
- AIUC-1 reference standard, AI Underwriting Company, 2025, trigger definitions and scope requirements.
- Munich Re aiSure product documentation, Schedules B and D, hallucination loss and autonomous action liability triggers, 2024 to 2025.
- Armilla, AI policy form, version 2, trigger language and authorised workflow definitions.
- Lloyd's Market Association, LMA5566, artificial intelligence exclusion clause, 2023.
- Moffatt v. Air Canada, 2024 BCCRT 149, Civil Resolution Tribunal of British Columbia, 21 February 2024.
- Regulation (EU) 2024/1689, Article 26(2) and (5), deployer obligations and incident reporting.
- Regulation (EU) 2016/679, Article 82, liability for GDPR damage as a trigger event.
- Directive (EU) 2024/2853, Article 10, rebuttable presumption of defect as a basis for third-party civil claims.