Product liability in high-security sectors: AI-driven risks under German and EU law
High-security products now run on software, data and networked systems. That raises the stakes. A bad update, a flawed model, or a misconfigured integration can affect public safety and essential services.
Two key frameworks set the tone: the EU Artificial Intelligence Act and the revised EU Product Liability Directive (EU PLD). Germany's draft Product Liability Act (ProdHaftG-E) is intended to implement the EU PLD, with details still in flux. Product teams should plan for stricter duties across design, documentation and supplier control.
What product teams need to know first
- Two regimes, different goals: The EU AI Act is a compliance framework for AI systems. The EU PLD is a strict liability regime for defective products. You can comply with the AI Act and still face liability under the EU PLD.
- Software is a "product": The EU PLD covers software and digital manufacturing files. Updates, retraining and later integrations can turn a safe product into a defective one over time.
- Supply chain exposure: Substantial modifications (e.g., adding an AI module) create liability. Joint and several liability reaches across integrators, OEMs and key suppliers.
- Evidence and presumptions favor claimants: Courts can order disclosure of technical information. If disclosure falls short, the defect is presumed. Further presumptions can ease proof of defect and causation in complex AI cases.
- Data damage counts: Liability includes destruction or corruption of non-professional data (e.g., corrupted employee or traveler records).
Defense and dual-use: where they sit
AI used exclusively for military, defense or national security is outside the EU AI Act. Dual-use and mixed-use systems are in scope. The EU PLD compensates natural persons for personal injury and certain property damage; governments and companies are not eligible claimants. Claims can still arise where civilians interact with the technology, including dual-use deployments.
EU AI Act: high-risk systems and what triggers them
High-risk AI covers systems used in critical infrastructure, essential services and other listed areas, as well as AI that is a safety component of products under certain EU harmonization laws requiring third-party conformity assessment.
Classification depends on intended purpose, not just technical design. High-risk systems must meet strict obligations across the lifecycle:
- Documented risk management and data governance
- Technical documentation, event logging and transparency
- Human oversight defined in procedures and tooling
- Security measures and update hygiene throughout the lifecycle
The Act applies extra-territorially if outputs are used in the EU. Obligations are phased in, with milestones over the next few years. See the regulation text for details: EU AI Act.
EU PLD: defectiveness now includes software, updates and cybersecurity
The revised EU PLD widens what counts as a "defect." Safety expectations now include compliance with product safety requirements, including security-by-design and security maintenance. Foreseeable use and foreseeable effects on other products (including via interconnection) are in scope.
Practical takeaways for product teams:
- Lifecycle liability: Updates, retraining and downstream integrations can introduce defects post-release. Treat each change like a safety-relevant event.
- Substantial modification matters: If you or your partners add components or features that materially change performance, you can become a liable "manufacturer."
- Disclosure risk: Courts can order logs, model details, training data descriptions and more. Gaps can trigger presumptions against you.
- Broader damages: Personal injury, property damage and certain data corruption are compensable. Immaterial losses may be compensable under national law (e.g., Germany).
Member States must implement the EU PLD by December 2026. Text here: Directive (EU) 2024/2853.
German angle: contract and tort exposure you should plan for
Contract liability (BGB Sections 280 et seq.)
- In B2B and B2G projects, detailed specs, integration duties and disclosure clauses set the bar. Once a breach is shown, fault is generally presumed unless you prove otherwise.
- "Fault" is about organizational control. AI cannot "be at fault," but your company can be, due to weak oversight, testing or compliance.
- You are responsible for employees and agents (including subcontractors and model providers). Indemnities help, but limits for intent and gross negligence apply. Product liability under the ProdHaftG cannot be waived.
Tort liability (BGB Sections 823, 831)
- General tort covers life, health, property and personality rights. Misclassification, autonomous asset mishaps or faulty updates can all trigger claims.
- Breaches of protective statutes (e.g., IT security rules, NIS 2, data protection, applicable AI Act duties) can support negligence findings and ease proof.
- For acts by auxiliaries, you can exonerate yourself only with evidence of proper selection, instruction and supervision.
Why lifecycle engineering is now a legal control
With AI-enabled systems, safety is not a "ship and forget" problem. Updates, retraining, dataset changes and third-party integrations all carry safety implications. That means you need traceability, change control and audit-ready documentation, not just at launch but continuously.
90-day playbook for product leaders
1) Map risk and scope
- List all AI features and safety-relevant software in each product. Capture intended purpose, users and contexts of use.
- Build or refine your SBOM and "model bill of materials" (pretrained models, datasets, libraries, SaaS APIs); see the inventory sketch after this list.
- Identify where your system could be high-risk under the AI Act and where the EU PLD could apply.
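To make the "model bill of materials" concrete, here is a minimal sketch of an inventory record in Python. The field names, the example component and the supplier name are illustrative assumptions, not a standardized schema; map them onto your existing SBOM tooling (SPDX, CycloneDX or similar) rather than treating this as a drop-in format.

```python
# Minimal sketch of a combined SBOM / model-bill-of-materials record.
# Field names and the example entry are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class ModelComponent:
    name: str               # e.g. a pretrained model, dataset or SaaS API
    kind: str               # "pretrained_model" | "dataset" | "library" | "saas_api"
    version: str            # pinned version or model snapshot ID
    supplier: str           # who you hold contractually responsible
    license: str            # third-party license reference
    intended_purpose: str   # the use the component was evaluated for
    safety_relevant: bool   # flags items subject to gated change control

@dataclass
class ProductInventory:
    product: str
    components: list[ModelComponent] = field(default_factory=list)

    def safety_relevant_components(self) -> list[ModelComponent]:
        """Components that trigger review before any update or retraining."""
        return [c for c in self.components if c.safety_relevant]

inventory = ProductInventory(
    product="access-control-gateway",
    components=[
        ModelComponent(
            name="face-matcher",
            kind="pretrained_model",
            version="2024-11-snapshot",
            supplier="vendor-x",   # hypothetical supplier name
            license="commercial-eval-v3",
            intended_purpose="identity verification at staffed checkpoints",
            safety_relevant=True,
        ),
    ],
)
print([c.name for c in inventory.safety_relevant_components()])
```

Keeping one inventory per product makes it straightforward to answer the first question a claimant, regulator or court will ask: which components shaped the behavior at issue, and who supplied them.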
2) Engineer for safety and oversight
- Define human oversight points, fallback modes and safe degradation paths. Include a kill switch for critical functions.
- Run security scanning, model evaluation, bias/shift checks and red-teaming on a schedule. Treat retraining like a new release.
- Harden update processes: code signing, staged rollouts, rollback plans and monitoring for regressions (see the rollout sketch after this list).
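As a concrete illustration of the update hardening above, the sketch below shows a staged rollout loop that rolls back on regressions. The stage percentages, the regression threshold and the callback names are assumptions for illustration; wire them to your actual deployment and monitoring stack.

```python
# Minimal sketch of a gated, staged rollout with automatic rollback.
from typing import Callable

ROLLOUT_STAGES = [0.01, 0.10, 0.50, 1.00]   # share of the fleet per stage (assumed)
MAX_REGRESSION = 0.02                        # tolerated drop vs. baseline metric (assumed)

def staged_rollout(
    deploy: Callable[[float], None],          # deploy new version to a fleet share
    measure_regression: Callable[[], float],  # observed drop vs. baseline (0.0 = none)
    rollback: Callable[[], None],             # restore the previous signed release
) -> bool:
    """Advance stage by stage; roll back and stop on any regression breach."""
    for share in ROLLOUT_STAGES:
        deploy(share)
        regression = measure_regression()
        if regression > MAX_REGRESSION:
            rollback()
            return False   # safety-relevant event: log it and open an investigation
    return True
```

The design choice that matters legally is the explicit gate: every stage produces a measurement and a recorded decision, which is exactly the kind of evidence a disclosure order will ask for.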
3) Document like you will need to disclose
- Maintain a technical file: risk analysis, data governance notes, model cards, evaluation results, test coverage and residual risks.
- Log events and decisions that affect safety outcomes. Set retention schedules that match legal timelines (a record sketch follows this list).
- Prepare confidentiality strategies (protect trade secrets while meeting possible court orders).
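One way to make safety-relevant logging audit-ready is to record each event with an explicit retention class, as in the sketch below. The field names and retention periods are assumptions; align them with your own legal retention schedule and logging infrastructure.

```python
# Minimal sketch of a safety-relevant event record with an explicit retention class.
import json
from datetime import datetime, timezone

RETENTION_YEARS = {"safety_event": 10, "model_change": 10, "routine": 1}  # assumed tiers

def record_event(kind: str, product: str, model_version: str, detail: str) -> str:
    """Build an audit-ready, machine-readable event record."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "kind": kind,                    # e.g. "model_change"
        "product": product,
        "model_version": model_version,  # ties the event to the technical file
        "detail": detail,
        "retention_years": RETENTION_YEARS.get(kind, 1),
    }
    return json.dumps(event)

print(record_event("model_change", "access-control-gateway",
                   "2025-03-retrain", "threshold recalibrated after drift check"))
```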
4) Contract for supply-chain control
- Push down AI Act and security obligations to suppliers. Add audit rights, change-notice windows and approval gates for safety-relevant updates.
- Require disclosure of training data sources and third-party model licenses. Prohibit covert model swaps.
- Allocate liability and indemnities carefully; reflect joint-and-several risk; align insurance requirements.
5) Incident and claims readiness
- Set up cross-functional incident response: engineering, security, legal and comms. Rehearse.
- Preserve evidence by default (logs, model versions, datasets, deployment configs); see the preservation sketch after this list. Create a playbook for regulator interactions.
- Plan for EU PLD disclosure requests. Line up experts early.
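A minimal sketch of "preserve evidence by default": snapshot the artefacts a disclosure order or PLD claim would target as soon as an incident is opened, and hash the copies so integrity can be shown later. The directory layout and artefact list are assumptions; adapt them to your stack.

```python
# Minimal sketch of incident evidence preservation with content hashes.
import hashlib
import json
import shutil
from pathlib import Path

ARTEFACTS = ["logs/", "models/current/", "configs/deployment.yaml"]  # assumed layout

def sha256_file(path: Path) -> str:
    """Content hash so you can later show the copy was not altered."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def preserve_evidence(incident_id: str, root: Path, vault: Path) -> Path:
    """Copy incident-relevant artefacts to a separate location and record hashes."""
    target = vault / incident_id
    target.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for rel in ARTEFACTS:
        src, dst = root / rel, target / rel
        if src.is_dir():
            shutil.copytree(src, dst, dirs_exist_ok=True)
            manifest[rel] = "directory"
        elif src.is_file():
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            manifest[rel] = sha256_file(dst)
    (target / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return target
```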
Defense contracts and liability limits
Government contracts may cap liability for mission-critical systems. Those caps do not remove statutory exposure under product safety and liability laws. Suppliers can still face third-party claims, especially in mixed-use or civilian-facing deployments.
Key dates and practical next steps
- EU PLD implementation due by December 2026. Germany's ProdHaftG-E is intended to align with it; final details may shift.
- AI Act obligations phase in over the next few years. Align your roadmap now so high-risk systems meet the bar before they go live.
- Start with design controls, supplier clauses and documentation. These three moves reduce most exposure.
Quick compliance starter checklist
- Define intended purpose and foreseeable use for each AI-enabled feature
- Establish human oversight, fallback and rollback procedures
- Create/update SBOM and model inventory; verify third-party licenses
- Institute gated change control for updates, retraining and integrations
- Stand up evaluation, security testing and red-teaming cadence
- Build an audit-ready technical file and evidence retention plan
- Update supplier contracts with pass-through duties and audit rights
- Review insurance (E&O, cyber) against new exposure types
If your team needs structured upskilling on AI assurance and product compliance, see practical courses here: Complete AI Training - courses by job.