Alongside significant opportunities, the future use of AI will also cause harm. While the EU has made significant progress through the AI Act and the updated Product Liability Directive in addressing AI safety, a critical gap remains: the lack of a coherent, harmonised liability regime for damage caused by AI. With the withdrawal of the AI Liability Directive (AILD) in early 2025, Europe's AI governance framework is left incomplete.
Despite regulatory efforts, individuals and companies across Europe may face harm from AI systems that are not classified as “high-risk” under the AI Act but are nevertheless capable of causing serious economic or reputational damage. The result is legal uncertainty, fragmentation, and inequity—particularly as national tort rules diverge widely in their approach to AI-related harm.
This Task Force aims to develop a balanced, EU-level approach to AI liability that minimises regulatory burden while maximising legal clarity and fairness. By prioritising harmonisation and focusing on cross-sectoral, procedural solutions, the initiative seeks to improve access to justice, reduce legal fragmentation, and ensure equal treatment of victims across Member States. The outcome will be a set of practical recommendations for EU-wide minimum standards that complement national regimes, uphold subsidiarity, and strengthen the overall coherence of Europe’s AI governance framework.
TOPICS
The Task Force will meet four times between June and October 2025 to develop a comprehensive and forward-looking set of recommendations on AI liability. Each session will be chaired and moderated by CEPS experts and focus on a key aspect of the emerging liability landscape.
- Liability in the Age of AI (Session 1: Setting the Stage)
The first session will address the foundational questions of liability in the AI era. Participants will examine the current EU legal framework and identify core gaps—particularly relating to forms of harm such as algorithmic discrimination, economic loss, or violations of personality rights.
- Designing Liability Standards and Judicial Guidance (Session 2)
The second session will explore how liability standards—fault-based, strict, or hybrid—could be tailored to different categories of AI systems. The discussion will also focus on the evolving role of courts in interpreting AI-related disputes, and whether non-binding EU-level guidance could help harmonise case law across Member States.
- Causation, Presumptions, and Access to Evidence (Session 3)
The third session will focus on procedural challenges: the difficulty victims face in proving causation and obtaining technical evidence from AI providers. With AI systems often functioning as "black boxes", the discussion will examine possible responses, such as rebuttable presumptions of causation and rules on access to evidence.
- Liability Across the AI Value Chain (Session 4)
The final session will address the complex challenge of assigning liability across multiple actors involved in AI development and deployment—from original developers and deployers to fine-tuners and end-users.
TASK FORCE OUTPUT
The CEPS team will prepare several policy briefs and a Final Report based on the discussions and independent research. The final documents will aim to inform both policymakers and stakeholders, contributing to the broader debate on responsible AI governance in the European Union.
TASK FORCE LEAD
- Chair: Andrea Renda, Director of Research, CEPS
- Rapporteur: Artur Bogucki, Associate Researcher, CEPS
HOW TO BECOME A MEMBER?
For more information about the objectives and functioning of the Task Force, please refer to the brochure below. To express your interest, kindly complete the registration form and email it to [email protected].