25 Feb 2025

An AI Liability Regulation would complete the EU’s AI strategy


In its 2025 work programme, the European Commission effectively scrapped the AI Liability Directive (AILD) – a move that threatens to unravel trust in the EU’s burgeoning AI policy landscape. This abrupt decision strips away what would have been critical protections for victims of AI-related harm and denies businesses the legal certainty they need to innovate.

While the Commission touts a ‘Bolder, Simpler, Faster Union’, abandoning the AILD risks undercutting Europe’s competitive edge and leaving a gaping hole in its AI legal framework. In doing so, the Commission is undermining its own goal of fostering an ecosystem of trust and promoting AI made in the EU.

Why AI is challenging existing liability rules

Today’s advanced AI technologies differ significantly from traditional software systems. Their complexity, opacity and potential for unpredictable outcomes make it prohibitively difficult for companies and victims alike to pinpoint where things went wrong, let alone assign responsibility. If an AI-powered hiring tool discriminates against job applicants, an automated medical diagnosis system makes a fatal error or a generative AI tool defames an individual, who should be held accountable? Neither the recently updated Product Liability Directive (PLD) nor existing national liability regimes can satisfactorily resolve these issues.

The Commission’s 2022 AILD proposal was an attempt to provide a robust, harmonised framework but fell short of the mark. It took an overly cautious approach by merely complementing national regimes and overlooked critical new challenges, such as the rise of generative AI and the complexities inherent in AI value chains. The EU must uphold its role as a trustworthy AI leader and close these gaps by completing its legal framework for AI.

The myth that liability kills innovation

The most common pushback against even considering AI-related legislation is the claim that such rules would stifle innovation. This argument isn’t just misleading – it’s the exact opposite of the truth. Right now, an AI company must navigate 27 different national liability regimes, on top of 27 different procedural regimes. Each one offers victims various ways to claim damages (including immaterial harms) after alleged violations of legal obligations under the AI Act or other EU laws, which national authorities often interpret differently.

This patchwork creates a high level of legal uncertainty, disproportionately harming small and medium-sized enterprises (SMEs) and startups that lack the legal resources to handle the high costs stemming from these diverse liability regimes. In reality, only a few powerful US and Chinese tech giants can manage such complexity.

By contrast, a codified and harmonised AI liability framework would significantly reduce legal costs for all other market players, streamline and simplify compliance, and boost investment in European AI safety. Companies would finally know where they stand, and consumers would gain greater trust in the technology – paving the way for the broader adoption of European AI solutions. In fact, a 2020 study by the European Parliament Research Service estimated that introducing new AI liability rules could add up to EUR 498.3 billion in value to the EU economy, once broader impacts – like the reduction of accidents – are taken into account.

What needs to be done

To trigger these positive changes, the EU’s AI Liability regime must consist of four distinct features.

First, it needs to be codified in the form of a Regulation, following the general EU trend of shifting from Directives to Regulations, as has happened in other policy areas such as product safety (e.g. the Machinery Regulation) or market regulation (e.g. the Digital Services Act). Only an AI Liability Regulation (AILR) will allow for full harmonisation across Member States, eliminating the legal fragmentation that would otherwise create a litigation minefield for AI firms and uncertain compensation rights for claimants.

Second, it’s important that EU institutions prevent a situation where victims of AI-related harm must engage in costly, years-long litigation just to prove that an AI system caused the damage. Provisions reversing the burden of proof must be duly introduced and made coherent with the AI Act. The new AILR should therefore ensure that the victim faces a lighter evidentiary burden the higher the risk category of the harmful AI system. Prohibited AI practices (covered in Article 5 of the AI Act) should always trigger strict liability.

Third, we need to establish a joint liability regime. It shouldn’t only be EU downstream providers or AI system deployers that face litigation if something goes wrong. As much AI-related harm originates at the model development stage, where major tech firms train foundation models that can later be fine-tuned for different purposes, the framework must ensure that liability can be traced up the AI value chain – including to model developers and component suppliers. This would also prevent developers from abusing their market dominance by forcing customers to waive their right to recourse.

Fourth, we need to extend the law’s scope by identifying areas that aren’t covered by the updated PLD or by national liability regimes, to establish a genuine ecosystem of trust and make sure that any AI-related harm will be compensated. This would range from damage to professionally used property and infringements of IP rights, through large-scale disinformation, to violations of personality rights and discrimination against individuals. Victims – whether companies or citizens – should not be the ones suffering from legal uncertainty or fragmentation caused by the EU’s digital policy instruments.

The stakes are high for Europe

The EU has positioned itself as the global leader in regulating AI, but actual leadership means getting the details right. If the co-legislators fail to address liability rules for AI, the EU risks a future where victims of AI-related harm face daunting legal and technical barriers to seeking justice.

At the same time, downstream EU companies will be forced to navigate a fragmented and unpredictable legal landscape, allowing the dominant Big Tech firms – with their vast legal resources – to gain an even greater edge. Consequently, AI innovation could shift away from the EU toward jurisdictions with fewer regulatory hurdles and more transparent liability frameworks, threatening Europe’s competitiveness in the global AI market.

While it’s true that the EU’s competitiveness has suffered from incoherent and overlapping regulations in the past, rules on AI liability are not about saddling AI companies with more red tape. Rather, they’re about holding accountable those who develop, sell or deploy harmful technology. EU legislators must rise to the challenge and strike the right balance, creating a trustworthy AI ecosystem that also fosters innovation – ultimately protecting both consumers and EU companies.

This CEPS Expert Commentary is part of a special series being published prior to the CEPS Ideas Lab on 3-4 March 2025 to showcase some of the most innovative ideas we’ll be rigorously debating with our participants. More info can be found on the official Ideas Lab 2025 website.