I have been – and remain – a great fan of the EU’s AI Act. Throughout the years, I have been impressed by the way EU institutions have handled a thorny subject, gradually incorporating inputs from various stakeholders and experts. Indeed, they have behaved like a ‘learning system’, just like many advanced AI models. They shied away both from overly permissive approaches, which would end up ignoring the risks of AI, and from over-regulatory stances that could have banned applications that most likely (if properly handled) would bring significant benefits to society.
Unlike other policymakers, they ignored the sirens of ‘effective altruism’ and ‘long-termism’ and placed the protection of fundamental rights and the use of AI ‘for good’ at the forefront of their mission. Kudos. Chapeau.
This took time and courage. Since the European Commission presented its first proposal in April 2021, the AI Act has evolved from being rather minimalistic to a more ambitious set of rules. The list of pre-selected prohibited and high-risk applications started to expand, awareness of the complexity of the AI value chain prompted new solutions and provisions, and the prospect of a powerful ‘AI Office’ emerged.
Sure, there have been disagreements among the co-legislators, but this is normal in democratic processes, and even more so within the EU’s complex decision-making process.
In a rapidly changing environment, the FOMO is real
However, when legislators must face a subject that evolves at a breath-taking pace, roadblocks and rabbit holes are a certainty. One of them is the fear of missing out (or ‘FOMO’). FOMO happens when policymakers, terrified by the prospect of seeing years of work go down the drain due to last-minute developments, constantly rewrite their laws to incorporate new provisions. The worry that legal rules will end up being written in the sand leads them to develop a form of horror vacui – the ‘fear of empty space’.
Academics have studied this situation. A body of knowledge known as ‘adaptive regulation’ has emerged, focused on how to flexibly update legal rules over time, without having to fully rewrite the law. Typically, this requires a broad, principles- and outcomes-based legislative text, coupled with agile implementation rules and policy experimentation, and governed by a strong, expert-backed institution. This is important since, while the subject matter evolves quickly, the underlying principles and goals typically remain unchanged. It is rather the mode of implementation and compliance that requires constant attention.
The AI Act is a textbook example of a policy crafted in fast-changing circumstances. The Commission’s original version had some adaptive traits – the definition of AI was broad and technology-neutral, the lists of prohibited and high-risk applications were placed in annexes for easier updating, and there was a section on regulatory sandboxes.
Yet the text had two major problems – it didn’t consider the complexity of the AI value chain, and failed to create a strong, expert-backed institution with the authority to keep the regulation up to date. Co-legislators tried to solve both issues, and to their credit, they largely succeeded. A year ago, the AI Act was close to being ready, and highly praised as a trailblazer in global AI regulation.
Then came ChatGPT.
The emergence of powerful, versatile, foundational models that can be adapted to different purposes potentially frustrated the AI Act’s focus on ‘providers’, i.e. those entities that place the AI system on the market, or put it into service, for a specific purpose. By targeting entities that – often with little or no knowledge of the underlying AI model – gain access to its functionalities to develop specific applications, the AI Act risked spectacularly missing the forest for the trees. Or maybe more accurately, it risked missing tech giants and focusing instead on smaller European firms, which have limited knowledge of the risks generated by the AI they rely on.
After the horse has bolted?
Faced with these developments, the European Parliament decided to reopen the text to add several new provisions (before you misunderstand me: these additions responded to real needs and were competently drafted). This resulted in a significant delay, causing the EU to lose its lead in AI policy. Within a few weeks, China swiftly regulated generative AI systems and the US administration agreed voluntary commitments with the tech giants. Later, the US released a massive Executive Order, which masterfully blends regulatory measures and soft law.
The UK and US also created new agencies for AI safety. Gradually, global policymakers’ attention veered away from (EU) regulation, paving the way for softer forms of intervention such as codes of conduct at the national and international level and almost-exclusive attention on ‘safety’, rather than the protection of fundamental rights or the wider sharing of AI benefits.
Could things have gone differently? I believe so.
The co-legislators could have resisted the temptation to solve all problems in one go. They could have entrusted the AI Office with adopting guidance/rules on cooperation along the value chain (especially when very capable foundation models are at play). They could also have introduced ‘sunrise clauses’, warning industry players that if problems of copyright infringement, market concentration and lack of information sharing with downstream players persisted, new regulation could be eventually introduced. They could also have reproposed already-existing provisions on voluntary codes of conduct (for limited or low-risk AI applications) for general-purpose AI systems, as Alex Engler and I proposed more than a year ago.
By exploring these options, they would have obtained at least three important results. They would have truly future-proofed the AI Act, paving the way for adaptive policies after its entry into force. They would have preserved and nurtured the EU’s primacy in AI policy, bolstering the Act’s ability to trigger the ‘Brussels effect’. And they would have avoided introducing brand new provisions during trilogue negotiations – when evidence-based, participatory policymaking becomes impossible (the AI Act’s original impact assessment certainly didn’t contemplate any of these additional provisions).
Nine months down the road, the trilogue is still open. The next meeting will be on 6 December. According to Belgian tradition, this is when Saint-Nicholas/Sinterklaas, having reached Belgium from Spain, brings gifts to well-behaved children.
One can only hope that the Spanish presidency of the Council will bring this gift to Brussels – the wisdom to finally wrap up the AI Act. Otherwise, if it hangs around any longer, a good piece of legislation may end up morphing into its own worst enemy.