ec.europa.eu (European Commission) | 9 December 2023

The European AI Act is here


Statement by Commissioner Breton

Historic!

With the political deal on the AI Act sealed tonight, the EU becomes the first continent to set clear rules for the use of AI.

The final negotiation round—known as a "trilogue" in Brussels jargon—between the EU Parliament, EU Council, and EU Commission spanned no less than 37 hours!

This ultramarathon session demonstrates the vibrancy of our democracy and the commitment of EU leaders to find the right balance in the general European interest.

The AI Act is much more than a rulebook—it's a launchpad for EU startups and researchers to lead the global race for trustworthy AI.

Balancing user safety and innovation for startups, while respecting fundamental rights and European values, was no easy feat. But we managed.

When I joined the European Commission in 2019, I embarked on a mission to organise our “information space” and to invest in our technological leadership – including in AI.

“Too complicated, will take too much time, it's anti-innovation, let developers self-regulate…” A number of companies, backed by non-EU countries, tried to discourage us. They knew that the first to establish rules has a first-mover advantage in setting the global standard. This made the legislative process particularly complex – but not impossible.

Over these past four years, we engaged in extensive consultations and a democratic process with the European Parliament and the 27 Member States to make AI regulation a reality.

During this time, technology and its applications evolved rapidly. So did our approach. For instance, large general-purpose AI models gained prominence, and business-to-consumer applications, non-existent in 2019, rapidly gained a vast user base. Our democratic bodies adjusted the legal proposal to these changes, always aiming to balance safety with innovation.

This culminated in today's final, successful trilogue.

 

Highlights of the trilogue

  • Large AI Models (e.g. GPT-4)

With today's agreement, we are the first to establish a binding but balanced framework for large AI models (“general-purpose AI models”), promoting innovation along the AI value chain.

We agreed on a two-tier approach, with transparency requirements for all general-purpose AI models and stronger requirements for powerful models with systemic impacts across our EU Single Market.

For these systemic models, we developed an effective and agile system to assess and tackle their systemic risks.

During the trilogue we carefully calibrated this approach, in order to avoid excessive burden, while still ensuring that developers share important information with downstream AI providers (including many SMEs). And we aligned on clear definitions that give legal certainty to model developers.

 

  • Protecting fundamental rights

We spent a lot of time finding the right balance between making the most of AI's potential to support law enforcement and protecting our citizens' fundamental rights. We do not want any mass surveillance in Europe.

My approach is always to regulate as little as possible, as much as needed. That is why I promoted a proportionate risk-based approach. This makes the EU AI Act unique – it allows us to ban AI uses that violate fundamental rights and EU values, set clear rules for high-risk use cases, and promote innovation without barriers for all low-risk use cases.

During the trilogue, we defined the specifics of this risk-based approach. In particular, we agreed on a set of well-balanced and well-calibrated bans, such as on real-time facial recognition, with a small number of well-defined exemptions and safeguards.

We also defined various high-risk use cases, such as certain uses of AI in law enforcement, the workplace and education, where we see a particular risk for fundamental rights. And we ensured that the high-risk requirements are effective, proportionate and well-defined.

  • Innovation

We developed tools to promote innovation. Beyond the previously agreed regulatory sandboxes, we aligned on the possibility to test high-risk AI systems in real-world conditions (outside the lab, with the necessary safeguards).

We also agreed that future-proof technical standards are at the heart of the regulation.

This also includes certain environmental standards.

  • Enforcement

And finally, we agreed on a robust enforcement framework for the AI Act (distinguishing it from the many voluntary frameworks around the world).

It involves market surveillance at national level and a new EU AI Office, to be established within my services at the European Commission.

And it includes tough penalties for companies that do not comply with the new rules.

***

Europe has positioned itself as a pioneer, understanding the importance of its role as global standard-setter.

Now, we embark on a new journey.

That's why I refer to the AI Act as a "launchpad". It provides startups and researchers with the opportunity to flourish by ensuring legal certainty for their innovations.

This Act is not an end in itself; it's the beginning of a new era in responsible and innovative AI development – fueling growth and innovation for Europe.
