The European Parliament has approved the world’s first binding legislation on Artificial Intelligence, a groundbreaking development that seeks to balance the ethical use of AI with its vast potential for societal advancement.

With 523 votes in favor, 46 against, and 49 abstentions, the Artificial Intelligence Act marks a pivotal moment in the regulation of AI technologies, aiming to strike a delicate balance between innovation and ethical governance.


A Decisive Call for Ethical AI

Crafted after comprehensive negotiations with member states that concluded in December 2023, the Artificial Intelligence Act is a reflection of Europe’s commitment to leading the ethical charge in AI development and use. It establishes a nuanced framework, categorizing AI systems based on their potential risk and impact, thereby delineating a clear boundary between permissible innovations and those deemed hazardous for public deployment.

Banning High-Risk AI Applications

At the heart of the legislation is the protection of citizens’ rights, with an outright ban on AI applications that compromise these liberties. This includes technologies capable of:

  • Biometric categorization based on sensitive traits
  • Indiscriminate scraping of facial images for recognition databases
  • Emotion recognition in work and educational settings
  • Social scoring
  • Predictive policing based solely on profiling
  • AI designed to manipulate or exploit human vulnerabilities

Law Enforcement Under the Microscope

The legislation also carves out a nuanced exception for law enforcement agencies, allowing the use of real-time biometric identification systems in strictly defined and controlled scenarios, such as the search for missing persons or the prevention of terrorist activities. However, such deployments are contingent upon obtaining prior judicial or administrative authorization, underlining the act’s commitment to safeguarding personal freedoms even in the pursuit of security.

Comprehensive Obligations for High-Risk Systems

Addressing the broader spectrum of high-risk AI applications, the Artificial Intelligence Act mandates rigorous obligations for systems deployed in critical infrastructure, healthcare, law enforcement, and other sensitive domains. These obligations encompass risk assessment, transparency, accuracy maintenance, human oversight, and the provision for citizens to lodge complaints and receive explanations for AI-driven decisions impacting their lives.

Transparency and Innovation: Two Sides of the Same Coin

In addition to stringent regulations for high-risk AI, the act introduces transparency requirements for general-purpose AI systems, including compliance with EU copyright laws and the publication of training data summaries. This move towards openness is paralleled by initiatives to foster innovation, such as the establishment of regulatory sandboxes and real-world testing environments aimed at supporting small and medium-sized enterprises (SMEs) and startups in developing AI technologies responsibly.

Voices from the Parliament Floor

Brando Benifei, co-rapporteur for the Internal Market Committee, hailed the achievement of establishing “the world’s first binding law on artificial intelligence.” He highlighted the legislation’s objectives to “reduce risks, create opportunities, combat discrimination, and enhance transparency.” Benifei praised the European Parliament’s role in outlawing unacceptable AI practices and safeguarding the rights of workers and citizens, and pointed to the forthcoming AI Office, which will help companies comply with the new rules and ensure that human beings and European values remain at the core of AI development.

Dragos Tudorache, co-rapporteur of the Civil Liberties Committee, reflected on the broader implications of the AI Act, stating, “The EU has delivered.” He linked AI’s development to the fundamental values that underpin society and noted the extensive work ahead in rethinking the social contract, educational models, labor markets, and military strategies in light of AI advancements. Tudorache viewed the AI Act as a foundational piece for a new governance model centered around technology, stressing the importance of implementing the law in practice.

Next Steps

The regulation is still subject to a final lawyer-linguist check and is expected to be formally adopted before the end of the parliamentary term (through the so-called corrigendum procedure). It must also be formally endorsed by the Council.

It will enter into force twenty days after its publication in the Official Journal of the EU and become fully applicable 24 months after entry into force, with the following exceptions:

  • Bans on prohibited practices: six months after entry into force
  • Codes of practice: nine months after entry into force
  • General-purpose AI rules, including governance: 12 months after entry into force
  • Obligations for high-risk systems: 36 months after entry into force