
The European Union’s Artificial Intelligence Act

Bola Ogbara

The European Union has finalized its AI Act, prioritizing safety and fundamental rights. The act sets a global precedent for responsible AI development and may inspire other countries to consider AI legislation sooner.


Legislation around the safety of artificial intelligence has become a priority across the world. The United Kingdom’s National Cyber Security Centre (NCSC) recently assessed the threat that AI poses to cybersecurity, saying that AI “will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years”. The US and the UK have collaborated on a set of guidelines for secure AI system development. Now, the European Union has finalized language for its long-awaited AI Act.


The EU started discussions on artificial intelligence systems in late October 2020, before proposing an AI Act in 2021 to “harmonise rules on artificial intelligence” and “improve trust in artificial intelligence and foster the development and uptake of AI technology.” By December 2023, the EU Council and Parliament had reached a provisional agreement on an AI Act that, in addition to the earlier goals, also ensures that AI systems “are safe and respect fundamental rights and EU values.” The EU shared a press release explaining the contents of the draft regulation, which received a final compromise text on February 2, 2024.


Although there has been global interest in AI regulation, the EU’s AI Act is one of the first of its kind. Carme Artigas, the Spanish secretary of state for digitalization and artificial intelligence, called the legislation “a historical achievement, and a huge milestone towards the future”. The act’s central concept is weighing risk in AI use: “the higher the risk, the stricter the rules.” Those who deploy high-risk AI systems will have to perform a fundamental rights impact assessment and meet increased transparency standards. Some users of these high-risk systems will also have to register in an EU database.


The regulation doesn’t provide an exhaustive list of high-risk systems, but it does outline the criteria for classifying programs as such. For example, an AI system would be considered high-risk if it is a safety component of a product, poses a potential risk to health or fundamental rights, or is involved in the management and operation of critical infrastructure (which echoes some of the concerns in CISA’s 2023 AI Roadmap). Even low-risk AI systems will still be regulated, though with lighter transparency obligations.
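
To make that classification logic concrete, here is a minimal sketch in Python, assuming the press release’s criteria can be read as an “any of the following” test (the function and parameter names are hypothetical, and the real legal assessment is far more nuanced):

```python
def is_high_risk(
    is_product_safety_component: bool,
    poses_risk_to_health_or_rights: bool,
    manages_critical_infrastructure: bool,
) -> bool:
    """Hypothetical sketch: treat an AI system as high-risk if it meets
    any of the criteria paraphrased from the EU's press release."""
    return any([
        is_product_safety_component,
        poses_risk_to_health_or_rights,
        manages_critical_infrastructure,
    ])

# Example: an AI system that helps operate part of a power grid
print(is_high_risk(False, False, True))  # True
```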


The press release also outlines some prohibited uses of AI, such as cognitive behavioral manipulation, untargeted scraping of facial images from public databases, emotion recognition in educational institutions and the workplace, social scoring, biometric categorization to infer sensitive data like sexual orientation or religious beliefs, and some cases of predictive policing. There are some exceptions for law enforcement’s use of AI, but a mechanism has been added to the law to keep fundamental rights protected from abuse by AI.


The regulation will also create an AI Office within the European Commission to enforce the rules for general-purpose models and classify AI models with systemic risks. Non-compliance with the AI Act may result in large fines, with the penalty for using prohibited AI practices set at “35 million euros or 7% annual turnover”. The AI Act won’t be enforced until 2026, as it enters into force 20 days after publication and applies 24 months after that.
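
As a worked example of that penalty ceiling, here is a minimal sketch, assuming the fine is the higher of the fixed amount and the turnover-based amount (the function name and example figures below are hypothetical):

```python
def fine_for_prohibited_practice(annual_turnover_eur: float) -> float:
    """Hypothetical sketch: compute the maximum fine for a prohibited
    AI practice, assuming the higher of EUR 35 million or 7% of annual
    turnover applies."""
    FIXED_AMOUNT_EUR = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FIXED_AMOUNT_EUR, TURNOVER_SHARE * annual_turnover_eur)

# Example: a company with EUR 1 billion in annual turnover would face
# a ceiling of EUR 70 million, since 7% of turnover exceeds the fixed amount.
print(fine_for_prohibited_practice(1_000_000_000))  # 70000000.0
```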


The EU's AI Act marks a watershed moment in regulating these powerful tools. By prioritizing safety, respecting fundamental rights, and taking a risk-based approach, the Act sets a global precedent for responsible AI development. Though the act won’t go into effect for another two years, it may inspire other countries to consider pushing for legislation on AI use much sooner.