The European Union (EU) Artificial Intelligence Act is landmark legislation and one of the first laws to take effect governing the development and use of artificial intelligence (AI) technology. This historic regulatory framework was created to govern the use, development, and deployment of AI systems within the EU and to establish operational requirements for the businesses that build and deploy them.
As one of the first pieces of legislation of its kind, the EU AI Act aims to ensure that AI technologies are developed and used in a safe and ethical manner that protects the fundamental rights and freedoms of individuals. It also introduces a comprehensive set of rules that apply to all AI systems that can have an impact on people in the EU.
This blog will explore why the EU AI Act is significant, the key provisions of the Act, and what comes next for the regulation of AI systems and tools.
The EU Artificial Intelligence Act was first proposed by the European Commission in April 2021 as part of an initiative to ensure AI technology was deployed safely across all uses and functions. Following the rapid adoption of generative AI and large language models (LLMs) like ChatGPT and Bard, the Act addresses growing concerns around privacy and accountability in AI applications.
World leaders from the US, UK, China, and elsewhere strongly advocated for AI regulation, and even Sam Altman, CEO of OpenAI (the company behind ChatGPT), argued that regulation was necessary. It’s important to note that while AI regulation is at the forefront of discussions, the goal is to manage AI technology responsibly without restricting it, so as to encourage further AI innovation.
In March 2024, members of the European Parliament, representing the EU’s 27 member states, voted overwhelmingly in favor of the AI Act, with its first provisions expected to take effect six months after the Act enters into law. By 2026, the European Commission expects the AI Act to be fully in effect, with its regulations complete and enforceable.
The EU Artificial Intelligence Act is significant because it is the first-ever comprehensive legal framework on AI, and it has the potential to set the global standard for AI regulation. As AI technology becomes increasingly integrated into every aspect of daily life, from healthcare and education to employment and law enforcement, the risks associated with the technology multiply.
These include issues of data privacy, security, fairness, and accountability. The main goal of the EU AI Act is to address these challenges by establishing a framework that balances AI innovation with the protection of individual rights, limiting risk and preventing the abuse of these technologies.
At its core, the potential of AI is enormous, which is why the EU AI Act aims to ensure its safe use, ultimately protecting people, the market, and society at large. The Act is structured in a future-proof manner, categorizing AI systems by their impact, risk level, and scope.
The EU AI Act uses a risk-based approach to define its framework. By combining this approach with other established rules, the EU can build trust in the use of AI technology, drive AI innovation, and protect individuals from exploitation. It is a new method of identifying and mitigating risks, one that addresses gaps existing legislation, such as the EU General Data Protection Regulation (GDPR) and the Digital Operational Resilience Act (DORA), could not fully close.
The main provisions of the EU AI Act are as follows:
The risk-based approach to regulating AI enables a flexible response to the rapid evolution of AI technologies. By categorizing AI systems based on their level of risk, the AI Act ensures that stricter controls are applied to higher-risk applications, while lower-risk applications are subject to less stringent requirements (a simple illustrative sketch of this tiering follows the four category descriptions below).
The Act classifies AI risk into four different categories:
AI systems considered an unacceptable risk are those that clearly threaten people's safety, livelihoods, and fundamental rights. Systems classified as unacceptable risks are banned outright in the EU. Examples of unacceptable systems per the EU's political agreement include:
There are exceptions to this rule, particularly in law enforcement, where functions such as AI-powered remote biometric identification or facial recognition can be allowed. However, any such use must be approved in advance and reported on afterward. The use of AI systems to identify criminals or criminal activity comes with a strict set of rules and is limited to persons suspected of serious crimes, including sex trafficking, sexual exploitation, abduction, terrorism, murder, and robbery.
The high-risk category applies to AI systems used in sectors such as critical infrastructure, education, employment, essential private and public services, law enforcement, immigration, and the justice system. High-risk systems are subject to strict obligations before they can be placed on the market, including:
These systems must meet strict compliance requirements, including sound data governance, transparency, robustness, and security measures. High-risk applications must also complete a conformity assessment before deployment to ensure they meet the AI Act’s standards.
One major provision of the Act establishes regulatory sandboxes in which AI developers can test and train their AI models before going to market. These sandboxes allow developers to conduct real-world testing under regulatory supervision and ensure their systems adhere to the high-risk system standards.
AI systems classified as limited risk must adhere to specific transparency obligations. Most notably, the system must inform users when they are interacting with an AI system. Chatbots are a common example: users must be made aware that they are interacting with AI before they make any decisions or continue the interaction. Additionally, AI-generated content must be clearly labeled so as not to mislead the public about its origin. This obligation extends to AI-generated audio and video, such as deepfakes that may attempt to deceive users.
Most AI systems fall into the category of minimal or no risk. Minimal-risk systems can operate with few regulatory constraints, reflecting the EU's intent to encourage innovation and AI advancement. Although these systems are subject to few regulations, developers are still encouraged to adhere to best practices and ethical standards to ensure trustworthiness and user safety.
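To make the tiering concrete, the following Python sketch models the Act's four risk categories as a simple lookup from use case to obligation level. It is purely illustrative: the tier names mirror the Act's categories, but the example use cases and the classify_ai_system helper are assumptions for demonstration, not terminology or logic drawn from the regulation itself.

```python
# Illustrative sketch of the AI Act's four-tier, risk-based classification.
# Tier names mirror the Act; the example use cases and classify_ai_system
# helper are hypothetical and for demonstration only.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations and a conformity assessment before market entry"
    LIMITED = "transparency obligations (e.g., disclose AI interaction)"
    MINIMAL = "few constraints; voluntary best practices encouraged"


# Hypothetical mapping of example use cases to risk tiers.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "resume screening for hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def classify_ai_system(use_case: str) -> RiskTier:
    """Return the risk tier for a known example use case (defaults to minimal)."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)


if __name__ == "__main__":
    for case, tier in EXAMPLE_USE_CASES.items():
        print(f"{case}: {tier.name} -> {tier.value}")
```

Defaulting unknown use cases to minimal risk is a simplification for the sketch; under the Act, classification depends on a system's actual impact and sector, not a fixed lookup.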
The scope of the EU AI Act covers all companies operating within the EU as well as those whose AI systems affect people in the EU. It applies regardless of where the provider is based, as long as the system is used in the EU. This wide-reaching scope means that companies outside the EU must also comply if their systems impact EU citizens.
The main exceptions to the EU AI Act are AI systems used specifically for military or defense purposes and systems used strictly for research and scientific study.
As of March 2024, the EU Artificial Intelligence Act was successfully voted on and approved by the EU Parliament. The text was expected to be finalized and entered into law by April 2024, with enforcement phasing in beginning six months after the Act's entry into force.
The ban on systems identified as an unacceptable risk will take effect first, six months after entry into force, around October 2024.
General-purpose AI (GPAI) systems have 12 months to comply with the requirements of the Act, or 24 months if the product is already on the market.
The EU expects the AI Act to be fully in effect and enforceable by April 2026, pending any future revisions or amendments.
The European AI Office, which was established in February 2024 under the European Commission, is in charge of enforcing the EU AI Act. Additional enforcement of the Act will be carried out by authorities designated by each member state.
The Act also details a plan for oversight and enforcement across the EU, including establishing a European Artificial Intelligence Board. The European AI Board will facilitate cooperation between member states and ensure a unified approach to the Act's application and enforcement across the EU.
Non-compliance with the EU Artificial Intelligence Act can result in significant financial penalties, with the possibility of legal action if necessary. Fines for violations of the EU AI Act will depend on the type of AI system, the size of the company, and the severity of the violation:
GPAI systems were a major point of contention during the EU Artificial Intelligence Act deliberations. The Act defines GPAI systems as "AI models with the ability to perform a wide range of tasks without being specifically designed for that task." However, without proper management, GPAI can pose significant risks, such as making autonomous decisions beyond human comprehension or disregarding ethical and moral values.
The Act recognizes both the potential of GPAI and the risks it brings, given its broad applicability and potential impact on society. As such, GPAI models are classified as posing “systemic risk” when they have high-impact capabilities, such as when the cumulative compute used to train them exceeds 10^25 FLOPs (floating-point operations).
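To make the compute threshold concrete, here is a minimal Python sketch of the systemic-risk presumption described above. The 10^25 FLOP figure comes from the Act; the function name, constant name, and sample inputs are illustrative assumptions.

```python
# Minimal sketch of the GPAI systemic-risk compute test: a model is presumed
# to pose systemic risk when the cumulative compute used for its training
# exceeds 10^25 floating-point operations (per the Act's threshold).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if training compute exceeds the 10^25 FLOP threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


if __name__ == "__main__":
    # Hypothetical training-compute figures, for illustration only.
    print(presumed_systemic_risk(3e24))  # False: below the threshold
    print(presumed_systemic_risk(2e25))  # True: above the threshold
```

In practice, compute is only one trigger; the Commission can also designate a GPAI model as posing systemic risk based on its capabilities and reach.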
In order for GPAI systems to be placed on the market, GPAI providers must notify the European AI Office as soon as their models meet the following criteria:
To regulate GPAI, the EU Parliament and Council proposed a framework that can adapt to the rapid development of GPAI. This includes monitoring the evolution of GPAI systems, assessing their impact, and classifying them according to the risk-based approach if they are used in high-risk applications. The Act aims to ensure that as GPAI systems become more integrated into society, they do so in a way that protects EU citizens while also promoting innovation.
Achieving compliance with the EU Artificial Intelligence Act requires companies to take a proactive and thorough approach to understanding and implementing the necessary measures based on the classification of their AI systems. By taking the following steps, companies can begin implementing safe, ethical AI usage: