Your Gate to Europe
  • HOME
  • OUR PRODUCTS
  • EU-POLICIES
  • EU-INSIDE
  • ABOUT US
  • MEMBER LOGIN


THE NEW FRONTIERS

EU Artificial Intelligence Act

The Artificial Intelligence Act (AI Act) is the European Union's comprehensive legal framework on AI. It aims to address the risks associated with AI systems while positioning Europe as a global leader in trustworthy AI.

On 21 April 2021, the European Commission released a Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.

In December 2023, the Council of the EU and the European Parliament reached provisional agreement on a compromise text.

On 13 March 2024, the European Parliament adopted the compromise text.

On 12 July 2024, the Regulation was published in the Official Journal of the EU, and it entered into force on 1 August 2024.

Key Objectives

  • Ensure Safe and Trustworthy AI Systems
    • Guarantee that AI systems placed on the EU market and used within the Union are safe and respect existing laws on fundamental rights and Union values.
    • Protect fundamental rights by ensuring that AI technologies do not lead to discrimination or unfair practices, safeguarding the safety and rights of people and businesses.
  • Provide Legal Certainty and Facilitate Innovation
    • Ensure legal certainty to facilitate investment and innovation in AI.
    • Support innovation by reducing administrative and financial burdens for businesses, particularly small and medium-sized enterprises (SMEs).
  • Enhance Governance and Effective Enforcement
    • Enhance governance and effective enforcement of existing laws on fundamental rights and safety requirements applicable to AI systems.
    • Implement clear requirements and obligations for AI developers and users regarding specific uses of AI, improving risk management.
  • Facilitate the Development of a Single Market for AI
    • Facilitate the development of a single market for lawful, safe, and trustworthy AI applications across the EU.
    • Prevent market fragmentation by ensuring consistent regulations and standards for AI systems within the Union.

Why did the EU decide to establish rules on AI?

While AI systems offer significant benefits and can help solve societal challenges, the EU considers that certain applications pose risks that must be managed to prevent undesirable outcomes, such as discrimination or unfair practices. Existing legislation was deemed insufficient to address the specific challenges AI systems may bring.

The EU Rules on AI

The AI Act adopts a risk-based framework, categorizing AI systems into four levels:

  1. Unacceptable Risk: AI practices that are banned due to posing clear threats to safety, livelihoods, and rights (e.g., social scoring by governments).
  2. High Risk: AI systems that require strict obligations before market deployment. These include AI used in:
    • Critical infrastructures (e.g., transport systems).
    • Education and vocational training (e.g., exam scoring).
    • Safety components of products (e.g., robot-assisted surgery).
    • Employment and worker management (e.g., CV-sorting software).
    • Essential services (e.g., credit scoring for loans).
    • Law enforcement (e.g., evaluation of evidence reliability).
    • Migration and border control (e.g., visa application assessments).
    • Administration of justice (e.g., AI tools for legal decisions).
  3. Limited Risk: AI systems with specific transparency obligations to ensure users are informed (e.g., chatbots, deepfakes).
  4. Minimal or No Risk: The majority of AI systems, such as spam filters or AI-enabled video games, which may be used freely.
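The four-tier framework above can be sketched as a simple classification, purely for illustration. The tier names follow the list above, but the example-to-tier mapping and all function names below are our own assumptions, not an official taxonomy from the Regulation:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before market deployment"
    LIMITED = "specific transparency obligations"
    MINIMAL = "free use"

# Illustrative mapping of the example use cases listed above
# to their risk tiers (not an exhaustive or authoritative list).
EXAMPLE_TIERS = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "CV-sorting software": RiskTier.HIGH,
    "credit scoring for loans": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake generator": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a known example use case;
    unknown systems default to the minimal-risk tier here, which is
    a simplification, not a rule from the Act."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

print(tier_for("chatbot").name)  # LIMITED
```

In the Act itself, classification depends on the system's intended purpose and context of use, not on a static lookup; the sketch only conveys the four-tier structure.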

Requirements for High-Risk AI Systems

Providers of high-risk AI systems must ensure:

  • Risk Assessment and Mitigation: Implement adequate measures to identify and mitigate risks.
  • High-Quality Datasets: Use datasets that minimize risks and discriminatory outcomes.
  • Traceability: Maintain logs to ensure the traceability of results.
  • Detailed Documentation: Provide necessary information for authorities to assess compliance.
  • User Information: Offer clear and adequate information to users.
  • Human Oversight: Include measures for appropriate human oversight.
  • Robustness and Security: Ensure high levels of accuracy, robustness, and cybersecurity.

Note: Remote biometric identification systems in public spaces for law enforcement are considered high-risk and are subject to strict regulations, with narrow exceptions under specific conditions.
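The seven provider obligations listed above can be viewed as a compliance checklist. The sketch below models them as boolean fields, purely for illustration; the field names are our own shorthand, not terms from the Regulation:

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    """Illustrative checklist of the provider obligations for
    high-risk AI systems described above (shorthand names)."""
    risk_assessment_and_mitigation: bool = False
    high_quality_datasets: bool = False
    traceability_logging: bool = False
    detailed_documentation: bool = False
    user_information: bool = False
    human_oversight: bool = False
    robustness_and_security: bool = False

    def missing(self) -> list[str]:
        """Return the obligations not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# A provider that has so far addressed only two obligations:
check = HighRiskCompliance(human_oversight=True, traceability_logging=True)
print(len(check.missing()))  # 5
```

In practice, conformity is demonstrated through a formal conformity assessment before the system is placed on the market, not a self-declared checklist; the sketch only makes the list of obligations concrete.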

Prohibited AI Systems under the AI Act

The AI Act prohibits certain AI systems due to their potential to cause significant harm or violate fundamental rights.

The following types of AI practices are prohibited:

  1. Subliminal Manipulation: AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort their behavior in a way that causes or is likely to cause physical or psychological harm.
  2. Exploitation of Vulnerabilities: AI systems that exploit vulnerabilities of specific groups due to age, physical or mental disability, or socio-economic status, to materially distort a person's behavior, causing or likely causing harm.
  3. Biometric Categorization Based on Sensitive Attributes: AI systems that use biometric data to infer sensitive attributes such as race, political opinions, trade union membership, religion, sex life, or sexual orientation. Exceptions exist for labeling or filtering of lawfully obtained biometric data or when law enforcement categorizes biometric data under strict conditions.
  4. Social Scoring by Public Authorities: AI systems used by public authorities for social scoring, leading to detrimental or unfavorable treatment of individuals or groups in social contexts unrelated to the original data collection, violating fundamental rights.
  5. Predictive Policing Based Solely on Profiling: AI systems that assess the risk of individuals committing offenses based solely on profiling, without objective and verifiable evidence directly related to criminal activity. Exceptions apply when used to augment human assessments based on objective facts.
  6. Unlawful Facial Recognition Databases: AI systems that compile facial recognition databases through indiscriminate scraping of images from the internet or closed-circuit television (CCTV) footage without consent.
  7. Emotion Recognition in Certain Contexts: AI systems that infer emotions of individuals in workplaces or educational institutions, except when necessary for medical or safety purposes.
  8. 'Real-Time' Remote Biometric Identification (RBI) in Public Spaces: The use of AI systems for real-time remote biometric identification in publicly accessible spaces by law enforcement is prohibited, except in specific circumstances such as:
    • Searching for Missing Persons: Locating missing children, abduction victims, or individuals who have been trafficked or sexually exploited.
    • Preventing Imminent Threats: Preventing a specific and substantial threat to life or safety, or a foreseeable terrorist attack.
    • Identifying Suspects of Serious Crimes: Detecting, locating, identifying, or prosecuting perpetrators or suspects of serious criminal offenses (e.g., murder, rape, terrorism, human trafficking, organized crime).

Notes on Remote Biometric Identification (RBI):

  • Strict Conditions for Use: Even in exceptional cases, the deployment of AI-enabled real-time RBI systems must be necessary and proportionate, considering the rights and freedoms of the affected individuals.
  • Fundamental Rights Impact Assessment: Before deployment, law enforcement agencies must conduct a fundamental rights impact assessment to evaluate potential risks and implications.
  • Registration Requirements: The AI system must be registered in the EU database of high-risk AI systems. In duly justified cases of urgency, deployment can commence without prior registration, provided that registration occurs without undue delay afterward.
  • Authorization Protocol: Prior to deployment, authorization must be obtained from a judicial authority or an independent administrative authority. In urgent situations, deployment can begin without prior authorization if it is requested within 24 hours. If authorization is subsequently denied, the use must cease immediately, and all data, results, and outputs must be deleted.

By prohibiting these AI systems, the AI Act aims to protect individuals' fundamental rights, prevent discriminatory practices, and ensure that AI technologies are used responsibly within the European Union.

General-Purpose AI Models

The AI Act introduces transparency obligations for all general-purpose AI models, to enable a better understanding of these models, and additional risk-management obligations for the most capable and impactful models. These obligations include:

  • Systemic Risk Mitigation: Self-assessment and mitigation strategies for systemic risks.
  • Incident Reporting: Reporting serious incidents and malfunctions.
  • Testing and Evaluation: Conducting tests and evaluations of AI models.
  • Cybersecurity Requirements: Ensuring models meet cybersecurity standards.

Future-Proof Legislation

The regulation is designed to adapt to technological changes, ensuring AI applications remain trustworthy after market deployment through ongoing quality control and risk management by providers.

Enforcement and Implementation

The European AI Office oversees the AI Act's enforcement and implementation in collaboration with Member States. It aims to:

  • Create an environment where AI technologies respect human dignity, rights, and trust.
  • Foster collaboration, innovation, and research in AI among stakeholders.
  • Engage in international dialogue and cooperation on AI governance.
  • Position Europe as a leader in the ethical and sustainable development of AI technologies.

Sources: European Union, http://www.europa.eu/, 1995-2025.

eEuropa Belgium
Avenue Louise, 367
1050 Brussels
BELGIUM

Bld. Franck Pilatte, 19 bis
06300 Nice
FRANCE

Yono House, 9-1 Kamiochiai, Saitama-shi, Saitama-ken
〒338-0001 JAPAN

Via S. Veniero 6
20148 Milano
ITALY

© 2025, eEuropa Belgium