Brussels
Europe To Launch A.I. Act
Today, the European Parliament voted in favor of the Artificial Intelligence Act agreed with the EU Council last December, setting the stage for the world's first comprehensive framework for the responsible development and use of AI. The EU is attempting to define the contours of artificial intelligence development, concerned that unchecked growth could harm not only its citizens but also certain socio-economic balances and the very foundations of democracy. But how can it ensure that this does not happen? And has it already started a global conversation about creating a common regulated space worldwide?
The final procedural step now lies with the Council of the European Union, whose formal approval is required for the Act to become law. Once the Council endorses the agreement, the AI Act will enter into force, establishing a comprehensive regulatory framework governing the development and use of artificial intelligence within the EU.
This historic legislation aims to safeguard fundamental rights and democracy while fostering innovation and competitiveness within the EU's digital market.
However, the EU legislative initiative has elicited mixed reactions from the tech sector, politicians, and rights groups. Key concerns revolve around the risk of stifling competition and innovation through perceived over-regulation. France and Germany have notably cautioned against stringent controls that could disadvantage European AI startups in the global market, highlighting the competitive edge countries like the US and China might retain.
The Act categorizes AI applications into four risk levels, from minimal to unacceptable, applying the strictest regulations to high-risk and prohibited uses; general-purpose models, such as those behind OpenAI's ChatGPT, are subject to dedicated transparency obligations.
Critics, including the Computer & Communications Industry Association and France Digitale, argue that the Act's "stringent obligations" could hinder innovation, lead to a talent exodus, and impose burdensome processes on startups. They advocate for a focus on regulating the use of technology rather than the technology itself, fearing the current approach equates to "regulating mathematics."
Moreover, the Act's copyright rules and the requirement for AI developers to provide detailed documentation have been both praised for promoting transparency and criticized for potentially hampering the development of generative AI and foundation models. Concerns also extend to the Act's compatibility with existing GDPR standards and to the challenge of applying it to general-purpose AI systems without stifling innovation.
With the Council's formal adoption still pending and European Parliament elections approaching, the tech sector remains apprehensive about the Act's long-term impact on Europe's competitive stance in AI innovation. Critics call for a balanced approach that fosters innovation while ensuring responsible AI development.
Once formally adopted by the EU Council, all eyes will be on the delegated acts and implementing acts that the European Commission will write and adopt independently, subject to any reservations that might be expressed by the European Parliament and the Council.
In particular, the Commission will be able to add AI systems to, or remove them from, the Annex that lists high-risk uses. When an AI system is deemed high-risk by the EU, its provider will have to bring it into compliance or withdraw it from the European market. The sanctions devised by the EU are already substantial and could become harsher over time.
All this suggests that the major AI players are already on alert, monitoring every move of the European institutions. Meanwhile, the European Commission will be renewed, with new Commissioners replacing the outgoing ones. There will inevitably be a period of adjustment and review of open files, and in the meantime the world of AI will advance in giant strides, faster than the regulatory texts the Commission manages to approve.
The absence of a coordinated global framework risks creating a fragmented regulatory landscape, where differing standards could hinder international cooperation and innovation. It could also lead to "regulatory arbitrage," where companies might relocate their AI operations to countries with more lenient regulations, potentially undermining efforts to ensure ethical and safe AI development.
Recognizing these challenges, various international agencies and organizations, including the United Nations and the OECD, have initiated discussions on ethical AI guidelines and frameworks for cooperation. These efforts aim to harmonize AI governance globally, ensuring that AI development benefits humanity while mitigating its risks.
In conclusion, while the European Artificial Intelligence Act is a commendable step towards regulating AI, the global nature of technology necessitates a broader, coordinated approach. Establishing a common regulatory framework at the international level would help manage AI's risks more effectively, ensuring that its development supports socio-economic equity and reinforces the foundations of democracy worldwide. As AI continues to evolve, the need for global collaboration becomes ever more critical to harness its benefits while safeguarding against its challenges.
Let's see the content of the agreement voted on today by the European Parliament and ready to be confirmed by the EU Council in the coming months.
1. Key Provisions of the Artificial Intelligence Act
Banned Applications
The legislation identifies and bans specific AI applications that pose significant risks to citizens' rights and democratic values. These include:
- Biometric categorization systems that profile individuals based on sensitive characteristics.
- The indiscriminate scraping of facial images for recognition databases.
- Emotion recognition in workplaces and schools.
- Social scoring systems.
- AI that manipulates human behavior or exploits vulnerabilities.
Law Enforcement Exemptions
While the act sets stringent rules, it allows for narrow law enforcement exceptions under strict conditions, including prior judicial authorization for the use of real-time biometric identification in public spaces for serious crimes or imminent terrorist threats.
Obligations for High-Risk Systems
AI systems deemed high-risk must undergo a fundamental rights impact assessment. This category includes systems with significant implications for health, safety, and fundamental rights, as well as those used in critical sectors like banking and insurance, or to influence elections.
General Purpose AI Systems
General-purpose AI systems must adhere to transparency requirements, including detailed documentation and summaries of training data. High-impact models must evaluate and mitigate systemic risks and report on serious incidents and their energy efficiency.
2. Support for Innovation and SMEs
The act promotes regulatory sandboxes and real-world testing, facilitated by national authorities, to help small and medium-sized enterprises (SMEs) and innovators develop and refine AI technologies before market launch. This approach is designed to level the playing field, allowing smaller entities to compete with industry giants while fostering a vibrant ecosystem for AI innovation.
3. Sanctions for Non-compliance
To ensure adherence to the new regulations, the act prescribes substantial fines for violations. Penalties range from €7.5 million or 1.5% of global turnover for lesser infringements up to €35 million or 7% of global turnover for the most serious ones, depending on the nature of the infringement and the size of the company involved. This tiered approach underscores the seriousness with which the EU views compliance with AI regulations.
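Each tier pairs a fixed ceiling with a turnover-based one, and for large undertakings the Act applies whichever of the two is higher. A minimal illustrative sketch of that rule (the function name and the basis-point encoding are my own; figures are the tiers cited above, not legal advice):

```python
def fine_cap_eur(turnover_eur: int, fixed_cap_eur: int, pct_bp: int) -> int:
    """Return the applicable fine ceiling: the higher of the fixed cap
    or a percentage of worldwide annual turnover.

    pct_bp is the percentage expressed in basis points (7% = 700,
    1.5% = 150) so the arithmetic stays in exact integers.
    """
    turnover_based = turnover_eur * pct_bp // 10_000
    return max(fixed_cap_eur, turnover_based)

# Top tier (EUR 35M or 7%): for EUR 1bn turnover, 7% = EUR 70M wins.
print(fine_cap_eur(1_000_000_000, 35_000_000, 700))  # → 70000000

# Same tier, EUR 100M turnover: 7% = EUR 7M, so the fixed cap applies.
print(fine_cap_eur(100_000_000, 35_000_000, 700))  # → 35000000
```

The "whichever is higher" comparison means the ceiling scales with company size: for a firm with modest turnover the fixed amount binds, while for large players the percentage dominates.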
What next?
Before becoming EU law, the agreed text of the Artificial Intelligence Act must be formally adopted by both the European Parliament and the Council. Following the European Parliament's positive vote, a final positive vote by the EU Council is expected in the coming weeks.
Stay informed about the AI Act and subsequent developments
© Copyright eEuropa Belgium 2020-2024
Source: © European Union, 1995-2024
Go to the European A.I. Act Page (free access)