European Companies Unite to Criticize EU's Artificial Intelligence Regulations
The European Parliament and the Council held their first trilogue in June 2023; a second is scheduled for July.
Brussels, 3 July 2023 - 7 minute read
Some of Europe's largest companies have taken collective action to criticize the European Union's proposed artificial intelligence rules, arguing that the Artificial Intelligence Act is ineffective and could harm competition. In an open letter sent last Friday to the European Parliament, the Commission, and the member states, more than 150 executives from companies including Renault, Heineken, Airbus, and Siemens criticized the AI Act for its potential to "jeopardize Europe's competitiveness and technological sovereignty".
The Council adopted its common position on the AI Act on 6 December 2022, adding new provisions to increase transparency and to simplify the Act's compliance framework. On 14 June 2023, the European Parliament established its negotiating position on the AI Act with substantial amendments to the Commission's text. The Parliament's priority is to ensure that artificial intelligence systems used in the EU are safe, transparent, traceable, and non-discriminatory. The two co-legislators began negotiations in June, and a second round will be held in July 2023.
The EU approach
The EU AI Act adopts a risk-based approach, imposing restrictions based on the perceived level of risk associated with specific AI applications. It bans certain AI tools considered "unacceptable," such as analytics systems used by law enforcement to predict criminal behavior. Additionally, it introduces limits on "high-risk" technologies, including recommendation algorithms and tools that can influence elections. The legislation also focuses on generative AI, imposing obligations on companies to label AI-generated content and disclose the use of copyrighted data in training AI models.
The EU says its approach to AI is proactive and aims to solidify its position as a global leader in tech regulation. The Act adds to the suite of existing regulatory tools aimed at Silicon Valley companies and sets standards that could influence policymakers worldwide. The alignment between European and U.S. regulators has grown stronger in recent years, as both sides recognize the need to address the power of tech giants. European officials have engaged in discussions with U.S. lawmakers on AI, sensing a greater urgency in Congress to regulate AI technologies.
U.S. approach
In contrast to the EU's progress, the U.S. Congress is only beginning to grapple with the potential risks of AI. Senate Majority Leader Charles E. Schumer has initiated efforts to craft an AI framework, citing national security concerns and the need to prevent adversaries from taking the lead in AI regulation. However, Congress is still months away from considering specific legislation. The EU's more advanced AI legislation highlights the concern among some U.S. lawmakers that the United States is falling behind in setting regulatory standards for technology.
Tech giants worried about restrictive bias
The passage of the EU AI Act has significant implications for tech giants, including Google, whose advertising technology business was also targeted by a new antitrust challenge from Brussels on the same day. The legislation is particularly concerning for OpenAI, the creator of ChatGPT, which has raised the possibility of withdrawing from Europe because of the potential consequences of the regulations. While the European Parliament's approval is a crucial step, the bill still requires negotiations with the Council before it becomes law. With the Parliament having adopted 771 amendments to the Commission's proposal, EU legislators told us that the negotiations will take a very long time.
While companies like Google, Microsoft, and OpenAI have voiced support for AI regulation, they have raised concerns about certain aspects of the EU's approach.
Google, for example, argues that disclosure requirements could compromise trade secrets and create security vulnerabilities. Despite these reservations, the passage of the EU AI Act marks a significant step forward in regulating AI within the European Union. However, it is important to note that no single law can fully address all the challenges and complexities associated with AI, and ongoing efforts will be necessary to develop comprehensive AI policies and frameworks.
© Copyright eEuropa Belgium 2020-2023
Source: © European Union, 1995-2023