EU legislators negotiate on Artificial Intelligence
EP and Council held the first trilogue in June 2023; the second follows in July.
Brussels, 6 October 2023 - 7 MINUTES READ
In June 2023, the European Parliament (EP) adopted its position on the EU's A.I. law, adding further restrictions to the rules of the Commission's draft Regulation. On 27 June, the EP and the Council started negotiations, and a second round will be held in July 2023. In the meantime, some of the largest companies in Europe have acted collectively to criticize the proposed European A.I. Regulation, arguing that the Artificial Intelligence Act is ineffective and could have a negative impact on competition. In an open letter sent to the European Parliament, the Commission, and the member states last Friday, over 150 executives from companies such as Renault, Heineken, Airbus, and Siemens criticized the AI Act for its potential to "jeopardize Europe's competitiveness and technological sovereignty".
At the end of June, the Council of the EU and the European Parliament held the first round of negotiations on the draft EU Regulation that will govern the use of A.I.
Both legislative institutions entered the negotiations with their own positions.

The European Parliament made several key amendments to the proposed regulation on AI systems in the EU. It aligned the definition of AI systems with that of the OECD and introduced definitions for "general purpose AI system" and "foundation model" into EU law. Parliament expanded the list of prohibited AI systems, banning biometric identification systems (except for severe crimes and with prior judicial authorization), predictive policing systems, emotion recognition systems, and AI systems that scrape biometric data. It also tightened the criteria for high-risk AI systems, requiring them to pose a "significant risk", and added AI systems used to influence political campaigns, as well as recommender systems, to the high-risk category. General-purpose AI and generative AI models, such as ChatGPT, would be subject to transparency obligations and disclosure requirements. On governance and enforcement, the Parliament's text strengthens national authorities, establishes an AI Office, and enhances citizens' rights. To promote innovation, research activities and open-source AI components would be exempt from compliance with the AI rules.

The Council adopted its common position on the Artificial Intelligence Act with the aim of ensuring the safety and compliance of AI systems in the EU. The definition of AI systems was refined to focus on machine learning and logic-based approaches. Prohibited practices now include the use of AI for social scoring by private actors and the exploitation of vulnerabilities in social or economic situations. The classification of high-risk AI systems was clarified, taking into account the severity of potential risks. Requirements for high-risk systems were adjusted to be more feasible, and the responsibilities of stakeholders along the AI value chain were clarified. Provisions for general-purpose AI systems and their integration into high-risk systems were added, and specificities regarding law enforcement authorities were addressed, emphasizing the need to respect the confidentiality of operational data. The compliance framework, market surveillance, and the role of the AI Board were simplified and strengthened, and penalties were adjusted to be proportionate for SMEs and start-ups. Transparency measures were enhanced, including registration in the EU database for high-risk AI systems and informing individuals about the use of emotion recognition systems. Complaint procedures and measures to support innovation, such as AI regulatory sandboxes and real-world testing, were included. The aim is an innovation-friendly framework with evidence-based regulatory learning that also supports smaller companies through administrative assistance and limited derogations.

The EU aims to finalize the AI Act by the end of the year.
Tech giants worried about EU restrictive bias
Many European companies have raised concerns about the broad scope of the law and its potential impact on trade secrets and international data transfers. Dozens of top business leaders in Europe and the U.S. have expressed their concerns about the EU's proposed legislation on artificial intelligence in an open letter, in which executives from companies such as Siemens, Carrefour, Renault, and Airbus warned that the EU AI Act could harm the bloc's competitiveness and drive away investment.
They argued that the draft rules are too restrictive, particularly regarding generative AI and foundation models. The executives called for a revision of the bill, emphasizing the need for a risk-based approach and a regulatory board of experts to ensure adaptability to evolving technology.
They also urged EU lawmakers to collaborate with their counterparts in the United States to establish a level playing field for AI regulation.
The economic impact of A.I.
AI will carry decisive weight in many sectors of the economy, from transport and retail to marketing and healthcare.
While the scientific and industrial worlds are in turmoil over the actions being taken on different continents, it should be remembered that the entire world economy will be affected by artificial intelligence.
The AI value chain refers to the various stages and components involved in the development, deployment, and utilization of artificial intelligence technologies. It encompasses a range of activities, from data collection and preprocessing to algorithm development, model training, deployment, and ongoing maintenance.
The AI value chain typically includes the following components:
1. Data Acquisition: The process of gathering relevant and diverse data from various sources, which serves as the foundation for AI algorithms and models.
2. Data Preprocessing: Data cleaning, filtering, and transformation to ensure its quality, consistency, and suitability for AI applications.
3. Algorithm Development: Designing and developing algorithms that can process and analyze the collected data to extract meaningful insights or perform specific tasks.
4. Model Training: Using the prepared data to train AI models, which involves feeding the data into algorithms to optimize their performance and enable them to make accurate predictions or classifications.
5. Model Deployment: Integrating trained models into production systems or applications, making them available for real-time or batch processing.
6. Model Evaluation and Monitoring: Continuously assessing the performance and effectiveness of deployed models, monitoring their outputs, and making necessary adjustments or improvements.
7. Integration and Application: Incorporating AI capabilities into specific applications, products, or services to enhance functionality, efficiency, or user experience.
8. Ethical Considerations: Ensuring that AI technologies adhere to ethical standards, privacy regulations, and legal frameworks to protect individuals' rights and mitigate potential biases or risks.
9. Maintenance and Updates: Ongoing support, maintenance, and updates to keep AI systems performing optimally, address issues, and incorporate advancements or changes in technology.
10. Business Impact: Leveraging AI capabilities to drive business value, improve decision-making processes, automate tasks, enhance productivity, and unlock new opportunities for innovation and growth.
The AI value chain involves collaboration among various stakeholders, including researchers, data scientists, engineers, domain experts, policymakers, and business leaders, each contributing to different stages of the process. By understanding and optimizing each step in the value chain, organizations can harness the full potential of AI technologies to achieve their goals and gain a competitive edge in the digital age.
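To make the stages above more concrete, here is a minimal, illustrative sketch of a few of them (data acquisition, preprocessing, training, and evaluation) in Python using scikit-learn. The synthetic dataset, the choice of logistic regression, and the single hold-out check are assumptions made purely for illustration; nothing here reflects any requirement of the draft Regulation.

```python
# Illustrative sketch of a few AI value-chain stages (assumptions: synthetic
# data, scikit-learn, logistic regression chosen only as an example).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1-2. Data acquisition and preprocessing (here: synthetic data, then scaling)
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# 3-4. Algorithm development and model training
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# 6. Model evaluation and monitoring (reduced here to one hold-out check)
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"hold-out accuracy: {accuracy:.3f}")

# 5/7. Deployment and integration would expose model.predict behind an
# application interface; stages 8-10 (ethics, maintenance, business impact)
# are organisational processes rather than code.
```

In practice each stage is far richer than this sketch suggests, but the flow of data through preprocessing, training, evaluation, and deployment is the backbone that the value chain describes.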
EU accelerates on A.I. and chooses a risk-based approach
The EU is adopting a risk-based approach, imposing restrictions according to the perceived level of risk associated with specific AI applications. It bans certain AI tools considered "unacceptable," such as analytics systems used by law enforcement to predict criminal behavior, and introduces limits on "high-risk" technologies, including recommendation algorithms and tools that can influence elections. The legislation also addresses generative AI, obliging companies to label AI-generated content and to disclose the use of copyrighted data in training AI models.

The EU says its approach to AI is proactive and aims to solidify its position as a global leader in tech regulation. It adds to the suite of existing regulatory tools aimed at Silicon Valley companies and sets standards that could influence policymakers worldwide. The alignment between European and U.S. regulators has grown stronger in recent years, as both sides recognize the need to address the power of tech giants. European officials have engaged in discussions with U.S. lawmakers on AI, sensing a greater urgency in Congress to regulate AI technologies.
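To illustrate the shape of this risk-based structure, the toy sketch below encodes the categories and examples named in this article as a simple lookup. The tier names, the example systems, and the classify() helper are illustrative assumptions only; actual classification will depend on the final legal text, not on a lookup table.

```python
# Toy, non-legal illustration of a risk-tier structure (assumption: the tiers
# and examples below are taken from the article text, nothing more).
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH_RISK = "allowed, subject to strict obligations"
    TRANSPARENCY = "allowed, subject to labelling and disclosure duties"
    MINIMAL = "no specific obligations"

# Examples drawn from the paragraph above.
EXAMPLES = {
    "predictive policing analytics": RiskTier.UNACCEPTABLE,
    "recommendation algorithm": RiskTier.HIGH_RISK,
    "election-influencing tool": RiskTier.HIGH_RISK,
    "generative AI chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

def classify(system: str) -> RiskTier:
    """Toy lookup: return the illustrative tier for a named system."""
    return EXAMPLES.get(system, RiskTier.MINIMAL)

if __name__ == "__main__":
    for name, tier in EXAMPLES.items():
        print(f"{name:32s} -> {tier.name}: {tier.value}")
```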
The U.S. is just getting started and wants to avoid losing A.I. leadership
In contrast to the EU's progress, the U.S. Congress is only beginning to grapple with the potential risks of AI. Senate Majority Leader Charles E. Schumer has initiated efforts to craft an AI framework, citing national security concerns and the need to prevent adversaries from taking the lead in AI regulation. However, Congress is still months away from considering specific legislation. The EU's more advanced AI legislation highlights the concern among some U.S. lawmakers that the United States is falling behind in setting regulatory standards for technology. Still, U.S. lawmakers are actively discussing AI policy and legislation, building on smaller policy actions taken recently. These actions include bills to exclude generative AI from Section 230 liability protection and proposals for a National AI Commission and a federal office to encourage competition with China. U.S. agencies such as the FTC, the Department of Commerce, and the U.S. Copyright Office have also issued statements and guidelines regarding AI, particularly generative AI.
Three key themes have emerged in these discussions:
- protecting innovation is a priority, as the US is home to major AI companies
- aligning technology, especially AI, with democratic values is emphasized, differentiating US AI companies from Chinese counterparts
- the future of Section 230, which shields tech companies from content liability, remains a significant question for AI regulation
While discussions will continue, Congress plans to form invite-only groups to delve into specific aspects of AI starting in the fall. There may be discussions on banning specific AI applications and potential revival of comprehensive tech legislation proposals. The focus is on achieving comprehensive and rapid AI regulation, drawing significant attention.
A.I. is growing fast around the world
While the United States and Europe have traditionally been at the forefront of AI research and development, other countries, including China, have made significant strides in recent years. China has emerged as a major player in the AI landscape, with a strong focus on AI technology, research, and applications. The Chinese government has recognized the strategic importance of AI and has made substantial investments to foster its growth. Chinese companies, such as Baidu, Alibaba, and Tencent, have actively pursued AI advancements and have become global leaders in certain AI domains.
It's worth noting that different countries have varying approaches and priorities when it comes to AI. The United States and Europe have emphasized ethical considerations, privacy protection, and regulation as AI technology continues to evolve. In contrast, China has prioritized rapid technological development and AI deployment in various sectors, including surveillance, healthcare, and transportation.
International collaboration and knowledge-sharing in AI are crucial for driving advancements and addressing global challenges. Many AI researchers and organizations across different countries actively collaborate and contribute to the global AI community. Cooperation, exchange of ideas, and sharing best practices can lead to breakthroughs that benefit humanity as a whole.
Ultimately, AI development and its impact are not limited to specific countries or regions. It is a global endeavor with the potential to revolutionize various industries, improve daily lives, and shape the future of society worldwide.
Read the EU strategy on Artificial Intelligence in detail
© Copyright eEuropa Belgium 2020-2023
Source: © European Union, 1995-2023