Brussels, October 2, 2024
EU Seeks Clarifications from Snapchat, TikTok, and YouTube: Possible Sanctions Loom
EU intensifies scrutiny of AI algorithms on major social media platforms to address risks related to elections, mental health, and harmful content.
The EU Commission intensified its scrutiny of social media platforms Snapchat, TikTok, and YouTube by issuing Requests for Information (RFIs) under the Digital Services Act (DSA). The EU seeks details on how these platforms' AI-driven algorithms recommend content and address risks related to elections, mental health, and the protection of minors. The platforms must provide this information by November 15, 2024. Failure to comply could lead to formal investigations and fines of up to 6% of global turnover. This action highlights the EU’s ongoing efforts to regulate AI systems on large platforms to mitigate the spread of harmful content.
On October 2, 2024, the European Union intensified its scrutiny of social media platforms Snapchat, TikTok, and YouTube by issuing Requests for Information (RFIs) under the EU Digital Services Act (DSA). The European Commission's RFIs aim to gather more details on how these platforms' AI-driven algorithms recommend content, with a particular focus on systemic risks, such as threats to the electoral process, mental health, and the protection of minors.
The Commission emphasized that these platforms' algorithms, which are designed to maximize user engagement, can potentially amplify harmful content. This includes leading users down content "rabbit holes" that foster addictive behavior or spread disinformation, and contributing to the proliferation of illegal material, including hate speech and illegal drug promotion.
Snapchat and YouTube have been asked to provide detailed information on the specific parameters their algorithms use to recommend content, as well as the steps taken to mitigate risks associated with their recommender systems. TikTok, in particular, is being scrutinized for how it prevents malicious actors from manipulating its platform and for its efforts to reduce risks tied to elections, media pluralism, and civic discourse.
The Commission's RFIs underscore concerns that AI algorithms can amplify these risks, and all three platforms are required to submit the requested information by November 15, 2024.
Under the DSA, these platforms, classified as Very Large Online Platforms (VLOPs), are obligated to assess and manage risks stemming from their use of AI-based systems. Failure to comply could result in formal proceedings and fines of up to 6% of global annual turnover. Furthermore, Article 74 of the DSA allows the Commission to impose additional penalties for incorrect or incomplete responses to the requests.
The EU has already initiated non-compliance investigations against other major platforms, including Facebook, Instagram, AliExpress, and TikTok (read our previous article). This latest round of RFIs signals the EU's continued focus on enforcing DSA provisions and ensuring that social media companies prioritize user safety, especially regarding minors and content related to elections.
These inquiries could shape the future regulation of AI-driven algorithms across the digital ecosystem, as the Commission aims to address the potential risks posed by these systems during sensitive periods like elections and in safeguarding mental health.
The Commission's ongoing focus on algorithmic transparency and safety in the digital space marks a significant step in holding tech companies accountable for the broader societal impact of their platforms.
© Copyright eEuropa Belgium 2020-2024
Source: © European Union, 1995-2024