Brussels,
THE NEW FRONTIERS
AI Liability
Directive on Defective Products
While the Proposal for a Directive COM(2022) 496 final specific to AI liability was still awaiting approval, Directive 85/374/EEC on liability for defective products was replaced by the new Directive (EU) 2024/2853, which also includes provisions concerning Artificial Intelligence. The new Directive was adopted on 23 October 2024.
The updated EU Product Liability Directive establishes a legal framework that holds the economic operators involved in the production and distribution of products—such as manufacturers, importers, and distributors—liable for damages caused by defective products. For owners of AI technologies, particularly those who develop, manufacture, or supply AI-based products and services, this directive has several significant implications. The most direct ones are set out below.
Concrete Implications for Owners of AI Technologies under the EU Product Liability Directive
- Expanded Definition of "Product":
- Inclusion of Software and AI Components: The directive explicitly includes software, digital manufacturing files, and intangible components within the definition of a "product." This means that AI software, algorithms, and related digital services are subject to the same liability rules as physical products.
- Related Services: Digital services that are integrated into or interconnected with a product in such a way that their absence would prevent the product from performing its functions are considered components. This includes AI services that are essential for a product's operation.
- Liability for Defective AI Products:
- Defectiveness Criteria: A product is considered defective if it does not provide the safety that the public is entitled to expect, taking into account all circumstances, including its presentation, use, and the time it was put into circulation.
- AI-Specific Considerations: For AI technologies, defectiveness may arise from issues such as biased algorithms, inadequate training data, lack of transparency, or insufficient safeguards against misuse.
- Manufacturer's Control and Responsibility:
- Control Over Software Updates and Upgrades: Owners of AI technologies who have control over software updates or upgrades are responsible for ensuring that these updates do not introduce defects. Liability extends to damages caused by software updates, upgrades, or the lack thereof when necessary for maintaining safety.
- Substantial Modifications: If an AI product is substantially modified after being placed on the market, and the owner is responsible for that modification, they may be considered the manufacturer and held liable for any resulting defects.
- Obligations of Economic Operators:
- Manufacturers: AI technology owners who manufacture products are directly liable for damages caused by defects in their products, including defects arising from AI components or related services.
- Importers and Distributors: If the manufacturer is not established within the EU, importers and distributors can be held liable. They must ensure that the AI products they place on the market comply with safety requirements.
- Online Platforms and Fulfillment Service Providers: Under certain conditions, online platforms facilitating the sale of AI products and fulfillment service providers may also be held liable if they fail to identify the manufacturer or importer upon request.
- Burden of Proof and Disclosure Obligations:
- Disclosure of Evidence: Courts can require AI technology owners to disclose relevant evidence necessary to prove the defectiveness of a product. This may include technical documentation, design specifications, or data logs.
- Protection of Trade Secrets: While there are provisions to protect confidential information and trade secrets, owners must balance this with the obligation to provide sufficient evidence in liability cases.
- Presumption of Defectiveness: In certain circumstances, such as when a manufacturer fails to comply with disclosure obligations or when the product does not comply with mandatory safety standards, courts may presume the product to be defective.
- Limitations on Exemptions from Liability:
- Development Risk Defense: Owners cannot claim that the state of scientific and technical knowledge at the time was insufficient to discover the defect if the defectiveness arises from software, including AI algorithms, updates, or upgrades.
- No Contractual Limitation of Liability: The directive prohibits economic operators from limiting or excluding their liability through contractual agreements with consumers or other parties.
- Time Limits for Claims:
- Limitation Period: Injured parties have a three-year period to initiate proceedings from the date they became aware of the damage, the defect, and the identity of the liable party.
- Expiry Period: Liability expires 10 years after the product was placed on the market or substantially modified. In cases where personal injury has a latency period, this can extend up to 25 years.
- Joint and Several Liability:
- Multiple Parties Held Liable: If multiple economic operators are responsible for the same damage (e.g., AI software developer and hardware manufacturer), they can be held jointly and severally liable. This means the injured party can seek full compensation from any one of them.
- Implications for Small Enterprises:
- Recourse Limitations: In some cases, larger manufacturers integrating AI components from small enterprises may have limited rights of recourse against those smaller suppliers, affecting contractual and supply chain relationships.
- Requirement to Maintain Safety Over Time:
- Ongoing Obligations: Owners of AI technologies must ensure the continuous safety of their products throughout their lifecycle, including providing necessary updates and addressing vulnerabilities that may emerge after the product is on the market.
Practical Steps for Owners of AI Technologies:
- Ensure Compliance with Safety Standards:
- Develop AI products in line with existing EU safety regulations and industry best practices.
- Implement robust testing, validation, and verification processes to identify and mitigate defects.
- Maintain Detailed Documentation:
- Keep comprehensive records of design, development, testing, and quality assurance activities.
- Document all software updates, upgrades, and modifications made to the AI product.
- Implement Effective Monitoring and Update Mechanisms:
- Establish processes for monitoring the performance and safety of AI products post-deployment.
- Provide timely updates or upgrades necessary to maintain safety and compliance.
- Prepare for Disclosure Obligations:
- Be ready to disclose relevant evidence in the event of liability claims, while safeguarding confidential information within legal allowances.
- Develop protocols for responding to legal requests for information.
- Review Supply Chain and Contracts:
- Assess agreements with suppliers, distributors, and other partners to ensure clarity regarding liability and compliance responsibilities.
- Consider the impact of the directive on indemnification clauses and risk allocation.
- Consider Insurance Coverage:
- Evaluate the need for product liability insurance that covers damages related to AI technologies.
- Engage with insurers to understand coverage options and requirements.
- Stay Informed on Legal Developments:
- Monitor updates to EU regulations related to AI and product liability.
- Engage with legal counsel specialized in technology and product liability law.
Conclusion
The updated EU Product Liability Directive significantly impacts owners of AI technologies by extending liability to include software and digital components integral to products. Owners must be proactive in ensuring their AI products are safe, compliant with regulations, and free from defects that could cause harm. This involves rigorous development practices, ongoing monitoring, and readiness to address legal obligations related to liability claims. By understanding and adhering to these requirements, AI technology owners can mitigate risks, protect consumers, and contribute to a trustworthy AI ecosystem within the European Union.
Proposal for a Directive on AI Liability
This proposal aimed to adapt the existing EU rules on non-contractual civil liability (Directive (EU) 2020/1828) to Artificial Intelligence.
Highlighting liability as a significant barrier for European companies using AI, in 2022 the Commission proposed a Directive to address the inadequacies of current national liability rules, which often require victims to prove fault. Given AI's complexity, autonomy, and opacity, proving such fault can be costly and lengthy, potentially deterring victims from pursuing compensation. The explanatory memorandum reflects concerns that ad hoc judicial adaptations to AI cases create legal uncertainty and make it difficult for businesses, especially SMEs, to predict liability exposure and insure against it. This uncertainty is exacerbated for companies operating across different EU jurisdictions. The proposal aims to harmonize liability rules at the EU level, reducing business costs and legal uncertainties, and preventing the fragmentation that would arise from individual Member States creating their own AI-specific liability regulations. By doing so, it intends to facilitate the broader adoption of trustworthy AI within the internal market, ensuring that victims of AI-related damage receive protection comparable to that afforded to those harmed by conventional products, thus fostering a more consistent and supportive environment for AI development and use across the EU.