
Introduction

The EU AI Act, a pivotal piece of legislation in the realm of artificial intelligence, introduces a nuanced, risk-based framework categorising AI systems into five distinct categories: Prohibited, High-Risk, Low-Risk, Minimal-Risk, and the newly added General-Purpose AI systems. This legislative structure reflects a progressive approach, in which the level of regulatory scrutiny and compliance obligations escalates with the potential risk posed by the AI system.


Understanding the Risk-Based Approach

The categorisation of AI systems under the EU AI Act reflects a diverse range of applications and impacts. Here are examples for each type:

  1. Prohibited AI Systems: These systems are banned due to their high potential for harming fundamental rights or public interests. For instance:
    • AI for Social Scoring: AI systems used by governments to rate citizens based on their behaviour and personal traits.
    • Real-Time Biometric Identification Systems: AI systems used in public spaces for real-time biometric identification, especially those that could lead to mass surveillance.
  2. High-Risk AI Systems: These are critical systems requiring stringent compliance checks. Examples include:
    • Medical Diagnosis Tools: AI systems used for diagnosing diseases or recommending treatments in healthcare.
    • Recruitment Software: AI systems used for screening and evaluating job candidates, where biases could have significant impacts.
    • AI in Autonomous Vehicles: Systems used for decision-making in autonomous driving, where safety is paramount.
    • Credit Scoring AI: AI systems used by financial institutions to assess creditworthiness.
  3. Low and Minimal-Risk AI Systems: These systems pose less risk, and the obligations attached to them focus mainly on transparency. Examples are:
    • Chatbots and Virtual Assistants: AI systems used for customer service or online assistance.
    • AI-Enhanced Educational Tools: Software used in educational settings for personalised learning experiences.
    • AI in Entertainment Recommendations: Systems used by streaming services to suggest movies or music based on user preferences.
  4. General-Purpose AI Systems: These are versatile systems that can be adapted for various uses. When employed in high-risk scenarios, they are regulated as high-risk systems. Examples include:
    • Machine Learning Platforms: General AI platforms that can be tailored for specific tasks like image recognition, language processing, or predictive analytics.
    • AI Development Frameworks: Tools and libraries used to build custom AI applications for diverse industries.

Understanding these examples helps in grasping the scope and impact of the EU AI Act, providing a clearer picture of the regulatory landscape governing AI technologies. A short, purely illustrative triage sketch follows.
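To make the tiering concrete, the following Python sketch models it as a simple triage function mapping an intended use to a risk tier. The category names, the mapping, and the conservative default below are hypothetical simplifications for illustration only; classifying a real system requires legal analysis against the Act’s actual annexes.

  from enum import Enum

  class RiskTier(Enum):
      PROHIBITED = "prohibited"
      HIGH = "high-risk"
      LOW_MINIMAL = "low/minimal-risk (transparency obligations)"
      GENERAL_PURPOSE = "general-purpose (tier depends on deployment)"

  # Hypothetical, simplified mapping of intended uses to the Act's tiers,
  # mirroring the examples in the list above.
  USE_CASE_TIERS = {
      "social_scoring": RiskTier.PROHIBITED,
      "realtime_public_biometric_id": RiskTier.PROHIBITED,
      "medical_diagnosis": RiskTier.HIGH,
      "recruitment_screening": RiskTier.HIGH,
      "credit_scoring": RiskTier.HIGH,
      "customer_service_chatbot": RiskTier.LOW_MINIMAL,
      "content_recommendation": RiskTier.LOW_MINIMAL,
      "ml_development_platform": RiskTier.GENERAL_PURPOSE,
  }

  def triage(intended_use: str) -> RiskTier:
      # Unknown uses default to high-risk: the conservative assumption
      # pending a proper legal assessment.
      return USE_CASE_TIERS.get(intended_use, RiskTier.HIGH)

  print(triage("recruitment_screening").value)  # high-risk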


Compliance Obligations and the AI Liability Directive

The EU AI Act’s compliance obligations, coupled with the AI Liability Directive, establish a robust legal framework for the regulation and responsible deployment of AI systems in the European Union. This dual approach addresses both the preventative measures necessary for AI system providers and the remedial aspects related to AI-induced damages.


Compliance Obligations Under the EU AI Act

  1. Risk Assessment and Documentation: Providers of high-risk AI systems must conduct thorough risk assessments and maintain detailed documentation. This includes outlining the system’s purpose, its level of accuracy, and potential risks.
  2. Data Governance: Ensuring the quality of data used in AI systems is crucial. Providers must guarantee that data sets are unbiased, representative, and respect privacy standards.
  3. Transparency Requirements: All AI systems, particularly those with lower risks, must be transparent in their operations. Users should be informed that they are interacting with an AI system and understand its capabilities and limitations.
  4. Human Oversight: High-risk AI systems must include effective human oversight mechanisms to prevent or minimise risks. This could involve human-in-the-loop, human-on-the-loop, or human-in-command approaches.
  5. Record-Keeping: Providers must keep detailed records of AI system functionalities and operations to facilitate audits and compliance checks (a brief engineering sketch of this and the human-oversight obligation follows this list).
  6. Conformity Assessment: Before placing high-risk AI systems on the market, a third-party conformity assessment is required to ensure compliance with the EU AI Act’s standards.
  7. Post-Market Monitoring: Continuous monitoring of AI systems after their deployment is mandatory to identify and rectify any emerging risks or non-compliance issues.
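Several of these obligations translate into concrete engineering controls. The Python sketch below illustrates one possible design for the human-oversight and record-keeping duties, assuming a hypothetical high-risk decision system; all names, the confidence threshold, and the JSONL log format are illustrative assumptions, not requirements taken from the Act itself.

  import json
  import time
  from dataclasses import dataclass, asdict

  @dataclass
  class DecisionRecord:
      """One illustrative audit entry supporting record-keeping and audits."""
      timestamp: float
      model_version: str
      inputs: dict
      output: str
      confidence: float
      human_reviewed: bool

  def decide(inputs: dict, model_version: str = "v1.0") -> DecisionRecord:
      # Placeholder for the real model call.
      output, confidence = "approve", 0.72

      # Human-in-the-loop gate: low-confidence decisions are routed to a
      # human reviewer before taking effect (one possible oversight design).
      human_reviewed = confidence < 0.80
      if human_reviewed:
          override = input(f"Model proposes '{output}' at {confidence:.0%}; "
                           "press Enter to accept or type an override: ")
          output = override or output

      record = DecisionRecord(time.time(), model_version, inputs,
                              output, confidence, human_reviewed)

      # Append-only JSONL log, available later for audits, conformity
      # assessments, and post-market monitoring.
      with open("decision_log.jsonl", "a") as log:
          log.write(json.dumps(asdict(record)) + "\n")
      return record

  decide({"applicant_id": 42})

Routing low-confidence outputs to a reviewer corresponds to a human-in-the-loop approach; a human-on-the-loop design would instead let decisions proceed while a supervisor monitors them and can intervene.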


The AI Liability Directive

The AI Liability Directive complements the EU AI Act by addressing the legal consequences of harm or damage caused by AI systems. Key aspects include:

  1. Rebuttable Presumption of Causality: Under certain conditions, if a claimant can demonstrate a fault and a link to damage caused by an AI system, courts may presume causality, shifting the burden of proof to the AI system provider.
  2. Disclosure of Information: Courts can order AI system providers to disclose information about high-risk systems. This is crucial for claimants in proving causality and determining liability.
  3. Scope of Liability: The Directive defines the scope of potential liabilities, including damage caused by AI systems’ outputs or failures.
  4. Interaction with National Laws: While the Directive sets a framework for AI-related liabilities, it respects the diversity of national legal systems, particularly regarding definitions of fault and causality.
  5. Documentation and Transparency for Liability: Compliance with the EU AI Act’s documentation and transparency requirements becomes instrumental in liability cases. Noncompliance could lead to a presumption of fault.
  6. Facilitation of Damage Claims: By setting clear rules on liability and information disclosure, the Directive aims to facilitate the process for victims to claim damages.


Timelines and Applicability

The EU AI Act takes effect in stages: most provisions become effective 36 months after publication in the EU’s Official Journal, while certain sections, such as those on AI governance systems and penalties, will be enforced 12 months post-enactment. The Act’s breadth encompasses almost every entity involved in placing or using AI systems within the EU.


Fines for Noncompliance

Echoing the GDPR’s approach, the EU AI Act imposes substantial fines for noncompliance, with maximums well in excess of those under the current GDPR (a worked example follows the list):

  • Up to €30 million or 6% of annual turnover for prohibited AI practices.
  • Up to €20 million or 4% of annual turnover for noncompliance in high-risk and general-purpose AI systems, and transparency obligations in low/minimal-risk systems.
  • Up to €10 million or 2% of annual turnover for providing misleading information.
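As under the GDPR, each cap is expressed as the higher of a fixed amount or a percentage of worldwide annual turnover, so the percentage governs for large companies. A small worked example in Python, using hypothetical turnover figures:

  def max_fine(fixed_cap_eur: float, turnover_pct: float, turnover_eur: float) -> float:
      # The applicable maximum is whichever figure is higher.
      return max(fixed_cap_eur, turnover_pct * turnover_eur)

  # Hypothetical large company: €2bn worldwide annual turnover.
  # Prohibited-practice tier: up to €30 million or 6% of turnover.
  print(max_fine(30_000_000, 0.06, 2_000_000_000))  # 120,000,000.0 -> 6% governs

  # Hypothetical smaller company: €100m turnover; the fixed cap governs.
  print(max_fine(30_000_000, 0.06, 100_000_000))    # 30,000,000 -> fixed cap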


Increased Liability Risks

Noncompliance with the EU AI Act not only triggers financial penalties but also heightens liability risks. The AI Liability Directive leverages the EU AI Act’s definitions and transparency requirements, making noncompliance an influential factor in presumptions under the Liability Directive.


Conclusion

The EU AI Act represents a significant step in regulating AI technologies, balancing innovation with ethical considerations and public safety. For businesses, it is imperative to understand and align with these evolving regulations, not only to avoid hefty penalties but also to foster trust and reliability in AI applications. The Act’s intricate framework demands a proactive approach to compliance, underscoring the importance of staying abreast of legislative developments in this dynamic field.