
Introduction

Regulation typically follows innovation, and the AI sector is no different. The EU’s Artificial Intelligence Act (AI Act) is a world first, marking a significant step in the global regulation of artificial intelligence. After extensive debate, the AI Act was published in the Official Journal on July 12, 2024, entered into force on August 1, 2024, and will see most of its provisions phased in by August 2027. The Act will significantly impact many businesses, introducing new compliance obligations. While the general framework is established, some key definitions and concepts remain unclear, and regulatory guidance will be crucial for businesses to fully grasp their responsibilities and liabilities.

 

Extraterritorial Reach

The AI Act has a notable extraterritorial reach. It applies to (1) providers, including those based outside the EU, who place AI systems or general-purpose AI (GPAI) models on the EU market or “put them into service” within the EU, and (2) deployers that have their place of establishment, or are located, within the EU. Importantly, the Act also applies to both providers and deployers to the extent that the “output” of the AI system is “used in the EU.” This means that non-EU companies must account for the AI Act in their operations if their AI systems impact the EU market. The central focus of the AI Act is on “high-risk” AI systems.

 

Risk-Based Application

The EU has adopted a four-tier risk-based classification system, with corresponding obligations and restrictions depending on the level of risk as assessed by the EU. The AI Act includes a detailed list of “high-risk” AI systems in its annex, and this classification can be updated over time to keep pace with technological and market changes.

 

Obligations on Providers and Deployers of High-Risk AI Systems

The European Artificial Intelligence Board (EAIB), established under the AI Act, will support the European Commission (EC) in overseeing the Act’s enforcement. Providers and deployers of high-risk AI systems must meet a series of obligations, including:

  • Training Obligations: Ensuring AI literacy training for staff and appointing AI overseers within the organization.
  • Operational Duties: Implementing technical and organizational measures to ensure the AI system operates safely, and managing the quality of input data used for training.
  • Control Obligations: Avoiding prohibited AI, ensuring human oversight, monitoring training data, and complying with the General Data Protection Regulation (GDPR).
  • Documentation Obligations: Conducting impact assessments where needed.

 

Overview of the AI Act

Risk Levels and Corresponding Obligations

  • Minimal Risk AI: No restrictions are placed on minimal-risk AI, such as video games or spam filters.
  • Limited Risk AI: These AI systems, like chatbots, have specific transparency obligations. Providers must inform users that they are interacting with AI and ensure AI-generated content is identifiable.
  • High-Risk AI: High-risk AI systems, found in areas such as biometric identification, critical infrastructure management, and law enforcement, have strict compliance obligations. Providers must establish risk management systems, ensure data quality, maintain technical documentation, and implement transparency measures.
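The tiered structure above can be pictured as a simple lookup from use case to risk tier and headline obligation. The sketch below is purely illustrative: the use-case names, the conservative default tier, and the `classify` helper are assumptions made for demonstration, not a legal classification tool (real classification turns on the Act’s annexes and legal analysis).

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, with the headline consequence of each."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no restrictions"

# Hypothetical mapping of example use cases to tiers, following the
# categories described in the text above. Simplified for illustration.
EXAMPLE_USE_CASES = {
    "untargeted facial image scraping": RiskTier.UNACCEPTABLE,
    "biometric identification": RiskTier.HIGH,
    "critical infrastructure management": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known example use case.

    Unknown use cases default to HIGH here as a conservative placeholder;
    in practice, classification requires analysis of the Act's annexes.
    """
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.HIGH)

print(classify("spam filter").value)    # no restrictions
```

A triage like this can help a compliance team sort an AI inventory into buckets before the detailed, per-system legal assessment that the Act actually requires.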

 

Prohibition of AI with Unacceptable Risk

The AI Act prohibits AI applications that pose potential threats to fundamental rights and democracy. These include AI systems that manipulate behavior using deceptive techniques, exploit vulnerabilities, or involve untargeted scraping of facial images for recognition.

 

Key Takeaways for Business Today

While various provisions will enter into force in stages, businesses must prepare now to comply with their obligations as AI providers or deployers. Providers of products subject to existing EU regulations may face additional requirements through the AI Act. The EC has the power to adopt delegated acts, which can change key provisions and definitions. Guidance from the EC on several key issues is still pending, placing businesses in a challenging position as they await further details.

 

Conclusion

The EU Artificial Intelligence Act represents a significant regulatory milestone, setting a global precedent for AI governance. Its impact will be far-reaching, affecting providers and deployers both within and outside the EU. Businesses must stay informed and proactive in adapting to these new regulations to ensure compliance and mitigate risks. As the landscape of AI continues to evolve, the AI Act will play a crucial role in shaping the future of artificial intelligence in Europe and beyond.