
Introduction

The emerging landscape of Artificial Intelligence (AI) regulation in the European Union is taking a definitive shape with the introduction of the new EU AI Draft Act. This groundbreaking legislation aims to manage the risks and harness the benefits of AI technologies, emphasising the safety, transparency, and accountability of AI systems, particularly those classified as high-risk. As the Owner and CEO of Formiti Data International Ltd, I understand the significance of this development and its implications for data privacy and compliance professionals.

 

Obligations Linked to High-Risk AI Systems

The EU AI Draft Act categorises certain AI systems as high-risk due to their potential impact on fundamental rights and safety. These systems are subject to stringent regulatory requirements, with obligations falling on stakeholders across the supply chain:

 

a. Providers’ Obligations

Providers of high-risk AI systems bear the heaviest obligations. They must establish a comprehensive risk management system and ensure that the AI system’s design and development minimise risk. This involves using high-quality data sets, maintaining detailed technical documentation, and implementing a robust quality management system. Transparency is paramount: providers must supply clear documentation and instructions so that users can understand and control the AI system’s outputs. Human oversight is mandated to mitigate risks to health, safety, and fundamental rights. Providers must also ensure the system’s robustness, accuracy, and cybersecurity. For providers outside the EU, appointing an EU-based authorised representative is mandatory.

b. Users’ Responsibilities

Users of high-risk AI systems must adhere to the provider’s instructions, actively monitor the system for anomalies, and maintain input data records. This proactive engagement ensures the systems are used responsibly and within regulatory boundaries.
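
To make the record-keeping duty concrete, here is a minimal sketch of how a user (deployer) might log the inputs fed to a high-risk AI system. The Draft Act does not prescribe any record format or field names; everything here (model_id, input_hash, anomaly_flag, the JSONL file) is an illustrative assumption, not a requirement taken from the legislation.

```python
# Illustrative sketch only: the Draft Act does not prescribe a record format.
# Field names (model_id, input_hash, anomaly_flag) are assumptions for
# demonstration, not requirements taken from the legislation.
import hashlib
import json
from datetime import datetime, timezone

def log_input_record(log_path: str, model_id: str, input_payload: dict,
                     anomaly_flag: bool = False, note: str = "") -> dict:
    """Append a timestamped record of the data fed to a high-risk AI system."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        # Hash rather than store raw inputs, to limit personal-data retention.
        "input_hash": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "anomaly_flag": anomaly_flag,
        "note": note,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    log_input_record(
        "ai_input_log.jsonl",
        model_id="credit-scoring-v2",
        input_payload={"applicant_age": 41, "income": 52000},
        anomaly_flag=False,
    )
```

Hashing the payload rather than storing it verbatim is one way to keep an audit trail while limiting the retention of personal data; your own data protection obligations will determine what is appropriate.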

c. Importers’ and Distributors’ Role

Importers must verify that the high-risk AI system has undergone the necessary conformity assessment and possesses the appropriate documentation before placing it on the market. Distributors are similarly tasked with ensuring that the systems bear the required CE marking and are accompanied by the necessary documentation and instructions.

d. Expanded Definition of ‘Provider’

Users, importers, distributors, or third parties can be classified as ‘Providers’ under the Draft Act in specific scenarios. This occurs if they place the AI system on the market under their brand, modify its intended purpose, or make substantial modifications. In such cases, the original provider’s obligations transfer to them.

 

Conformity and Registration Process

The Draft Act mandates a conformity assessment for high-risk AI systems, with some requiring third-party assessment. Providers must issue an EU declaration of conformity and update it as necessary. The AI system must be registered in the EU database before it is placed on the market or put into service.

 

Post-market Monitoring and Reporting

A critical aspect of the Draft Act is continuous post-market monitoring to collect and analyse performance data. Providers must report any serious incidents, malfunctions, or breaches of obligations to national competent authorities.
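
As a rough illustration of what structured incident reporting might look like internally, the sketch below models a serious-incident record. The Draft Act does not define a reporting schema or API; the field names and the 15-day notification window used here are assumptions for demonstration, and the actual deadlines and content requirements should be taken from the final text and national authority guidance.

```python
# Illustrative sketch only: the Draft Act does not define a reporting schema.
# Field names and the 15-day window are assumptions for demonstration.
from dataclasses import dataclass, field, asdict
from datetime import date, timedelta
import json

@dataclass
class SeriousIncidentReport:
    provider: str
    ai_system: str
    incident_date: date
    description: str
    affected_rights: list[str] = field(default_factory=list)
    corrective_actions: list[str] = field(default_factory=list)

    def reporting_deadline(self, days: int = 15) -> date:
        """Assumed internal deadline for notifying the competent authority."""
        return self.incident_date + timedelta(days=days)

    def to_json(self) -> str:
        payload = asdict(self)
        payload["incident_date"] = self.incident_date.isoformat()
        return json.dumps(payload, indent=2)

if __name__ == "__main__":
    report = SeriousIncidentReport(
        provider="Example Provider Ltd",
        ai_system="recruitment-screening-v1",
        incident_date=date(2024, 3, 1),
        description="Systematic rejection of applicants from one age group.",
        affected_rights=["non-discrimination"],
        corrective_actions=["model rollback", "bias audit"],
    )
    print("Notify authority by:", report.reporting_deadline())
    print(report.to_json())
```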

 

Penalties for Non-compliance

The Draft Act stipulates substantial fines for non-compliance, including penalties for placing a prohibited AI system on the market, failing to cooperate with authorities, or providing misleading information. Under the Commission’s original draft, these fines can reach EUR 30 million or 6% of worldwide annual turnover, whichever is higher, and later negotiations have raised the upper ceilings further, underscoring the seriousness with which the EU views AI system regulation.
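
A quick worked example of that ceiling arithmetic: the applicable maximum is the higher of the fixed cap and the turnover percentage. The EUR 30 million / 6% constants below reflect the original draft figures quoted above; treat them as placeholders rather than final legal values.

```python
# A minimal worked example of the fine ceilings described above. The
# "whichever is higher" rule and the EUR 30 million / 6% figures reflect the
# original draft; negotiated texts use different amounts, so treat these
# constants as placeholders rather than final legal values.
def max_fine(worldwide_annual_turnover_eur: float,
             fixed_cap_eur: float = 30_000_000,
             turnover_pct: float = 0.06) -> float:
    """Return the upper bound of the fine: the higher of the fixed cap
    or the percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * worldwide_annual_turnover_eur)

# A company with EUR 2 billion turnover: 6% (EUR 120 million) exceeds the
# EUR 30 million cap, so the percentage figure sets the ceiling.
print(f"{max_fine(2_000_000_000):,.0f}")  # 120,000,000
```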

 

Conclusion

The EU AI Draft Act represents a significant step towards regulating AI technologies, focusing on high-risk systems. For data privacy professionals, understanding these regulations is crucial. The Act reinforces the need for robust data governance, risk management, and compliance strategies. As we help clients navigate these complexities, it is vital to stay abreast of developments, ensuring that AI systems are not only innovative but also safe, transparent, and accountable. The Act is a clear signal that the era of unregulated AI is ending, ushering in a new age of responsible and ethical AI use.

The Formiti AI Assessment is an excellent tool to help your AI project achieve and maintain compliance with the EU AI Act and guard your organisation against fines ranging from EUR 35 million or 7% of global turnover down to EUR 7.5 million or 1.5% of turnover, depending on the infringement and the size of the company.

Keep an eye out for our upcoming series of articles on this draft legislation, where we’ll dissect it into digestible sections, aiding your readiness for AI compliance.