
Introduction

 

The European Union (EU) is on the cusp of introducing groundbreaking regulations that will redefine the landscape of artificial intelligence (AI) technology within its borders. Currently at the proposal stage, the EU Artificial Intelligence Act (EU AI Act) marks a significant milestone in the global effort to regulate AI. This ambitious legislation introduces a three-tiered approach to categorizing AI applications based on their level of risk. In this article, we delve into the implications of this regulatory framework and its impact on manufacturers of connected products.

 

Three-Tiered Approach to AI Regulation

 

The EU AI Act classifies AI applications into three distinct risk categories, each subject to a different level of oversight and control.

 

1: Unacceptable Risk: Applications with the Potential for Abuse

 

The first category, “Unacceptable Risk,” is reserved for AI applications with the potential for severe abuse and harm. Notably, this includes government-controlled social scoring systems. Under the EU AI Act, such applications would be outright prohibited. This stringent stance underscores the EU’s commitment to safeguarding individual rights and preventing the misuse of AI for controlling or discriminating against citizens.

 

By prohibiting the use of AI for social scoring, the EU seeks to protect individuals from undue intrusions into their privacy and infringements of their personal freedoms. This sends a clear message that certain applications, regardless of their technological sophistication, will not be tolerated if they pose a threat to society.

 

2: High Risk: Rigorous Regulation and Oversight

 

The second category, “High Risk,” encompasses AI applications that, while not inherently unacceptable, present substantial potential risks. Examples include CV-scanning tools used to assess job applicants. The EU AI Act proposes rigorous regulation and oversight mechanisms for such applications to mitigate these risks.

 

These regulations may include mandatory third-party assessments, transparency requirements, and safeguards against bias and discrimination in decision-making processes. Manufacturers and developers of high-risk AI systems will be obligated to comply with these comprehensive measures to ensure the responsible deployment of their technologies.

 

3: All Others: General AI Regulations

 

AI applications that do not fall into the “Unacceptable Risk” or “High Risk” categories will be subject to general AI regulations. These regulations aim to establish a baseline level of accountability and safety for all AI systems, regardless of their specific use cases. This ensures that even less risky AI applications are developed and deployed in a manner that respects fundamental ethical principles and safeguards individuals’ rights.
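
To make the tiering concrete, the short Python sketch below models the three categories as a simple lookup. It is purely illustrative: the tier names mirror the Act’s proposal, but the example use cases and the classify_use_case helper are hypothetical and do not represent an official or legally reliable mapping.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk categories proposed by the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "rigorous regulation and oversight"
    GENERAL = "baseline accountability and safety rules"


# Hypothetical example mapping, for illustration only -- not an official list.
EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "cv screening for recruitment": RiskTier.HIGH,
    "email spam filtering": RiskTier.GENERAL,
}


def classify_use_case(use_case: str) -> RiskTier:
    """Look up the tier for a known example, defaulting to GENERAL.

    A real assessment would follow the Act's annexes and legal advice;
    this helper only mirrors the article's three-tier structure.
    """
    return EXAMPLE_USE_CASES.get(use_case.lower(), RiskTier.GENERAL)


if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        tier = classify_use_case(case)
        print(f"{case}: {tier.name} -> {tier.value}")
```

In a real compliance workflow, classification would be driven by the Act’s annexes and legal review rather than a hard-coded lookup; the point here is simply that every AI use case lands in exactly one of the three tiers.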

 

Impact on Manufacturers of Connected Products

 

Manufacturers of connected products, such as smart TVs, wearables, and AI voice assistants, will face significant new responsibilities under the EU AI Act. The Act’s focus on data protection and AI safety standards will directly affect how these companies collect and process data.

 

To comply with these stringent requirements, manufacturers will need to:

 

a) Prioritize Data Protection: Manufacturers must implement robust data protection measures to safeguard user data. This includes ensuring data is collected, processed, and stored securely, with transparency and user consent at the forefront of their practices.

 

b) Address AI Safety: Manufacturers must consider the safety of the AI algorithms used in their products. Rigorous testing, monitoring, and evaluation will be essential to minimize risks and biases in AI-driven systems; a simple example of such a check is sketched after this list.

 

c) Stay Informed and Adapt: Manufacturers should stay abreast of evolving AI regulations and be prepared to adapt their products and practices as new guidelines emerge.
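
As a minimal illustration of point (b), the sketch below computes a simple fairness metric (the demographic parity gap) for a hypothetical CV-screening model. The 0.2 threshold, the group labels, and the function names are assumptions made for this example; the EU AI Act does not prescribe specific metrics or thresholds.

```python
from collections import defaultdict
from typing import Iterable, Tuple


def selection_rates(results: Iterable[Tuple[str, bool]]) -> dict:
    """Share of positive outcomes per demographic group.

    `results` is an iterable of (group_label, was_selected) pairs, e.g.
    decisions from a CV-screening model on a labelled evaluation set.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in results:
        totals[group] += 1
        positives[group] += int(selected)
    return {group: positives[group] / totals[group] for group in totals}


def demographic_parity_gap(results: Iterable[Tuple[str, bool]]) -> float:
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(results)
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    # Toy evaluation data: (group, model decision). Purely illustrative.
    outcomes = [
        ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", False), ("group_b", False), ("group_b", True),
    ]
    gap = demographic_parity_gap(outcomes)
    # The 0.2 threshold is an arbitrary internal target chosen for this
    # example, not a figure taken from the EU AI Act.
    print(f"demographic parity gap = {gap:.2f}:", "FLAG" if gap > 0.2 else "OK")
```

In practice, demonstrating the safety of a high-risk system would require far broader evidence (technical documentation, human oversight, accuracy and robustness testing), but even a lightweight check like this can surface obvious skews early in development.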

 

Conclusion

 

The EU AI Act’s three-tiered approach to regulation reflects the EU’s commitment to promoting responsible AI innovation while safeguarding individual rights and societal well-being. It sends a clear message that applications posing unacceptable risks will not be tolerated and that high-risk applications will be subject to stringent oversight. Manufacturers of connected products must proactively adapt to these regulations, prioritizing data protection and AI safety to ensure compliance and maintain consumer trust in the evolving landscape of AI technology.

 

What is the latest news?

 

On June 14, 2023, the European Parliament adopted its negotiating position on the proposed Artificial Intelligence Act, with 499 votes in favor, 28 against, and 93 abstentions. Parliament also amended the list of banned intrusive and discriminatory uses of AI systems, which now includes the following:

  1. Real-time remote biometric identification systems deployed in publicly accessible areas.
  2. “Post” (retrospective) remote biometric identification systems, with the sole exception of law-enforcement use in the investigation of serious crimes and only after judicial authorization.
  3. Biometric categorization systems reliant on sensitive attributes such as gender, race, ethnicity, citizenship status, religion, and political orientation.
  4. Predictive policing systems that rely on profiling, location data, or past criminal behavior for law enforcement purposes.
  5. Emotion recognition systems utilized in law enforcement, border management, workplaces, and educational institutions.
  6. The indiscriminate harvesting of biometric data from social media or closed-circuit television (CCTV) footage for the creation of facial recognition databases, a practice that contravenes human rights and the right to privacy.

Do we have the final text of the Artificial Intelligence Act?

No. The text endorsed by Parliament is a negotiating position; the final wording still has to be agreed with the EU Council and formally adopted.

 

What Comes Next?

The next step is negotiation between the Parliament and the EU Council, facilitated by the European Commission, in what is known as the trilogue process. The goal of a trilogue is to reach a provisional agreement on a legislative proposal that satisfies both the Parliament and the Council, who serve as co-legislators, with the Commission acting as mediator between them. Once a provisional agreement is reached, it must then be formally approved by both the Parliament and the Council.