+44 (0) 121 582 0192 [email protected]


A key priority of the EU AI Act is transparency due to its ability to enable citizens to understand AI systems’ design and use, as well as enable accountability for decisions made by companies and public authorities. Transparency is also essential for creating public trust in AI systems and ensuring their responsible deployment. To meet the requirements of transparency, the EU AI Act mandates the disclosure of certain information to individuals and the public. However, implementing these obligations in practice can prove difficult.


Transparency as a Principle for All AI Systems

The latest version of the EU AI Act, adopted on 14 June 2023, enshrines transparency as a fundamental principle under Article 4a. This encompasses the need for AI systems to be developed and used in a manner that allows for traceability, explainability, and clear communication with users. This principle is essential not only for user understanding but also for ensuring that a system’s decisions, and the logic underlying them, remain open to scrutiny.


Transparency in High-Risk AI Systems

For high-risk AI systems, the EU AI Act, under Article 13, stipulates an enhanced level of transparency. This entails that such systems must be sufficiently transparent in their operation, enabling both providers and users to reasonably comprehend the system’s functioning. Additionally, these systems should be complemented by human-machine interface tools to facilitate effective oversight, as mandated by Article 14.


Targeted Transparency Obligations

Article 52 of the EU AI Act introduces specific transparency obligations for certain AI systems, namely systems that interact with individuals, emotion recognition and biometric categorisation systems, and systems that generate deep fakes.

  • Where an AI system is designed to interact with individuals, those individuals must be informed that they are engaging with an AI system.
  • Where an AI system involves emotion recognition or biometric categorisation, individuals must not only be informed about the system but must also give their explicit consent before their biometric data is processed.
  • For AI systems capable of creating deep fakes, there is a strict requirement to clearly indicate that the content they produce or modify is artificially generated or manipulated.

In each case, the disclosure must be made no later than the individual’s first interaction with or exposure to the AI system, ensuring clarity and consent from the outset.


Foundation Model Providers and Transparency

The Parliament’s position introduces the concept of a “foundation model” and attaches specific transparency requirements to providers of foundation models used for content generation, commonly referred to as “generative AI.” Under Article 28b, providers of generative foundation models, or of AI systems developed by specialising such models, are obliged to:

    • Comply with the transparency obligations set out in Article 52 of the EU AI Act,
    • Implement safeguards against the generation of content that breaches Union law, and
    • Publicly disclose a summary of how copyrighted training data is used in these systems.


Data Governance and Systematic Transparency

The EU AI Act’s emphasis on data governance is critical. Article 10 mandates that high-risk AI systems be transparent about the original purpose of data collection: whenever data initially gathered for a different purpose is later used in the training, validation, or testing of an AI system, the original intent behind its collection must be clearly disclosed. This ensures that data repurposed for AI training is used transparently and ethically.


The Database as a Systematic Transparency Measure

The Act proposes a public database for high-risk AI systems, maintained by the Commission. This database serves as a centralised resource for scrutinising high-risk AI systems, reducing the burden on companies while enhancing transparency.



The EU AI Act’s provisions on transparency are intricate and multifaceted, requiring businesses and public authorities to diligently assess and apply them. While these measures are essential for building trust and ensuring responsible AI deployment, their complexity necessitates a thorough understanding and careful implementation. Our Privacy Audit Service, delivered with a board-level report, includes a comprehensive AI section.