

In an era where Artificial Intelligence (AI) is becoming increasingly pervasive, understanding its legal framework, particularly in relation to the General Data Protection Regulation (GDPR) and the Data Protection Act 2018 (DPA 2018), is essential. This article examines how these laws intersect with AI technologies and outlines the responsibilities and obligations organisations must meet to comply.


The Interplay Between AI and Data Protection Law

AI’s role in processing vast amounts of personal data places it firmly within the scope of the GDPR and the DPA 2018. While AI applications that do not involve personal data, such as weather forecasting, fall outside this scope, AI systems that are trained, tested, or deployed using personal data must adhere strictly to data protection law. This extends to decisions or predictions about individuals, which are themselves treated as personal data under the GDPR.


Technology-Neutral Legislation

Interestingly, the GDPR and the DPA 2018 are technology-neutral; they do not explicitly mention AI. However, they focus significantly on automated processing, profiling, and automated decision-making. This encompasses AI technologies, especially when used for predictions or recommendations about individuals.


Rights and Obligations Under GDPR

The GDPR establishes several rights and obligations in the context of AI-assisted decisions:

  1. Right to be Informed: Individuals have the right to know about automated decision-making processes, including the logic, significance, and potential consequences involved (Articles 13 and 14).
  2. Right of Access: This includes access to information on automated decision-making and its rationale (Article 15).
  3. Right to Object: Individuals can object to the processing of their personal data, particularly for profiling purposes (Article 21).
  4. Rights related to Automated Decision-Making: These provisions ensure individuals are not subjected to solely automated decisions that significantly affect them without safeguards such as human intervention and the ability to contest the decision (Article 22).
  5. Data Protection Impact Assessments (DPIAs): Organisations must conduct DPIAs for processing personal data using new technologies like AI, particularly when it poses a high risk to individuals (Article 35).


Explanations in AI-Assisted Decisions

The GDPR mandates that individuals receive explanations for AI-assisted decisions that significantly affect them. This is crucial where decisions are made without human involvement. Compliance involves providing meaningful information about the logic, significance, and consequences of such decisions.


The Principles of Fairness, Transparency, and Accountability

The GDPR principles of fairness, transparency, and accountability are particularly relevant to AI. Fairness involves assessing how the use of personal data affects individuals, ensuring their autonomy and self-determination. Transparency requires openness about how and why personal data is used in AI systems. Accountability involves demonstrating compliance with these principles, including providing explanations for AI-assisted decisions.


Additional Considerations under DPA 2018

The DPA 2018 provides further provisions for solely automated decisions in law enforcement (Part 3) and by intelligence services (Part 4). These include rights to human intervention and explanations of decisions, although national security exemptions may apply.


Conclusion

As AI continues to advance, aligning its applications with the GDPR and the DPA 2018 becomes increasingly critical. Organisations must navigate these regulations with a clear understanding of the rights and obligations they entail. This includes conducting DPIAs, ensuring fairness, transparency, and accountability in AI-assisted decisions, and respecting data subjects’ rights. Adhering to these principles not only ensures legal compliance but also builds trust and credibility in the age of AI and data protection.