

The recent guidance from the Department for Education (DfE) on utilising generative artificial intelligence (AI) in education, such as large language models (LLMs) like ChatGPT or Google Bard, presents a significant shift in the educational landscape. This movement aligns with the UK government’s white paper advocating a pro-innovation approach to AI regulation and establishing the Frontier AI Taskforce.

Generative AI, capable of producing diverse content, including text, audio, images, and videos, offers immense opportunities for the education sector. It can notably reduce workload, facilitate the creation of educational resources, and enhance learning experiences. However, these advances also bring challenges for schools, particularly concerning data privacy and the protection of sensitive information.


Understanding the Implications for Data Privacy

Schools are custodians of a vast amount of personal and sensitive data belonging to students and staff. With the advent of generative AI tools in educational settings, ensuring that any data used complies with data protection laws is crucial. The DfE’s focus on this aspect underlines the need for schools to be vigilant about how AI tools are deployed and how data is processed.


Steps Schools Should Take

  1. Review Data Handling Protocols: Ensure personal and sensitive data is processed in accordance with existing data protection legislation. This involves assessing how generative AI tools handle data and whether they align with the school’s data privacy policies.
  2. Cyber Security Enhancement: Given the sophisticated nature of AI, schools should fortify their cyber security measures. This includes adhering to established cyber standards and updating policies to address potential AI-related threats.
  3. Intellectual Property Awareness: Schools must be conscious of the intellectual property (IP) rights associated with content created by pupils and staff. It’s vital to obtain proper consent before using original work to train AI models.
  4. Educate and Inform: Both staff and students should be made aware of the implications of using generative AI, including the limitations and potential biases of the technology. Understanding the landscape of AI is crucial in leveraging its benefits while mitigating risks.
  5. Policy Adaptation: Educational institutions should adapt policies, such as homework and assessment guidelines, to account for the availability and use of generative AI tools, ensuring the integrity of academic work.
  6. Update School Privacy Notices: Ensure transparency about the use of AI in processing the data of staff, students, parents, and guardians.


Embedding a Culture of Responsibility

Embedding a culture of responsibility and awareness is essential. This involves training educators and students to critically evaluate AI-generated content and understand the nuances of data privacy and security. Schools must also ensure that their use of technology aligns with the overarching goal of delivering quality education.


Looking Ahead

As the DfE continues collaborating with experts in shaping the future use of generative AI in education, schools must stay proactive and informed. Balancing the innovative potential of AI with the need to protect personal data and uphold educational standards is the cornerstone of this new era in education.

In conclusion, while generative AI offers transformative prospects for the education sector, it is accompanied by significant responsibilities regarding data privacy and security. Schools must navigate this terrain with a comprehensive understanding of the technology, adherence to legal frameworks, and a commitment to protecting the personal and sensitive data of their students and staff. This approach will not only harness the potential of AI but also ensure a safe, secure, and forward-looking educational environment.