7 May 2025

Recruiting with Robots: Using AI Recruitment Tools – the GDPR Implications

By Sue White, Information Governance Manager

How can organisations strike the perfect balance between embracing innovative AI recruitment tools and staying compliant with the law?

Employers involved in recruitment often feel overwhelmed by the workload of managing vast numbers of applications for single roles. It is no wonder that many organisations have turned to AI recruitment tools to assist with these time- and labour-intensive tasks.

AI recruitment tools can assist with sourcing potential candidates via the internet and on social media platforms, screening applications, scoring, and shortlisting candidates, contacting applicants and even undertaking initial interviews. Whilst it’s easy to see that the use of these AI tools can benefit employers, it’s important to consider how this might unfairly impact decisions about candidates and how it may lead to risks to their privacy and information rights.

Key GDPR Considerations for using AI Recruitment Tools

  1. Lawfulness and Transparency
    This includes clear and detailed communication to candidates about how AI processes their personal data. As the Data Controller, the recruiting organisation must explain how these tools work, rather than relying solely on AI providers for transparency. Organisations should engage with their AI provider to understand how the tools process candidates’ personal data so they can explain this clearly and adequately.

Organisations must have (and document) their lawful reason for processing personal data and be upfront about their use of AI tools in recruitment.

  2. Fairness and Accuracy Without Bias
    Organisations must manage personal data in ways that respect individuals’ reasonable expectations and adhere to the Fairness Principle. However, AI’s reliance on its training data can introduce inaccuracies and biases, such as inferring gender or ethnicity from a candidate’s name or estimating age based on education history. Such unreliable practices risk perpetuating unfair decisions, which is especially high-risk when handling vast quantities of applications.

Organisations should test their AI tools for bias and review outputs regularly to avoid discrimination.
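As one illustrative way of testing outputs for bias, organisations could compare shortlisting rates across groups of candidates. The sketch below is a minimal, hypothetical example (the group labels, data and 0.8 threshold are assumptions, not a legal test): it applies the "four-fifths" rule of thumb, which flags potential adverse impact when the lowest group's selection rate falls below 80% of the highest.

```python
# Illustrative bias check on an AI shortlisting tool's outputs.
# Groups, data and the 0.8 threshold are assumptions for this sketch,
# not a definitive or legally sufficient test for discrimination.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of booleans (True = shortlisted)."""
    return {group: sum(results) / len(results) for group, results in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two candidate groups
outcomes = {
    "group_a": [True, True, False, True, False, True, True, False],   # 5/8 shortlisted
    "group_b": [True, False, False, False, True, False, False, False],  # 2/8 shortlisted
}

ratio = adverse_impact_ratio(outcomes)
if ratio < 0.8:  # common rule-of-thumb threshold
    print(f"Potential adverse impact: ratio = {ratio:.2f}")
```

A check like this only surfaces a statistical disparity; any flagged result would still need human investigation of the tool, its training data and the affected decisions.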

  3. Data Minimisation
    While AI thrives on extensive data to improve its performance, data protection laws require that only necessary personal information is processed. The ICO’s audit in November 2024 found that some AI recruitment tools collected far more personal information than was needed and, in some cases, scraped and combined personal data from job networking sites and social media to build databases without the knowledge of the data subjects or the recruiters.

Organisations should balance their needs carefully, document their rationale, work with their AI tool providers and periodically review their data usage.

  4. Decision-Making Accountability
    AI tools can assist recruiters with scores or insights, but human review is essential to ensure accuracy and fairness. Candidates have a legal right to a clear and accessible process for objecting to automated decisions and requesting human intervention where those decisions significantly affect them.

Organisations must ensure that recruitment decisions are not fully automated and include appropriate human oversight.

  5. Data Protection Impact Assessments (DPIAs)
    The innovative nature of AI tools often poses high risks to individuals’ rights, making DPIAs a legal necessity. These assessments help organisations identify risks, evaluate alternatives, and implement safeguards. Regular reviews are vital to maintaining compliance.

Organisations must document their AI practices using Data Protection Impact Assessments (DPIAs).

  6. Security Measures
    From vulnerability scanning to technical controls, robust measures must protect both the personal data and the algorithms themselves.

Organisations must prioritise the security of AI systems and ensure adequate testing and review.

Responsible Recruitment with AI

By understanding and addressing these considerations, organisations can harness AI’s potential while staying aligned with legal obligations and ethical principles. AI may not replace human recruiters, but it can be a powerful ally when used thoughtfully and responsibly.

Naomi Korn Associates will soon be launching a template DPIA for AI projects. Sign up to our newsletter to receive this, or check our resources page for more information: Resources — Naomi Korn Associates

Further help

If you found this useful and want to further enhance your knowledge of data protection considerations when using AI tools, book your place on our AI and Information Law: Privacy and Ethical Considerations course now: 24 September 2025, 9:30am-1pm

We also provide CPD UK Accredited training on other aspects of data protection through our Intermediate and Advanced Certificates. For further information please view our courses or contact our Training Manager at info@naomikorn.com.
