Out-Law News

ICO makes data protection recommendations on AI recruitment tools


Employers in the UK who use artificial intelligence (AI) tools for recruitment need to pay attention to their data protection obligations and have oversight of the AI provider’s data protection compliance processes, the UK’s data protection regulator has urged in a recent report.

The report (50-page / 514KB PDF), published by the Information Commissioner’s Office (ICO), sets out key findings and recommendations following consensual audits the regulator carried out with providers of AI-powered sourcing, screening and selection tools used in recruitment. The recommendations are addressed to both AI providers and recruiters.

In the report, the ICO recognises that “shifting the processing of personal information to these complex and sometimes opaque systems comes with inherent risks to people and their privacy”. It also accepts that new technology can bring significant benefits and can even improve fairness in processing, through greater consistency and more timely responses. The regulator intends to build regulation that facilitates the use of technology while growing public confidence.

Employers are particularly reminded of their obligations to protect personal data when they use AI tools for recruitment or other workforce applications. The ICO recommends that, to ensure the tools are used lawfully, employers not only have oversight of their AI providers’ data protection compliance processes, but also put appropriate processes of their own in place. Alongside its recommendations, the report contains case studies, which offer useful context around recommended practices and processes.

Stephanie Paton, an employment law expert at Pinsent Masons, said that the multitude of recommendations made by the ICO demonstrates the complexity of legal compliance for employers using AI tools in recruitment.

“Employers should understand that an AI solution is not a quick efficiency fix. However, if compliance work is put in at the outset with a trusted AI provider, with good ongoing monitoring, the efficiencies and other benefits of AI can be reaped in a properly regulated environment. This is what the ICO wants to encourage,” she said.

According to the report, there was strong agreement among project participants that the ICO’s audit and recommendation process improved their understanding of data protection requirements. Paton noted that employers should consider carrying out a similar internal audit process.

“It seems unlikely that project participants are alone in conceding that their understanding of AI and data protection needed to be improved. Employers looking at this report may also want to be reflective on their current understanding. Carrying out a similar internal audit process may be helpful to assess not only legal risks, but whether tools are working in the best way for the business,” she said.

The ICO’s recommendations focus on several areas, including data minimisation and purpose limitation. The ICO was positive in its findings on developers assessing the minimum personal information needed to operate AI tools effectively. However, it identified some compliance gaps where personal data originally collected for one purpose was repurposed for another. The report also finds that, while some data retention policies were in place, there was often room for improvement; the ICO encourages ‘weeding’, or deleting, personal information that is no longer needed, especially information likely to be inaccurate or out of date.
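By way of illustration only, a scheduled ‘weeding’ job might look something like the sketch below. The ICO’s report does not prescribe any implementation: the record structure, field names and retention period here are hypothetical, and the appropriate retention period is a matter for each organisation’s own retention policy.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative retention period only -- the real figure should come from
# the organisation's own documented retention policy, not this sketch.
RETENTION_DAYS = 365

@dataclass
class CandidateRecord:          # hypothetical record structure
    candidate_id: str
    last_updated: datetime      # when the record was last confirmed accurate

def weed_stale_records(records: list[CandidateRecord]) -> list[CandidateRecord]:
    """Return only records still within the retention period.

    Records older than the cut-off are candidates for deletion
    ('weeding'), since stale data is more likely to be inaccurate
    or out of date.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r.last_updated >= cutoff]

# Example: a two-year-old record is weeded out, a recent one is kept.
records = [
    CandidateRecord("a1", datetime.now(timezone.utc) - timedelta(days=730)),
    CandidateRecord("b2", datetime.now(timezone.utc) - timedelta(days=30)),
]
kept = weed_stale_records(records)
print([r.candidate_id for r in kept])  # ['b2']
```

In practice, such a job would also log what it deletes and why, so the retention decision itself remains auditable.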

The report makes certain recommendations on using personal information to train and test AI. As part of its effort to ensure AI tools produce consistent and reliable outputs without bias, the ICO wants recruiters to seek assurances from developers about the steps taken to minimise bias relating to personal characteristics, and about whether the provider is using the employer’s own candidates’ personal information to train, test or develop its AI.

The ICO’s audits found that, in most cases, AI providers considered the accuracy of their AI tools during development and periodically after launch. However, in one case an AI provider had not formally assessed the accuracy of its AI tool, instead relying on it being ‘at least better than random’. Although bias monitoring was generally commonplace, weaknesses were identified in relation to protected characteristics other than gender, ethnicity and age. There was also a failure to recognise that inferring people’s personal characteristics from their personal data creates ‘special category’ data, which carries additional processing obligations. In response to these shortcomings, the report sets out some suggestions on how this can be improved.
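The report does not mandate any particular bias metric. As a purely illustrative sketch, one common check compares selection rates across groups for a single characteristic; the age bands, sample data and 0.8 threshold below are assumptions (the threshold is borrowed from the US ‘four-fifths rule’ convention, not from the ICO report).

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the share of candidates selected within each group.

    `outcomes` pairs a group label (here, a hypothetical age band)
    with whether the candidate was shortlisted by the tool.
    """
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical monitoring run over one characteristic.
outcomes = [("18-30", True), ("18-30", True), ("18-30", False),
            ("31-50", True), ("31-50", False), ("31-50", False)]
rates = selection_rates(outcomes)
ratio = adverse_impact_ratio(rates)
print(rates, ratio)
if ratio < 0.8:  # illustrative threshold only
    print("Selection-rate disparity detected: investigate for bias.")
```

A real monitoring process would repeat checks of this kind across every protected characteristic, not just gender, ethnicity and age, which is precisely the gap the ICO identified.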

Weaknesses in meeting transparency obligations were also observed by the regulator, particularly insufficient privacy notices and a lack of clarity over whether the AI provider or the recruiter was responsible for informing candidates about how their personal information was being processed. Insufficient detail in data protection impact assessments was another area of concern. The report recommends steps to improve on these points.

The report also highlights the importance of human oversight in ensuring that automated recruitment decisions are not relied upon where AI tools were not designed for that purpose. Where automated decisions are made, there should be a simple way for candidates to challenge them. Other recommendations in the report include steps employers and AI providers can take to improve data security and data breach processes, and the importance of data protection governance structures within an organisation.

“Employers should monitor for further developments from the ICO. The report does not include AI used to process biometric data, such as emotion detection in video interviews, and separate guidance on biometric data will be produced. It also did not cover tools using generative AI, such as for chatbots and drafting job adverts or role descriptions, but the ICO will continue to explore risks around this,” said Paton.

She added that employers reviewing data protection and AI compliance should also ensure that these efforts are joined up with compliance processes under equality laws, as there is clear overlap, and AI is also a stated focus of the Equality and Human Rights Commission (EHRC).
