
The use of artificial intelligence in recruitment

As outlined in our previous article, regulators are taking enforcement action against organisations that are misusing technology to excessively monitor employees or use data in an intrusive manner that lacks transparency. Such risks are particularly relevant in the context of artificial intelligence (AI) in the workplace, perhaps even more so during the recruitment process. Given how quickly AI is developing – and how regularly it is being deployed – this is something that regulators will no doubt be keeping a close eye on.

To help organisations get to grips with the responsible use of AI technology in HR and recruitment, the Department for Science, Innovation & Technology, along with other organisations, published guidance at the end of March, which provides practical tips and recommended practices for businesses looking to integrate AI into their recruitment processes.

The guidance is user-friendly and acknowledges that AI tools in HR and recruitment can simplify existing processes and promote greater efficiency, scalability and consistency. However, such tools also pose novel risks, such as digital exclusion (for applicants who may not be proficient in, or have access to, the technology) and the perpetuation of existing biases and discrimination.

The guidance focuses on managing risk at the procurement and deployment stages of AI investment. In terms of procurement, it sets out a number of measures (with a particular focus on risk assessments) which organisations can implement as assurance mechanisms to satisfy themselves that the AI tools are legally and ethically compliant. Other suggested measures include bias audits and performance testing to assess the integrity of the product and appraise the accuracy of any claims made by the supplier.
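By way of illustration only (the guidance does not prescribe a particular methodology), a simple bias audit of an AI screening tool might compare shortlisting rates across demographic groups and flag large disparities for further review. The short Python sketch below uses hypothetical applicant data and an illustrative 0.8 ("four-fifths") threshold; both are assumptions for the purposes of the example, not requirements of the guidance.

```python
# Illustrative sketch only: a simple selection-rate comparison that a bias audit
# of an AI screening tool might include. The data and threshold are hypothetical.
from collections import defaultdict

applicants = [
    # (demographic group, shortlisted by the AI tool?)
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, shortlisted in applicants:
    totals[group] += 1
    if shortlisted:
        selected[group] += 1

# Selection rate per group, and the ratio of each group's rate to the highest rate.
rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())
for g, rate in rates.items():
    ratio = rate / best if best else 0.0
    flag = "review" if ratio < 0.8 else "ok"  # 0.8 is an illustrative threshold, not a legal test
    print(f"{g}: selection rate {rate:.0%}, ratio to highest {ratio:.2f} -> {flag}")
```

A check of this kind is only one component of a bias audit; in practice, organisations would also consider sample sizes, intersectional effects and the statistical significance of any disparity, ideally with input from the supplier and appropriate expertise.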

Once an organisation has procured a third-party AI system, the guidance makes a number of recommendations on measures to put in place before deploying the product. These include undertaking a pilot involving a diverse range of users, ensuring employees are sufficiently trained, assessing performance against equalities outcomes, and considering and planning what reasonable adjustments may need to be made.

Finally, once the AI system is in place, the guidance recommends ongoing monitoring to ensure compliance and to identify any issues which may need resolving. This may require the co-operation of the AI system's supplier, so organisations should also consider what ongoing support they will need from the supplier.

From an employment law perspective, the risk of discrimination in adopting AI for HR and recruitment processes is significant, particularly as a result of learnt bias. A tool which is tainted by bias risks perpetuating existing prejudices by continuing recruitment trends which leave certain groups underrepresented. Another discrimination risk is that AI systems may put disabled applicants at a substantial disadvantage; organisations therefore need to be mindful of their duty to make reasonable adjustments to the process so that disabled applicants can participate on an equal basis.

The guidance contains pragmatic measures which organisations can adopt to manage these discrimination risks. As well as impact assessments, ongoing quality monitoring and acting on user feedback, a key safeguard will be ensuring that an employee with sufficient expertise makes any final decisions. This may well involve training and upskilling existing employees. For further details on other risks presented by AI in the workplace and how to manage them, please see our earlier article.

Employers wishing to use AI systems where personal data is used to train, test or deploy the system must also comply with their data protection obligations. A data protection impact assessment (DPIA) should be undertaken for all development and deployment of AI systems involving personal data, in order to identify and minimise data protection risks.

Where AI is used in recruitment, employers should demonstrate accountability by being transparent with candidates, through privacy notices, about the use of AI and how it is applied. It is also important to explain to candidates whether recruitment decisions are being made based on automated processing, including profiling.

While profiling can help organisations make decisions more quickly and efficiently, it can have significant adverse effects on candidates, such as perpetuating existing stereotypes and discrimination. Individuals must have the right to opt out of automated decision-making except where it is authorised by law, based on the individual's explicit consent or necessary for a contract between the individual and the organisation. Even where an individual has consented or the processing is necessary for a contract (as opposed to being authorised by law), there must be a mechanism allowing the individual to challenge the decision or to require human intervention. The ICO has developed a toolkit providing practical support to organisations to reduce the risks to individuals' rights and freedoms caused by their AI systems, and has also published guidance on explaining decisions made with AI.

It is of course permissible (and arguably sensible) for organisations to explore the use of AI in recruitment and HR to establish how it can streamline processes and support their operations. However, this must be done in an ethical and responsible way, building privacy and compliance by design into workplace systems. Failing to carry out the necessary risk assessments and to put appropriate measures in place risks exposing the organisation not only to regulatory enforcement action, but also to other legal claims, employee engagement issues and reputational harm.

To discuss any of the issues raised in this article, please contact Anne Todd (Commercial, Technology and Data Protection) or Robert Forsyth (Employment).
