Artificial intelligence: Risks in recruitment


The sophistication of ChatGPT (an AI chatbot) following its launch in November 2022 was a shock to many. As AI becomes more capable and increasingly prevalent in day-to-day life, more and more employers are choosing to use it to improve the efficiency of their recruitment processes. The technology initially grew in popularity in the United States, but UK employers are now waking up to the time and cost that AI could save them in recruitment. However, the popularity and widespread use of these AI tools should not fool employers into thinking they are without risk. For employers planning to use, or already using, AI in their recruitment processes, we discuss some important considerations below.

Examples of AI use in recruitment

The current use cases for AI in the recruitment process include the following:

  • Targeted advertising of roles: Social media platforms can use algorithms to show job advertisements to individuals with particular characteristics (in the same way that product advertising will be targeted at particular demographics).
  • CV screening: AI systems can be trained to identify an “ideal” candidate based on the content and style of a CV.
  • Social media screening of candidates: Employers have used AI to search candidates’ social media accounts as part of background checks.
  • Analysis of video interviews: Interviewees’ facial expressions and body language are compared to those deemed to be high performers in order to identify the strongest candidates.

“Profiling”, the automated processing of personal data to evaluate certain personal aspects of an individual, has already been identified as a high-risk data use and is therefore subject to safeguards under the GDPR. Leaving the data protection risks to one side, relying on AI to make recruitment decisions could lead to discrimination claims from unsuccessful applicants.

AI was initially touted as the solution to the human biases embedded in the recruitment process. It was hoped that, by taking an objective approach based only on relevant data sets (and ignoring other data, e.g. gender or nationality), employers would be able to identify the candidate best suited to the role. However, this has turned out to be less straightforward. Humans have inherent biases which will inevitably be reflected in the data used to train the AI. If historic bias has led to certain demographics being preferred for job offers or promotions, then the AI system will learn to prefer candidates who fit those demographics and perpetuate the bias against under-represented groups.

For example, Amazon built a model to screen applicants’ CVs but found that, as a result of historical male dominance in the industry, it favoured men over women for technical roles. The system penalised CVs that referenced the word “women’s”, e.g. “women’s college” or “women’s football team”.
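The dynamic can be illustrated with a deliberately simplified sketch. The data and the token-counting scorer below are entirely hypothetical (no commercial system works this crudely), but they show the underlying point: a model trained only on historical outcomes learns the bias in those outcomes, not anything about ability.

```python
# Purely illustrative sketch (hypothetical data): a naive CV scorer trained
# on biased historical hiring decisions reproduces that bias.
from collections import defaultdict

# Hypothetical hiring history of (CV text, hired?) outcomes, in which
# otherwise similar CVs mentioning "women's" were rejected.
history = [
    ("python developer chess club captain", True),
    ("java engineer rowing team captain", True),
    ("python developer women's chess club captain", False),
    ("java engineer women's college rowing team", False),
]

def train(history):
    """Weight each token by how much more often it appears in hired CVs."""
    counts = defaultdict(int)
    for cv, hired in history:
        for token in cv.split():
            counts[token] += 1 if hired else -1
    return dict(counts)

def score(weights, cv):
    """Score a new CV as the sum of its learned token weights."""
    return sum(weights.get(token, 0) for token in cv.split())

weights = train(history)
# The model has learned nothing about competence: "women's" simply carries
# a negative weight, so an otherwise identical CV mentioning it scores lower.
```

Nothing in the training step refers to gender; the bias enters solely through the historical labels, which is why removing obvious fields such as gender from the input data does not, on its own, produce a neutral system.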

Arguably, the potential for discriminatory outcomes from decisions made by AI is lower than, or at least no higher than, that of human decisions. However, the challenge for employers is that they are unlikely to understand how the AI works, and so will be unaware of any embedded bias and unable to explain the decision-making process to candidates if questioned. In other words, a manager would struggle to explain to an Employment Tribunal the rationale for a hiring decision made by an AI system, which could well lead a Tribunal Judge to draw adverse inferences. Aside from the discrimination risk, there is the commercial risk of missing out on top candidates because the AI system, lacking human judgement, draws irrational conclusions from the data.


Although the AI recruitment market is growing rapidly, most employers do not understand how the technology works. Further, the majority of AI recruitment software originates from the United States and so while it may be compliant with US equality laws, it will not have been tested for compliance with the law of the UK or continental Europe. Until the technology has been properly scrutinised we would suggest employers exercise caution in relying on AI to make recruitment decisions.

However, for employers who deem the commercial benefits to outweigh the risks, we advise carrying out thorough due diligence: employers should question potential providers to understand how their systems have been trained to eliminate bias, particularly in the context of UK discrimination laws. Employers should also continue to monitor the output of their chosen AI system for any potentially discriminatory patterns.

Future regulation

Just as many employers do not understand how their AI recruitment tools work, nor do potential employees. So whilst the above risks exist, job applicants face an uphill battle in proving that a decision made by AI was discriminatory, not least because they may not even know that a potential employer has used AI in its recruitment process. The EU’s proposed regulation laying down harmonised rules on artificial intelligence (the “AI Act”) may change that.

The current draft of the proposed regulation distinguishes between three types of AI system: (i) prohibited AI systems; (ii) those that are high risk; and (iii) non-high-risk systems. The proposed wording of the AI Act defines the following as high risk: “AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests”. This means that providers and users of such AI systems will be subject to significant compliance obligations, including in relation to the provision of information and transparency. As a result, job applicants will be better informed about the AI systems used in a recruitment process and therefore better placed to bring a potential discrimination claim.

Whilst UK employers will not be directly affected by the AI Act (details of any UK AI regulation are yet to be determined, but the UK’s approach is set to be more flexible and “pro-innovation” than the EU’s), many international employers will opt to take a global approach and comply with the AI Act’s standards and obligations worldwide. Employers should also be aware that once the AI Act comes into effect it will cause significant disruption to the AI recruitment industry: many of the tech start-ups currently operating in this space will lack the capacity to comply with the significant obligations placed on them, and so will cease operating or be forced to raise their prices.

For a discussion on AI in the context of the ongoing employment relationship, see here for our thoughts on its uses in employee performance assessment and work allocation.

What a time to be (AI)live!

On 25 April 2023, we welcomed a panel of industry experts to explore the commercial, legal and ethical questions prompted by the rapid rise of Generative AI.

Read a summary of the event, and watch the recordings, here.