The purported benefits of AI, together with the new challenges posed by remote working, have led many employers to consider how the technology might be used to monitor employee performance and allocate work. Below, we discuss the challenges associated with using AI in these ways, as well as mitigation strategies and future considerations for employers in light of forthcoming regulation. For those interested in using AI during the recruitment process, see here for our discussion of some key considerations for employers.
Examples of AI use in performance management and task allocation
Performance management: AI can be used to assess employees’ productivity, for example by monitoring the time an employee spends on certain tasks (writing an email, using particular applications, or reading articles such as this one). More sophisticated AI could be implemented to track an employee’s eye movements or monitor their body language. That data could then be used to inform promotion or redundancy decisions, or even to quantify the likelihood of an employee leaving the company by comparing current engagement and work output against their previous benchmarks.
Task allocation: In the same vein, AI could be used to automatically assign new work or more favourable shifts to purportedly higher-performing employees, or to allocate additional tasks to employees with greater capacity in order to enhance staff efficiency and output.
Discrimination: The prospect of using AI in employment to circumvent human bias is understandably attractive to employers. However, employers must remember that the algorithms driving AI are created by humans and trained on historical data that has been shaped by past human bias. As such, AI-based tools will inevitably remain subject to at least some of the prejudices of their creators and their training data.
A recent decision in Italy against Deliveroo provides a useful example of the risks associated with using AI to allocate work. The AI tool adopted by Deliveroo ranked its riders based on reliability and availability to work at peak times, such that riders with a higher score were given earlier access to Deliveroo’s slot-booking system. The court concluded that the algorithm indirectly discriminated against women, who would be more likely to cancel at the last minute or be unavailable during peak times due to caring responsibilities.
Implied duty of trust and confidence: Employers have a duty to treat their employees in a way that will not damage the relationship of trust and confidence between them; the use of AI performance monitoring tools may lead to a breach of this duty, in particular where such use leads to employers making decisions based solely on an AI-generated recommendation without human intervention or scrutiny.
Employees’ right to privacy: The upsurge in remote working in recent years will continue to entice employers to implement AI-driven technology that can monitor employees working from home. Such technology presents risks to employees’ right to privacy under Article 8 of the European Convention on Human Rights and the Human Rights Act 1998. Employers should be aware that some AI monitoring tools are intrusive and risk being found contrary to the principles of Article 8; if an employee were to bring a claim on this basis, the employer would need to show that the tool’s interference with employees’ privacy rights under Article 8 is: (i) in accordance with the law; (ii) in pursuit of a legitimate aim; and (iii) necessary in a democratic society.
As with our recommendations in our previous recruitment article, we suggest employers avoid placing undue reliance on AI tools for performance management and task allocation until the technology has been fully and properly scrutinised for compliance with UK law.
Employers that do utilise AI in their performance management or task allocation processes must take adequate steps to understand how the AI operates and the historical data used to train its algorithms, so that they are equipped to defend against potential discrimination or unfair dismissal claims if, for example, the employer follows the AI tool’s recommendation to dismiss an “underperforming” employee.
To maintain a relationship of trust and confidence, employers must be transparent and communicate with employees about the purpose, implementation and function of any AI tools they use for performance management or work allocation. Once an AI system has been implemented, employers should routinely review its output and performance to check for potential bias or erroneous decision-making that could develop over the lifetime of the product. Where possible, we also suggest employers provide specialised training for managers who will be using AI to inform their decision-making.
As outlined in our previous article, the EU’s proposed regulation (the AI Act) identifies the use of AI in recruitment as “high risk” and therefore subjects it to more rigorous regulation. As drafted, the AI Act also classifies as high risk AI systems “intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships”.
Whilst UK employers will not be bound by the AI Act, international employers will likely opt to abide by its rules worldwide. Such employers should therefore be alive to their obligations under the AI Act, in particular when considering whether to implement any new AI systems. For employers that already utilise AI tools for performance management and task allocation, it would be prudent to review the functionality of such systems and discuss with the provider its proposed steps to ensure future compliance with the AI Act.
What a time to be (AI)live!
On 25 April 2023, we welcomed a panel of industry experts to explore the commercial, legal and ethical questions prompted by the rapid rise of Generative AI.
Read a summary of the event, and watch the recordings, here.