Welcome to our ‘AI in Employment Law’ series. In this collection, our Employment and Commercial teams follow the life cycle of an employment relationship to touch on some of the key opportunities and risks involved with artificial intelligence at each stage of employment. Our first article covers what employers should be considering when procuring ‘off the shelf’ AI tools, and subsequent articles will go on to consider the use of AI in the first stages of employment – i.e. attracting, recruiting and onboarding talent – the middle stages of employment – i.e. management and retention – and the final stages of employment – i.e. exits and termination of employment.
To discuss any of the issues raised in this series, please contact Anne Todd (Commercial: Technology & Data Protection) and Lynsey Blyth (Employment and Immigration).
Key considerations when acquiring “off the shelf” AI HR tools
There are various HR-focussed artificial intelligence (AI) tools which aim to make life easier for employers, such as tools for preparing job descriptions, CV screening, workforce management, shift scheduling and performance management. While these tools no doubt help streamline processes and increase efficiency, they come with risks which need to be understood and properly managed to protect your business and ensure that you comply with your legal obligations. It's therefore vital that businesses conduct proper due diligence before acquiring AI tools to ensure they are using AI in a legally compliant manner.
So, if you’re looking to procure an AI tool from a technology company, what should you have in mind?
1. Configuration to UK laws: while there is currently no overarching statutory regulation of AI in the UK, you must still comply with applicable UK laws. If you’re acquiring AI tools created overseas, they are likely to have been developed to comply with the requirements of another legal system and may not be configured to comply with UK laws. Even tools created in the UK may not be compliant with all the relevant legal requirements.
For example, does a shift management tool consider the requirements for rest breaks under the UK’s Working Time Regulations? Are employees allocated shifts in a way which complies with the Equality Act 2010? Is there a risk that shifts are scheduled in a way that creates an indirect discrimination risk (e.g. if it is having a disproportionate impact on women with childcare responsibilities)?
Similarly, has the AI tool been developed in such a way that it will enable you to process personal data in a way which is compliant with your obligations under UK data protection law?
2. EU AI Act considerations: the EU AI Act has extra-territorial effect, and if you have staff based in the EU, or if you may recruit staff from the EU, its requirements should also be considered. Failure to comply with the Act's requirements can result in significant fines of up to €35 million or 7% of annual worldwide turnover. The EU AI Act bans certain categories of AI, including AI which may be used to infer employees' emotions and AI which is purposefully manipulative or which uses deceptive techniques, exploiting individuals' vulnerabilities. AI which categorises people based on biometric data to infer things such as race is also banned. The Act also imposes strict regulatory compliance obligations on the deployment of certain "high risk" AI tools, which include certain biometric AI tools and AI tools used for HR decisions, allocating tasks, and/or monitoring and evaluating the performance and behaviour of employees. Before deploying an AI tool, you should consider how its use may be regulated under the EU AI Act, and make sure that you understand your obligations and the steps you will need to take to comply with them.
3. Understanding the data and ensuring compliance with data protection law: an AI tool is only as good as its data and it’s important that you understand what data has been used to train the AI tool. Good quality data is key. Employers should ensure that they have sufficient understanding of the data used in the training of the AI tool to identify potential bias in the outputs. This needs to happen not only at the procurement stage, but throughout the lifetime of an AI tool, particularly when there have been updates to the AI tool.
Under data protection law, as a data controller, you are required to ensure that your use of AI tools complies with data protection requirements. As a preliminary measure, you will need to undertake a Data Protection Impact Assessment to ensure that you understand what personal data the AI tool processes; how, where, and by whom that data will be processed; and to assess the data protection risks presented by use of the AI tool, including whether and how those risks can be mitigated. You will also need to identify a lawful basis for processing employee personal data.
Another key requirement will be to ensure that you enter into an appropriate data processing agreement with the supplier, which is compliant with UK GDPR. You will also need to satisfy yourself that the data will be securely protected from data breaches, with the supplier taking all appropriate information security measures. If the personal data is being processed outside of the UK, you will also need to consider the requirements governing international transfers of data.
4. Transparency and explainability: you must also be able to explain the decisions or outputs of the AI. For example, if a candidate is rejected, you must be able to explain why. When procuring an AI tool, you will need to ensure that the vendor is able to provide you with the assurance that decision-making processes can be traced, so that clear explanations as to how that decision was made can be given to the individuals affected by the outcome.
5. Updating policies and delivering training: you should ensure that you have an AI governance framework and use policy in place which sets out clear boundaries for permitted and prohibited use. This should cover both the business’s use of AI tools, as well as staff use of AI. You should also update relevant policies to account for any AI-related amendments. For example, your equal opportunities, anti-harassment and bullying and social media policies (amongst others) will all need reviewing.
From a data protection perspective, transparency about the use of AI is key, and your privacy notices will need to be updated to disclose and explain the use of the AI tool, as will your data protection policy and electronic communication policy.
Alongside updated policies, staff should receive training on the use of AI. The training will differ depending on their role – for example, HR staff should receive tailored training on specific AI tools, covering how to use them responsibly and ethically, how not to use them, and common pitfalls and risks. Training should also ensure that users are able to question and challenge AI-driven outputs.
The EU AI Act specifically requires deployers of AI tools to take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI tools.
6. User beware: it goes without saying that while AI can assist you in making decisions, it should not ultimately be making those decisions for you. All outputs produced by AI will need thorough human oversight and, ultimately, a human needs to take responsibility for the decision made. Staff should be made aware of the limitations and risks of AI and should be encouraged to challenge any unusual outputs and to report any issues they encounter. Guidance on its use should be clearly set out in your relevant policies, and staff should be aware that any breach could be dealt with under your disciplinary policy.
7. Contracting with suppliers: having an appropriate contract in place with the vendor of the AI system is critical. Where you are procuring "off the shelf" AI, the vendor's standard terms will be balanced very much in their favour and are unlikely to offer adequate legal assurance or remedies if something goes wrong or if the system does not perform as expected. It will be particularly important to ensure that vendors are contractually obliged to maintain records and logs of the system's functioning, performance and use, and that they make these available to you on request. Vendors' standard terms are unlikely to offer any guarantees regarding the data on which the system has been trained, that the AI is suitable for its intended purpose or that it will comply with applicable laws generally (let alone applicable UK laws). A review of the vendor's terms by lawyers who understand the employment law considerations as well as data protection and AI risk and legal requirements is highly recommended.
8. Highlighting historical non-compliance: finally, it’s not unheard of for the process of rolling out a new AI tool to unearth instances of historical non-compliance with applicable law. In these circumstances it’s important you take advice to understand potential liabilities and how to manage the associated risks before the AI tool is deployed.
While AI can certainly lead to better and more robust decision making, this can only happen if appropriate safeguards are in place. While it’s important to consider these issues at the procurement stage, they should also be addressed on a rolling basis, particularly where there have been any updates to the AI tools.
We help employers procuring and implementing AI tools and offer pragmatic and time-sensitive support. This can include undertaking an audit to identify applicable employment law, data protection and EU AI Act requirements and risk areas. We can work with you to create a strategy for managing any risk identified and complying with applicable law. We can also advise on contract negotiations with vendors to ensure that you have appropriate contractual assurance. Should you wish to discuss this further, please contact Anne Todd (Commercial: Technology & Data Protection) and Lynsey Blyth (Employment and Immigration).