Welcome to our ‘AI in Employment Law’ series. In this collection, our Employment and Commercial teams follow the life cycle of an employment relationship to touch on some of the key opportunities and risks involved with AI at each stage of employment. Our first article covered what employers should be considering when procuring ‘off the shelf’ AI tools, and subsequent articles will go on to consider the use of AI in the first stages of employment – i.e. attracting, recruiting and onboarding talent – the middle stages of employment – i.e. management and retention – and the final stages of employment – i.e. exits and termination.
To discuss any of the issues raised in this series, please contact Anne Todd (Commercial: Technology & Data Protection) and Lynsey Blyth (Employment and Immigration).
AI and the Employment Life Cycle: Attracting, recruiting and onboarding talent
With ongoing reports of businesses still struggling to recruit skilled staff and HR teams expected to do more with less, it’s no surprise that organisations are looking to use artificial intelligence (AI) to help recruit new talent. It’s a similar story for onboarding – a time-intensive and often repetitive process for HR, though one that is critical to success, given that organisations with robust onboarding processes have a new hire retention rate 82% higher than their peers.[1] There’s no doubt that AI tools can add significant value, save time and ensure consistency of approach; however, they need to be used with caution.
How can you use AI during the talent acquisition stage?
1. Attracting: AI can be used to draft job adverts, identify skills gaps to facilitate targeted recruitment and conduct market analysis to identify trends.
2. Recruiting: AI can be used to conduct CV screening, schedule interviews, prepare and deliver online assessments and provide detailed analytics to support hiring decisions.
3. Onboarding: AI can be used to assist with paperwork (though be very careful if using AI to prepare employment contracts), produce personalised onboarding programmes and offer chatbots to give new staff access to 24/7 support.
What are the legal implications and risks involved?
Before deploying AI for the purposes of talent acquisition, you should consider how you will comply with your legal obligations. As well as the requirements of employment, equalities and data protection law, the EU AI Act may also apply (for example, where staff are based in the EU or where you are looking to recruit staff from the EU). Failure to address these requirements before AI is deployed, and on an ongoing basis throughout the lifetime of the AI, can lead to substantial regulatory fines and legal claims brought against you by individuals who have suffered harm. It can also cause serious reputational damage.
Particular legal risks at the recruiting and onboarding stage include the following:
• Bias and discrimination. You should be very careful to ensure that the data used to train the AI tool does not give rise to a risk of bias and discrimination. It’s been widely reported that Amazon had to scrap its AI recruitment tool because it discriminated against women. The model was trained on previous patterns in CVs submitted to the company, most of which came from men, and so the AI tool ended up penalising CVs which mentioned things like ‘all-women’s college’ or ‘women’s chess champion’.
If you’re using AI to schedule interviews, you should be mindful of your duty to make reasonable adjustments for disabled applicants and ensure that this is built into any automated process. In the recent case of AECOM v Mallon,[2] the Employment Appeal Tribunal held that an employer does not need to know the specifics of a disabled applicant’s difficulties to be under a duty to make reasonable adjustments during the application process.
• Misuse of biometric technology. Use of biometric technology should be approached with caution, and particular care needs to be taken to ensure that it does not result in bias, discrimination or a breach of data protection law. Biometric data is personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a person, which allows or confirms the unique identification of that person – for example, fingerprint scans, facial recognition data, iris scans and voice recognition data. Facial analysis technology can cause serious issues: there’s evidence that facial recognition is less accurate in identifying people with darker skin tones, especially women, and can also cause issues for applicants who are neurodivergent. There’s also evidence that certain AI technology could be biased against non-native English speakers. These issues present real discrimination risks.
• Failure to comply with data protection law. Under UK GDPR, when biometric data is used for the purpose of uniquely identifying a natural person, it is considered special category data, meaning you must have both a lawful basis for processing[3] and meet one of the conditions which allow processing of special category data.[4] As consent is rarely considered to be freely given in an employment situation, you must carefully consider what alternative conditions may apply before any biometric technology is deployed. Use of e-recruitment tools (e.g. automated screening of job applicants to create a shortlist based on certain criteria with no human oversight) is also subject to UK GDPR, and the UK’s Information Commissioner’s Office has identified automated decision making (ADM) and the use of AI in recruitment as a key focus area. When the UK’s new Data Use and Access Act comes into force, it will create a more permissive regime for ADM in wider circumstances unless special category data is involved. However, care will still be required to ensure that you have identified a lawful basis for the ADM and implemented appropriate safeguards to uphold the rights, freedoms and legitimate interests of the individuals concerned. This is an important area to keep under review.
• Failure to comply with the EU AI Act. For the purposes of the EU AI Act, some AI systems are banned altogether (see our earlier article here), and certain categories of biometric technology, as well as AI tools used for recruitment or to make employment-related decisions, are considered “high risk” technology. This triggers strict requirements with which employers deploying the AI, as well as the provider of the AI, must comply. Failure to comply can lead to substantial fines of up to €35 million or 7% of worldwide annual turnover. If you’re based in the UK, you should not assume that you are exempt: for example, if you have business operations in the EU, or if an AI tool is used to recruit staff from the EU, this will fall within the scope of the EU AI Act.
• Failure to meet employment contract requirements. Particular care is needed if generative AI tools are used to prepare employment contracts. In England, employers are required to provide employees and workers with a “written statement of particulars of employment” which must contain prescribed information to be legally compliant, so employment contracts should be checked to ensure these requirements are met and that other key terms – such as notice periods, restrictive covenants and confidentiality – are robust and legally enforceable. Publicly available GenAI tools such as ChatGPT and Copilot are not trained specifically on UK employment law, and their output cannot be relied upon to be legally compliant.
What safeguards do you need to put in place if using AI in this way?
While it is very tempting to jump straight in and use AI, we’d advise you to consider the following:
• Is the product you are using legally compliant? Many ‘off the shelf’ AI tools, particularly those developed abroad, will not comply with UK employment and data protection laws or the EU AI Act. This creates significant risk, and those tools should not be used until you are comfortable that they are compliant and won’t open you up to unforeseen liabilities. At the recruitment stage, employers should be particularly alert to discrimination risks. See our article here for more information.
• Do the individuals using these tools have appropriate training? Employees who will be permitted to use AI tools for recruitment and onboarding should have appropriate training on how the tools should (and should not) be used, their limitations and the rules for appropriate use, as well as general data protection training. While the AI provider may offer training, it is ultimately your responsibility as the employer deploying the AI to ensure that your staff using the tools have had appropriate training. The EU AI Act requires all deployers of AI to ensure the AI literacy of all staff and other persons deploying AI on their behalf.[5]
• Do you have a policy in place to cover the use of AI at the recruitment stage? If you don’t have an AI policy in place, you should introduce one. This should cover the permitted uses of AI, what is not permitted and the rules that must be adhered to. For example, it should be mandatory that if HR is using AI to generate job adverts, a named individual in the team has responsibility for critically reviewing the draft AI-generated advert and approving it. Equally, if AI has undertaken a shortlisting exercise, this should only be used as a guide or suggestion, and someone in the HR team should have responsibility for following or rejecting (with reasons) the AI’s suggestions. There should also be rules prohibiting the entry of confidential or personal data into AI tools (unless the tools have been specifically approved by your Data Protection Officer). Other related policies, such as your privacy notice(s) and data protection policies, will need to be updated to explain what data you collect, why you collect it, on what legal basis you process it, and how you handle and store it.
• Have all data protection requirements been considered? Have you been transparent with candidates – do they know that AI is being used, what personal data will be used, how it will be used and who it will be shared with? Have you identified a lawful basis for processing the personal data under UK GDPR, and if you are using biometric data during the recruitment process, have you satisfied yourself that you have met the requirements of UK GDPR? Similarly, if any e-recruitment practices involving ADM and AI are being used, have you considered UK GDPR obligations – have you informed candidates and given them rights to object and request human intervention?
• Have the EU AI Act requirements been considered? If you are using AI and your organisation operates across the EU, or if you may recruit candidates from the EU, you must satisfy yourself that you have met and will continue to meet the requirements of the EU AI Act. This includes verifying that the training data meets the requirements of the Act, that there is human oversight of the AI, that the AI is used in accordance with the provider’s instructions, that records are kept and that the AI is monitored on an on-going basis with any incidents, malfunction or risks being reported.
• Are appropriate safeguards in place? Do you know what data has been used to train the AI? This is particularly important from a bias and discrimination risk perspective as well as for compliance with data protection law and the EU AI Act. Have you conducted equality impact assessments? It’s important to remember that the same technology can have different impacts on different people, particularly those sharing protected characteristics. How will you ensure that anything produced by AI will receive appropriate human oversight and scrutiny? Do you have a procedure which must be followed if HR is using AI to recruit? How often will you conduct audits of AI tools to check for bias/discrimination? Do you have processes in place to monitor the use of AI recruitment tools to identify issues and address them?
In short, there is a wide range of issues to consider at the outset before implementing AI to help with recruitment and onboarding. If a candidate raises a concern relating to the recruitment process – e.g. challenging a decision not to offer them a job – a tribunal will have no sympathy for you seeking to blame AI for that decision, nor will the Information Commissioner or the relevant EU regulators for any breach of data protection law or the EU AI Act. Ultimate responsibility (and liability) lies with you, as the employer, and so AI is used at your own risk. Understanding where and how AI is currently being used in your organisation, conducting thorough legal due diligence before procuring new AI tools, undertaking impact assessments, putting appropriate safeguards, policies, guidelines and training in place, and auditing AI tools regularly will mitigate the risks of using them and will create the environment to enable you to take advantage of the efficiencies they bring.
We offer solutions-driven advice to clients to ensure their use of AI is robust and compliant. Should you wish to discuss this further, please contact Anne Todd (Commercial: Technology & Data Protection) and Lynsey Blyth (Employment & Immigration).
To access the other articles in this series, please see below:
Key considerations when acquiring “off the shelf” AI HR tools
[1] According to Brandon Hall Group research.
[2] AECOM Ltd v Mr C Mallon [2023] EAT 104
[3] Article 6 UK GDPR
[4] Article 9 UK GDPR
[5] Article 4 EU AI Act