Technology and Innovation

Artificial Intelligence in the workplace – managing legal risk to maximise opportunities

This article provides an overview of some of the potential legal considerations and risks for employers to contemplate when seeking to deploy artificial intelligence in the workplace. It mainly focuses on employment law considerations, but also touches on other key issues such as data protection and generative artificial intelligence.

Most organisations already use artificial intelligence (AI) in some form or another, including to support workplace and HR functions – whether through email spam filters, auto-complete functions or preliminary CV screening at the recruitment stage. Recent technological advancements, and the ease of access to generative AI (GenAI), are accelerating the pace of adoption of AI and new AI-based products, and there is a whole host of attractive AI solutions available to improve workplace efficiencies.

This note is not intended to cover all aspects of AI and the associated legal risks, but it aims to give you a brief overview of some of the key legal considerations and risks for employers seeking to deploy AI in the workplace. The requirements of each organisation will vary, and organisations may also be subject to sector-specific regulatory requirements, so in this note we have reflected on some of the core considerations which will be common to all organisations.

What laws regulate AI at work, and will new laws be introduced?

There is currently no specific UK legislation regulating the use of AI generally, nor the use of AI in the workplace. The House of Commons Library recently published a report reviewing the interaction between AI and current employment legislation, which can be accessed here. Existing laws relating to data protection and equality, together with common law principles such as the mutual duty of trust and confidence, will all be relevant when considering how AI can be used at work, but there is no single piece of legislation governing its use.

In light of the UK Government’s White Paper, released in March 2023, it is unlikely that any new UK legislation will be introduced on this topic in the immediate future. Instead, the government will support existing regulators to control these technologies in their respective sectors using a principles-based approach to AI regulation, focussing on the following guiding principles (which will not be statutory): safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

As such, it currently falls to existing legislation to govern the use of AI in the workplace. Given AI has the potential to touch on almost every aspect of the employment relationship, it’s important that organisations are familiar with the current legal landscape and how it interacts with AI at work, so that they can maximise opportunities and mitigate risk. Organisations with a workforce based outside the UK will also need to consider the legal landscape in each of their countries of operation.

What legal risks do we need to consider if we wish to introduce AI into the workplace, and how do we manage such risks?

There is no doubt that AI can be extremely positive and help businesses operate more efficiently, but there are a number of relevant risks to consider. We’ve highlighted a few of the key ones below.

Discrimination

Perhaps the most commonly reported criticism of AI, in an employment context, is the discrimination risk. It’s widely acknowledged that AI tools can exhibit biases – this could be because the training data an AI system learns from is unbalanced (for example, if the majority of the data comes from one predominant group, the output may adversely affect groups that are less well represented) or because that data reflects past discrimination. Discrimination may also arise from the way in which AI is used, or because an employee hasn’t reviewed the output sufficiently.

There’s the potential for AI to be discriminatory on the basis of any protected characteristic. For example, it’s been widely reported that in 2014 Amazon developed a recruitment tool which aimed to screen CVs to identify the strongest candidates, but it had to be shut down because it discriminated against female candidates.[1] Uber Eats’ use of facial recognition technology to verify the identity of workers when they log onto a shift has been the subject of allegations of race discrimination, after concerns were raised that the system didn’t work fairly for people of colour and failed to recognise them. Furthermore, a recent study by the University of East Anglia suggested that the ChatGPT model (built by OpenAI using large language models) had a ‘significant and systemic’ tendency to return politically left-wing responses when questioned.[2]

In terms of liability, employers can be vicariously liable for the actions of their employees in certain circumstances – this would extend to the use of AI. Further, if an employer adopts AI systems which are inherently biased or discriminatory, they will be liable for any resulting discrimination.

It’s important to evaluate the potential discriminatory effects of any AI tool you are considering before introducing it. The use of AI may well amount to a ‘provision, criterion or practice’ which, if it has a disproportionate impact on people with a protected characteristic, could result in indirect discrimination claims. Further, if AI is applied in a blanket fashion to decisions affecting disabled employees, there may well be claims for failure to make reasonable adjustments and/or discrimination arising from disability.

It’s therefore crucial that an organisation understands how the AI model operates and the intended outcomes before it is introduced; it is also important to ensure that the data sets used to train the AI have been verified and that biased data has been removed.
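By way of illustration only, the kind of pre-deployment evaluation referred to above can start with something as simple as comparing selection rates between groups in a tool’s historic output. The short Python sketch below applies the widely used ‘four-fifths’ rule of thumb to hypothetical CV-screening results; the group labels, data and 0.8 threshold are assumptions for the purposes of the example and are not drawn from any particular tool or legal test.

# Minimal sketch (illustrative only): adverse-impact check on hypothetical
# CV-screening outcomes, using the "four-fifths" rule of thumb.
from collections import Counter

# Hypothetical screening results: (group label, shortlisted?)
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

shortlisted = Counter(group for group, ok in results if ok)
totals = Counter(group for group, _ in results)

# Selection rate for each group = number shortlisted / total applicants in that group
rates = {group: shortlisted[group] / totals[group] for group in totals}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "review for possible adverse impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")

A check of this kind is no substitute for a proper equality impact assessment or legal advice, but it illustrates how the ‘disproportionate impact’ question can be tested in practice before a tool is rolled out.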

Contracts with AI providers should specify the requirements for the training data and for testing the AI to address bias before it is deployed. They should also set out the steps the supplier will take if, after deployment, it becomes apparent that the AI displays bias or produces output which causes discrimination.

Appropriate training on how the AI system works and how to interpret the output should be provided to any individual who is going to use it. We’d also recommend that you introduce a policy which governs acceptable use of AI: setting out clear guidelines and expectations should help mitigate risk. For more information on introducing a Generative AI policy, see our recent article. Finally, it goes without saying that any decision produced by AI must be subject to careful human review by an individual with appropriate skills and qualifications.

Lack of transparency

Transparency is a key AI guiding principle and also forms part of the GDPR data protection principles (see below). Reliance on AI can result in a lack of transparency, and that ‘void’ makes it harder for employers to justify decisions and harder for employees to challenge them. Indeed, in some cases, employees may not even be aware that AI has been used in their employer’s decision-making process.

If an employer is using AI in the workplace, they need to be clear and transparent about what AI they are using, when and for what purpose they are using it, and the impact of its use on their employees. There must be a legitimate reason for the use of AI, and it must be proportionate. In addition to the data protection risk, failing to tell employees about uses of AI which affect them undermines the mutual duty of trust and confidence between employer and employee and is likely to lead to significant employee engagement issues. Indeed, in serious cases, it could amount to a fundamental breach of contract, resulting in an employee resigning and claiming constructive dismissal.

Once AI has been introduced, an employer will also need to be able to justify any decision made in reliance on it. This will inevitably involve the employer (i.e. the individuals making decisions on behalf of the business) having a sufficient understanding of the AI model to explain the decision. If a manager is not able to understand how the AI operates, they are not going to be able to explain their decisions, nor demonstrate that a decision was rational, fair and made in good faith. Again, this feeds into the need for mutual trust and confidence in the employment relationship.

To reduce the risk of unfair decisions, an individual with appropriate skill and expertise should always have final oversight of, and the final say on, any decision. It goes without saying that the individual should also have training on how the AI works and how to interpret the output data. Being unable to explain or justify a decision is not only going to damage the relationship with an employee; in the absence of a rational explanation, it may also lead the employee to believe that the decision has been made for other reasons (which may be discriminatory or otherwise legally unfair).

In practice, you will need to ensure that there is a user manual for managers, which is kept updated as the model develops. It will also be important to maintain a record-keeping repository to help explain decisions if they are subsequently challenged.

Changes to the workforce

The introduction of AI has the potential to fundamentally change the way the workforce operates.

In some cases, AI might lead to roles being eliminated and a redundancy process ensuing. On the other hand, it could lead to a recruitment drive with new jobs being created or staff being upskilled. It may also lead to roles being amended over time.

It’s therefore not out of the question that restructures and reorganisations may be needed as AI technologies become embedded in organisations. This will need to be handled fairly and sensitively, and meaningful consultation will need to be undertaken to reduce legal risk. Engaging with staff (and unions, if relevant) and being transparent about the introduction of AI should help tease out any issues and balance competing interests.

Impact on staff wellbeing

There are legitimate concerns about the potential for AI to have a detrimental impact on employee health. There is no doubt that the introduction of AI can be positive and aid employees in their roles, particularly by making processes more efficient and reducing the administrative burden. However, there are concerns that if every administrative aspect of a role is removed, work could intensify in a way which risks mental or physical health. Further, if employees are constantly monitored, tracked and micro-managed by technology, this can lead to increased stress and have a negative impact on mental health. This could become even more pronounced if technology trackers are used on portable devices, which could severely impact work-life balance. There is also the widely publicised concern that the widescale introduction of AI could start to render some jobs obsolete, which has, of course, already led some to fear for their livelihoods.

Given the focus on mental health at work and employee wellbeing, this will need to be handled sensitively by employers. As with workforce changes, early engagement and transparency with staff (and unions, if relevant) about the introduction of AI should help tease out any issues and balance competing interests.

Does AI present any particular data protection issues?

Most AI used in relation to the workforce involves the processing of a significant volume of personal data, including special categories of personal data.[3] Organisations seeking to deploy AI should therefore pay careful attention to data protection requirements well in advance of deploying the technology. The seven GDPR principles (lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality (security); and accountability) apply whenever personal data is processed, and they overlap with many of the AI guiding principles.

The processing of personal data by AI is generally considered to present a higher data protection risk, and AI systems which process special category data, or which involve foundation models,[4] can present even greater risks, so careful analysis should be carried out before deployment. The ICO recommends that a data protection impact assessment (DPIA) is carried out whenever an AI system involving the processing of personal data is deployed. A DPIA will enable you to assess the risks and how they can be mitigated. The Information Commissioner has produced helpful guidance and a toolkit[5] for organisations seeking to deploy AI.

Each AI system will present a different level of data protection risk, so DPIAs should be carried out for each one and revisited on a regular basis throughout the lifetime of the system, as the risks may evolve over time. You should also consider how the operation of the AI system can be suspended if it transpires that there is a risk of, or an actual, breach of data protection.

Legitimate Interest Assessments are also of critical importance to ensure that there is a lawful basis for the processing of employees’ personal data; as is widely acknowledged, there can be risks in relying on consent, given the power imbalance in the employer-employee relationship.

Consistent with the AI transparency requirements, information needs to be given to your employees about the processing of their data, so you will need to consider what updates may be required to employee policies and manuals, as well as whether any further notices are required at the point of data collection.

Additional rules apply where an employer carries out solely ‘automated decision-making’ or ‘profiling’ that has a legal or similarly significant effect on employees.[6] Under the GDPR, solely automated decision-making is generally prohibited, save in limited circumstances.[7] This topic is outside the scope of this note, and we recommend that advice is taken in any circumstances where there may be automated decision-making.

AI systems which involve biometric data, such as facial recognition systems, present further risks and require additional consideration. The ICO has issued draft guidance on the use of biometric data.[8] For further information about the risks involved, please also see our article on the use of biometric technology.

In view of the complexities and inherent data protection risks of using AI in the workplace, we recommend that your Data Protection Officer is involved from an early stage.

Does Generative AI present other risks?

GenAI, such as ChatGPT and Google’s Bard, which can be used to generate content and provide ‘human-like’ answers to complex questions, presents many opportunities and potential efficiencies which organisations will be keen to explore. However, because GenAI operates by inference and prediction, and because of the immense quantities of data involved in building the models which underlie it, the legal risks are further amplified. Very careful consideration of the risks, mitigations and safeguards should be undertaken before GenAI is deployed in connection with HR functions.

We will comment further on foundation models, large language models and GenAI in future but, in the meantime, for more information about introducing a Generative AI policy, you can read our article.

For advice on the data protection implications of AI please contact our Data Protection & Privacy team and for advice on the particular issues to take into consideration when contracting for the procurement of AI please contact our Technology & Innovation team.

If you’d like to discuss any of the employment/HR related issues raised in this note, please contact Lynsey Blyth.

This article is for general information only and does not, and is not intended to, amount to legal advice and should not be relied upon as such. If you have any questions relating to your particular circumstances, you should seek independent legal advice.

[1] The AI algorithms were trained to observe patterns in the CVs submitted to the company over a 10-year period. As most of the CVs came from men, the algorithm learned a preference for male candidates and then discriminated against female candidates, penalising CVs which included the word ‘women’s’ (such as ‘women’s chess club captain’).

[2] The findings were published on 17 August 2023 in the journal Public Choice and were based on research carried out in Spring 2023, using version 3.5 of ChatGPT – more details of the study can be found here.

[3] Personal data revealing racial or ethnic origin; political opinions; religious or philosophical beliefs; trade union membership; genetic data; biometric data (where used for identification of a person); data concerning health; data concerning a person’s sex life; and data concerning a person’s sexual orientation.

[4] Foundation models and large language models are AI models trained on huge quantities of data and are often used as the base for building generative AI.

[5] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/

[6] ‘Automated decision-making’ is the making of a decision about an individual based solely on automated means without any human involvement, and ‘profiling’ is any form of automated processing of personal data to evaluate certain things about an individual.

[7] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/individual-rights/individual-rights/rights-related-to-automated-decision-making-including-profiling/

[8] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/guidance-on-biometric-data/data-protection-requirements-when-using-biometric-data/#doweneed2
