The AI revolution in the workplace is now truly upon us. As more and more companies begin to adopt and implement generative AI tools to enhance their businesses, we examine some of the key employment considerations employers should be mindful of when allowing or encouraging employees to use such tools at work.
The use of AI technologies for the recruitment and management of staff is clearly on the rise. A recent survey found that 75% of companies already use AI tools and technology for hiring and other HR purposes, and this trend is set to continue. The latest commentary has focussed very much on the protection of employees from the risks of bias and discrimination associated with the use of such technologies by HR teams, and this area is likely to see further regulation. Certainly, the AI Act, which is due to be implemented within the EU by the end of 2026, will ensure that the “high risk” use of such technologies in employment contexts requires, amongst other things, a degree of human oversight.
However, the dawn of the ChatGPT era has also accelerated the introduction of generative AI tools (a type of artificial intelligence system capable of generating text, images, or other media in response to prompts) in the workplace more generally, which presents yet another set of legal risks and challenges from an HR perspective. It seems unlikely that we will see substantive changes to employment legislation to address this development, which clearly has the potential to revolutionise the way in which a number of employees work. It is therefore important to consider at an early stage how the existing framework of employment laws and practice will apply to the use of such technologies in the workplace.
What are the key employment issues that may arise?
1. Job reduction
One of the most common concerns employees have in relation to generative AI tools is the risk that these will lead to a reduction in certain job roles. In other words, the fear is that these tools will in time displace the need for human involvement, based on factors such as price and convenience, especially as the quality of the outputs produced by such tools continues to improve. There is already data to show that some of the recent reductions in force have been driven, at least in part, by the current or anticipated use of AI tools in the workplace.
We’ve also started to see employees engaging in collective action in response to these concerns. Earlier this month in the US, Hollywood screenwriters went on strike, with one of their demands being the exclusion of AI tools as core writing software. The Writers Guild of America (WGA) wanted to limit the use of generative AI in writers’ rooms and ensure that any language generation models are used as a tool, rather than to create a final product, or even a developmental product. One writer and comic tweeted that those on strike were “fighting for nothing less than the survival of writing as a viable career”.
The extent to which the introduction of generative AI in the workplace constitutes lawful grounds for dismissal will vary from jurisdiction to jurisdiction depending on the definition and threshold for establishing a genuine redundancy situation. We are certainly likely to see employee representative bodies, trade unions and works councils take a keen interest in the introduction of any technologies that are aimed at replacing existing jobs in their entirety.
2. Changes to job roles
Beyond a wholesale reduction in the workforce, the more gradual impact of generative AI tools on employees’ day-to-day roles also needs to be considered. Increasing engagement with such tools at work will inevitably affect the tasks employees perform on a daily basis. This might range from basic upskilling, where the employee’s role remains the same but is supplemented by the use of certain AI tools adopted by the business, to a more radical, substantive change to the original role for which the employee was hired.
Given the sophistication of the generative AI tools now on offer, you can certainly foresee a situation where an employee’s job role changes over time, for example, from producing certain creative content themselves to curating relevant inputs and simply reviewing the outputs produced by particular AI tools. As with any substantial change to an employee’s duties, it would need to be considered whether the change is permitted without the express consent of the employee and whether employee consultation would be required to mitigate the risk of claims of constructive unfair dismissal or the equivalent. This is likely to depend on the degree to which the employee’s role is changing and the extent to which the employer can rely on their implied consent to the change, although this is unlikely to be possible across all jurisdictions.
3. Performance management
As generative AI tools begin to be introduced in the workplace, the question will certainly arise: what does a good performer look like? Is it no longer the most creative or productive employee but instead the employee that can curate the best output from a particular AI tool or tools? If that is the case, have all employees been given the training to allow them to develop such skills?
Where employees’ roles continue to evolve and change alongside the adoption of generative AI in the workplace, it will be increasingly difficult to measure their performance in the absence of clear guidance and objectives. For businesses embracing the use of such tools, this is likely to be a particular challenge. Where an employee is no longer performing the role for which they were hired, “good” performance will need to be more clearly defined. A willing employee who has embraced the evolution of their role might not put up much resistance to this, but an employee who is struggling with the new technologies is likely to insist that their performance should be measured against the role/job description for which they were actually hired.
From an employer’s perspective, the communication of role expectations and performance objectives will be critical in relation to employees who are being asked to embrace the use of AI tools as part of their day-to-day duties. It will be important that they know at every stage what is expected of them and that it is made clear where this departs substantively from their current role and duties, which (as above) may require a degree of consultation with them.
4. Misuse by employees
The potential for employees to misuse the generative AI tools made available to them also needs to be looked at closely. Given the consequences that businesses adopting such technologies may face, both in terms of financial liabilities and reputational damage, it is essential that the permissible and impermissible use cases are clearly defined and communicated to employees at the outset. Not only does this help to mitigate risks relating to the infringement of IP rights, breaches of confidentiality and possible data privacy violations (which can all arise as a result of the actions or behaviours of a single employee or group of employees), but it also helps to emphasise where the line will be drawn with respect to employee accountability for the use of such tools.
This is likely to remain something of a grey area, as AI technologies continue to be unpredictable due to their complexity and lack of human oversight. While generative AI tools are constantly being improved and developed, there remains an inherent risk that they might exhibit unexpected behaviours or make decisions with unforeseen consequences. It is unlikely that an employer will be able fairly or lawfully to hold its employees accountable for any and all inaccurate or biased outputs generated by a tool that the company itself has authorised for use.
While some unauthorised use cases may be obvious (take, for example, the editor of a German magazine who published an AI-generated ‘interview’ with Michael Schumacher), others may not be as clear cut. Where will responsibility lie in a permitted use case where the inputs were authorised by the business but the tool nonetheless generated an inaccurate or biased output? While the onus to review the outputs can in principle be placed on the employee, it is unlikely that they will pick up each and every such instance, especially where the tool is available to junior and senior employees alike. This continues to be one of the key risks associated with encouraging and sanctioning the use of generative AI tools at work.
5. Discrimination and bias
Much has been reported about the prospect of AI systems inadvertently perpetuating or amplifying societal biases due to biased training data or algorithmic design, and there have been a number of prominent examples of this.
So what is the position if an employee uses a company-sanctioned AI tool that generates a discriminatory output affecting customers and/or other employees? There will be a number of questions to ask: Was it a permitted use case? Was it due to the employee’s input into the tool? Was it due to a bias inherent in the tool itself? Did the employee fail to review the output with sufficient care and attention? Was the issue reported and did the company take action to review the use of the tool?
In the UK, as in many other jurisdictions, employers can be vicariously liable for discrimination perpetrated by their employees unless they can show they took all reasonable steps to prevent this. This is where training employees on how to use AI tools is likely to be very important in terms of mitigating risk and ensuring staff are clear on permissible and impermissible uses. In addition, for certain tools which require a company account or similar, it may also be appropriate to monitor the use cases centrally so as to be able later to establish the cause of a biased or discriminatory output. It is worth bearing in mind that the agreed position on contractual liability as between the business and any AI provider is unlikely to prevent claims of discrimination being brought by employees or customers, even if the business may ultimately have separate recourse against the provider under the terms of the relevant underlying contract.
6. Workplace activism and reporting channels
It’s no secret that workplace activism continues to be a prominent feature of the HR landscape. Employee expectations of their employer with respect to company culture, values and ethics remain high, and the importance of this from a talent attraction and retention perspective can’t be overstated.
We can expect this activism to extend to the ethical considerations of adopting generative AI tools in the workplace. Aside from concerns about job reduction, employees are likely to have concerns as to transparency, bias, fairness, equality and autonomy, amongst other issues. It is therefore key to ensure that the business has undertaken an open and transparent ethics assessment before introducing any perceived “high risk” tools to the broader employee population, as employees are likely to hold corporate stakeholders accountable for this.
As part of a comprehensive ethics strategy, it is also prudent for employers to establish clear reporting lines and responsible persons for any concerns that employees do have with respect to the use of, and/or outputs generated by, generative AI tools. From an employment perspective, it is helpful to encourage informal and regular reporting to mitigate the risk of employees using more public or formal channels to air their concerns, which is likely to give rise to greater reputational and legal risk.
7. Employee information and consultation
A final word on employee information and consultation. This is naturally jurisdiction specific, but where an employer is considering substantial changes to the ways in which employees work, perhaps with the introduction of new technologies and/or policies to address the same, information and consultation with employees or their representative bodies may be legally required. This should be considered with local counsel ahead of rolling out employee-facing AI guidelines or policies, as there may be additional obligations towards employees or their representatives in this regard.
How can you address these in an employee-facing generative AI policy?
- Are employees clear on the permitted use cases? Determine whether, and to what extent, to allow or encourage employees to use the generative AI tools accessible to them to perform their daily job functions. Employers should give thought to the potential opportunities and risks of using such tools and make clear the permitted use cases on a role-specific basis (where possible). Consider also expressly highlighting any prohibited use cases, which may include high risk situations and/or personal use.
- Which particular tools are employees permitted to use? Full autonomy is likely to be problematic as free AI tools are now being developed at speed and without consistent monitoring of output quality. Consider prohibiting employees from using high risk technologies and only permitting them to use tools that have been carefully vetted.
- Will training be provided? If allowing employees to make use of generative AI tools, consider providing training on how such tools should be used and outlining any restrictions or limitations. Emphasise the importance of responsible use, as well as transparency, safeguarding of confidential information and accountability.
- What level of supervision is required? Will the use of certain tools be monitored? Consider what level of autonomy employees should be given and whether this should vary depending on their seniority and capacity to review the relevant outputs. Confirm whether the use of certain tools or particular use cases should be supervised and, if so, by whom. Outline any monitoring of the tools available and be transparent about how this will be carried out.
- How and to whom should employees report any concerns regarding bias and/or inaccuracy? Consider establishing separate reporting procedures for concerns arising from AI outputs to avoid these being blurred with more formal channels such as whistleblowing hotlines. Encourage employees to report concerns early on and informally to designated responsible persons so that these can be resolved in a timely manner and the policy adjusted accordingly.
- What are the consequences of unauthorised use or misuse of AI tools by employees? Make clear the consequences of misuse or unauthorised use of the tools made available. State expressly that any breach of the policy may give rise to disciplinary action, up to and including dismissal. Remind employees that other employee policies, such as data security, information technology and disciplinary procedures, will continue to apply as normal. Place the onus on employees to check with their manager where they are unsure whether a particular use is authorised.
For further information, please contact:
Furat Ashraf, Partner, Bird & Bird
furat.ashraf@twobirds.com