On 29 March 2023, the UK Government published a white paper detailing its ‘pro-innovation approach to AI regulation’ (the ‘White Paper’).
The White Paper will be of interest to anyone developing, using or considering using AI tools in the near future – not least because AI use is ubiquitous and therefore impacts all industries.
We will be responding to the White Paper ahead of the consultation window closing. We are therefore interested in hearing your comments and concerns on the Government’s proposed approach.
Whilst the EU has so far taken the lead with new regulation to address the challenges AI presents (the EU AI Act and related legislation are expected to be published in final form in the very near future), the UK Government has confirmed that it intends to take an entirely different approach.
Scope of the framework
The UK Government has stressed that its objective is to regulate the use of AI technology, rather than AI technology itself.
The AI framework is intended to be proportionate and to drive growth and prosperity, increase public trust in AI and strengthen the UK’s position as a global leader in the field.
Definition of AI
Whilst the EU has sought to define AI so as to then classify it into different risk-categories, the White Paper defines it solely by reference to two characteristics: autonomy and adaptability.
In doing so, the Government hopes to ensure that its AI framework is futureproof and capable of encompassing technologies not yet known that exhibit those same characteristics. In addition, the Government has confirmed that it does not intend to define specific rules for facial recognition or Large Language Models (LLMs).
Given that the EU has had to consider several iterations of its definition of AI over the past few years, it will be interesting to monitor which of the two approaches proves more effective in providing certainty to businesses, users and regulators whilst, at the same time, ensuring that innovation is not stifled and that certain AI technologies are not left out.
Principles and next steps for regulators
The key point that emerges from the White Paper is that the Government does not intend to introduce new legislation or indeed new regulators (although it does not rule this out further down the line).
Instead, the UK’s AI framework will be based around the following key principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress. These are in line with general responsible AI principles that we have seen outlined by various institutions.
Importantly, the principles will not be statutory. Instead, they will be implemented by existing regulators in a non-statutory way, although at a later stage the Government may introduce a new duty requiring them to have due regard to the principles.
Regulators will therefore lead the implementation of the framework by issuing guidance on best practice. We have already seen the Information Commissioner’s Office (ICO) issue specific guidance on AI. The White Paper envisages regulators issuing joint guidance, particularly in relation to areas such as e-recruiting that give rise to data protection, privacy, employment and human rights concerns.
Central government will be expected to deliver functions, such as monitoring and cross-sectoral risk assessments, in order to ensure regulatory coherence. It is possible that these central risk functions may be delivered by an independent body longer term.
Sandboxes
In an attempt to build on the success of the digital sandboxes piloted by the ICO and Financial Conduct Authority (FCA), the White Paper currently envisages a sandbox scheme for a single sector, with multiple regulators involved. The aim of the sandbox is to consider what regulatory issues may arise from the use and deployment of an AI product/tool and how effective regulatory measures might be in addressing these.
Businesses developing particularly innovative products – depending on the sector the government decides to focus on – may well be interested in applying in due course.
Territorial application
The AI framework will apply to all of the UK and crucially will not change the territorial applicability of existing legislation relevant to AI (which, for example, includes the UK GDPR / DPA 2018 and the Equality Act 2010).
This is in contrast with the extra-territorial application of the EU AI Act which is concerned with whether or not outputs produced by an AI system are used in the EU / affect subjects within the EU, regardless of where the company developing and selling the tool is based.
What does this mean for companies working in the AI lifecycle? Is decoupling from the EU possible?
In its White Paper, the Government acknowledged that AI is currently regulated by a patchwork of legislation and that, without cross-collaboration between regulators, there is a risk of conflicting or uncoordinated regulations which may put undue burdens on businesses or leave important gaps.
The Government’s flexible and principle-based approach may, prima facie, be welcomed by businesses, which will receive specific guidance from regulators with knowledge and expertise of the sectors they regulate.
However, depending on the volume of guidance issued by regulators and on whether or not they cross-collaborate, businesses may find themselves having to keep on top of multiple best-practice guidance notes in addition to ensuring compliance with existing legislation (e.g. data protection). This may cause additional burdens in circumstances where there may be no single dedicated hub or where a flexible definition of AI may lead to more confusion as to whether or not guidance is applicable to them.
Lastly, given that the EU AI Act will have extra-territorial application, the Government’s flexible approach to AI regulation may be of little help to businesses that plan on selling their AI tools/services into the EU and/or maintaining an EU user base, as they will need to adhere to what appear to be more stringent regulatory standards in any event or risk serious fines and bans.
The Government has noted that it will continue to work on global interoperability, for example through bilateral AI engagement with the EU, Canada, the US and Singapore, and through international bodies such as the OECD and UNESCO.
What should you do now?
If you are planning on using or deploying AI tools (whether built in-house or supplied by third parties), it is important to have appropriate systems in place to vet the tool and ensure it complies with all applicable legislation currently in force. This will necessarily involve a holistic review of the tool and its intended use.
Whilst the EU AI Act and further UK Government guidance are pending, it is important for you to ensure that, to the extent possible, you are satisfied that your use of any AI tool can comply with, for example, data protection regulation and would be capable of satisfying the Government’s key principles.
Consultation period
As mentioned, we will be responding to the consultation and are very interested in hearing from you before we submit our comments.
Following the consultation period, the Government will publish its response and issue cross-sectoral principles to regulators. The Government is also expected to publish its AI Regulation Roadmap.
We will be reporting regularly on these developments so stay tuned for more.
For further information, please contact:
Giulia Trojano, Partner, Withersworldwide
giulia.trojano@withersworldwide.com