The UK Science, Innovation and Technology Select Committee (which recently conducted an inquiry into the impact of AI on several sectors) has published The Governance of Artificial Intelligence: Interim Report (pdf here). The report identifies 12 challenges of AI, including one for intellectual property, and recommends legislation during this parliament (ie before the general election due in 2024). The Select Committee expresses concern that the UK will fall behind if there are delays, given the moves the EU and US have already made to regulate AI. On IP, it recommends that where AI models and tools make use of other people’s content, policy must establish the rights of the originators of this content, and these rights must be enforced.
The need for regulation now and the establishment of an international forum on AI: The report encourages the UK Government to go directly to legislation on AI regulation rather than apply the approach set out in its white paper of March 2023. The white paper used five principles to frame regulatory activity and guide the future development of AI models and tools, and their use – but these principles were not to be implemented via statute; instead they were to be “interpreted and translated into action by individual sectoral regulators, with assistance from central support functions”.
The report recognises that although the UK has a long history of technological innovation and regulatory expertise, which “can help it forge a distinctive regulatory path on AI”, the AI white paper is only an initial effort to engage with AI regulation, and its approach risks the UK falling behind given the pace of development of AI, especially in light of the efforts of other jurisdictions, principally the European Union and United States, to set international standards.
The report suggests “a tightly-focussed AI Bill in the next King’s Speech would help, not hinder, the Prime Minister’s ambition to position the UK as an AI governance leader. Without a serious, rapid and effective effort to establish the right governance frameworks—and to ensure a leading role in international initiatives—other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer.”
An international summit on AI safety, expected to be held in the UK in November or December, will also be key, and the report recommends that invitations be extended to as wide a range of countries as possible to create a forum “for like-minded countries who share liberal, democratic values, to ensure mutual protection against those actors—state and otherwise—who are enemies of these values and would use AI to achieve their ends.”
12 essential challenges of AI identified: The report identifies twelve essential challenges that AI governance must address if public safety and confidence in AI are to be secured, with intellectual property appearing as challenge 8:
- The Bias challenge. AI can introduce or perpetuate biases that society finds unacceptable.
- The Privacy challenge. AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.
- The Misrepresentation challenge. AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.
- The Access to Data challenge. The most powerful AI needs very large datasets, which are held by few organisations.
- The Access to Compute challenge. The development of powerful AI requires significant compute power, access to which is limited to a few organisations.
- The Black Box challenge. Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.
- The Open-Source challenge. Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.
- The Intellectual Property and Copyright Challenge. Some AI models and tools make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced.
- The Liability challenge. If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.
- The Employment challenge. AI will disrupt the jobs that people do and that are available to be done. Policy makers must anticipate and manage the disruption.
- The International Coordination challenge. AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.
- The Existential challenge. Some people think that AI is a major threat to human life: if that is a possibility, governance needs to provide protections for national security.
In relation to challenge 8 on Intellectual Property and Copyright, the report comments that “Some AI models and tools make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced”. It notes that whilst the use of AI models and tools has helped create revenue for the entertainment industry in areas such as video games and audience analytics, concerns have been raised about the ‘scraping’ of copyrighted content from online sources without permission.
The report refers to “ongoing legal cases” (unnamed, but likely a reference to Getty v StabilityAI) which are likely to set precedents in this area. It also notes that the UK IPO has begun to develop a voluntary code of practice on copyright and AI, in consultation with the technology, creative and research sectors – guidance which should “… support AI firms to access copyrighted work as an input to their models, whilst ensuring there are protections (e.g. labelling) on generated output to support right holders of copyrighted work”. The report notes that the Government has said that if agreement is not reached or the code not adopted, it may legislate.
The withdrawal of the proposed text and data mining exception, following pressure from the creative industries, is noted, as is the comment from other parties that this now “… prevents the UK from capitalising on the diverse, agile and creative benefits that AI can bring to the UK’s economy, its society and its competitive research environment”.
On the Liability challenge (9) the report considers that if AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.
For further information, please contact:
Rachel Montagnon, Herbert Smith Freehills
rachel.montagnon@hsf.com