The AI Explosion
It’s probably no exaggeration to say artificial intelligence (AI) exploded into the public consciousness in late 2022 and early 2023.
ChatGPT, the AI chatbot from OpenAI, reached an astonishing 100 million monthly active users in January 2023, just two months after its launch, beating out TikTok (nine months) and Instagram (two and a half years)[1] in the time taken to reach that figure.
Not as fast, perhaps, but since their public release in 2022, Midjourney, Stable Diffusion from Stability AI, and DALL-E 2 from OpenAI have all attracted millions of users.
Capable of producing stunning artwork in seconds, generative AI technology has been used to create millions of images, as well as music, lyrics and articles.
The meteoric rise of AI has given new life to the age-old question of whether machines will eventually replace humans, this time in the art and creative spheres, and prompted dozens of lawsuits from those humans battling to establish clear guidelines about copyright.
Artists have sued over the alleged use of their work to train AI algorithms[2], raising the rather philosophical question of whether a machine is capable of creating art.
The answer has far-reaching, real-life consequences, particularly in the field of copyright.
Artists, AI and copyright
The generally accepted principle is that copyright laws aim both to encourage authors and artists to create novel works and to ensure that, having done so, they are able to receive fair compensation for their efforts.
This raises the question of whether work created by AI, which is not (yet) sentient and requires no reward or compensation for creating works of art, should be afforded the same copyright protections.
For the time being, the legal world has generally replied in the negative, maintaining that only work created by human authors can be protected by copyright:-
- The United States Copyright Office, in ruling that the Midjourney-generated images in the graphic novel Zarya of the Dawn could not be registered[3], affirmed that copyright does not protect works created by non-human authors;
- In the landmark Infopaq case (C-5/08 Infopaq International A/S v Danske Dagblades Forening), the European Court of Justice ruled that copyright only applies to original works reflecting the “(human) author’s own intellectual creation”;
- In Australia, the Federal Court of Australia ruled that phone directories authored by computers are not protected by copyright, notwithstanding the presence of some input from human editors[4].
Some countries, however, have decided to address this issue by attributing authorship and thus copyright of computer-generated work to the humans who programmed AI to generate the work. This interpretation was pioneered in the UK under section 9(3) of the Copyright, Designs and Patents Act 1988 (the “CDPA”), which states that:
“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”
In section 178 of the CDPA, computer-generated works are defined as works “generated by computer in circumstances such that there is no human author of the work”, thus acknowledging the possibility of works without human authors[5].
In passing the bill, the late Lord Young of Graffham, then Secretary of State for Trade and Industry, commented: “We believe this to be the first copyright legislation anywhere in the world which attempts to deal specifically with the advent of artificial intelligence…the far-sighted incorporation of computer-generated works in our copyright system will allow investment in artificial intelligence systems, in the future, to be made with confidence.”[6]
This piece of legislation demonstrated remarkable foresight on the part of UK lawmakers, considering the CDPA was drafted in 1987, when computers were just starting to become available to the general public.
Similar provisions soon found their way into the law books of jurisdictions strongly influenced by the UK legal system, such as Hong Kong, India and New Zealand. For example, section 11(3) of the Copyright Ordinance (Cap. 528) of Hong Kong provides that:-
“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author is taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”
On the face of it, these provisions, which will be referred to as the “Arrangement Model” in this article, seem to provide a simple and elegant solution to the conundrum posed by generative AI technology: whoever does the work of “preparing” an AI to create a work is the author and copyright owner.
It also seems to match the “sweat of the brow” intellectual property doctrine, which holds that whoever has the skill and puts in the time and effort to create a work deserves protection.
However, I would argue that the Arrangement Model does not adequately reflect how modern generative AI operates, and that it creates significant legal uncertainty.
This article will explore the major shortcomings of the Arrangement Model in attributing copyright to AI-generated works.
Prompts, algorithms and iteration
Broadly speaking, modern AI operates via “machine learning”.
It does not rely on direct instructions, carefully written into a program by a programmer, that provide precise steps for the machine to follow to complete a task.
Instead, the machine combines large amounts of raw data with iterative and intelligent algorithms to discern patterns in the data from which it can learn to complete the task without any direct input from a programmer.
The output can be improved by feeding further prompts to the machine, which “learns” by refining its data analysis to find more complex and efficient patterns, without the developers’ intervention or input.
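To make the contrast concrete, the following minimal Python sketch (purely illustrative, and not drawn from any particular AI product) compares a hand-written rule with a model whose behaviour is derived from example data by an iterative algorithm:

```python
# Illustrative sketch only: hand-written rules versus behaviour learned from data.
import numpy as np

# "Traditional" programming: the programmer writes the precise decision steps.
def is_spam_by_rules(message: str) -> bool:
    return "free money" in message.lower()

# Machine learning: the programmer writes a generic training procedure;
# the decision rule itself is derived from example data, not hand-coded.
def train_linear_model(X, y, steps=1000, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):                      # iterative refinement
        pred = 1 / (1 + np.exp(-X @ w))         # current guesses
        w -= lr * X.T @ (pred - y) / len(y)     # adjust weights based on errors
    return w                                    # learned parameters

# Toy data: two numeric features per example, labels 0/1.
X = np.array([[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]])
y = np.array([1, 0, 1, 0])
print("learned weights:", train_linear_model(X, y))
```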
This leads to the first problem under the Arrangement Model: how do we identify “the person by whom the arrangements necessary for the creation of the work are undertaken”?
Let’s say a user asks the machine to create a picture of a cat with an apple. They would type in a text prompt such as “Create a picture of a cat holding an apple.”
The machine would then draw on the references and pictures of cats, apples and cats holding apples it has analysed (typically scraped from the internet as training data), use the algorithms programmed into it to discern patterns in that data, and reproduce its own version of a picture.
Further prompts from the user, for example “create the picture in the style of Van Gogh”, would lead the machine to draw on its analysis of works by the artist Van Gogh, discern patterns in his painting style, and then attempt to reproduce those techniques in its own picture.
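To see where each party’s contribution sits in this workflow, here is a hypothetical, heavily simplified Python sketch; the function names (“load_pretrained_model”, “generate_image”) and the weights path are illustrative placeholders, not any vendor’s real API:

```python
# Hypothetical, simplified sketch of a text-to-image workflow.

def load_pretrained_model(weights_path: str) -> dict:
    # Developers' contribution: generic algorithms plus trained weights,
    # which were themselves derived from vast numbers of existing images
    # (the original artists' contribution).
    return {"weights": weights_path}

def generate_image(model: dict, prompt: str) -> str:
    # Stand-in for the generation step: a real system would sample an image
    # from the model; here we just return a placeholder string.
    return f"<image conditioned on: {prompt!r}>"

model = load_pretrained_model("weights/model-trained-on-scraped-artwork.bin")

# User's contribution: nothing but the text prompts.
print(generate_image(model, "a cat holding an apple"))
print(generate_image(model, "a cat holding an apple, in the style of Van Gogh"))
```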
All of this complicates answering the question of who made the necessary arrangements.
Is it the user who wrote the prompts? Is it the programmers who wrote the algorithms the computer used? Or is it the artists of the original pictures used by the machine in its data analysis?
Arguably, it is “all of the above”:
- The artwork would not be generated but for the text prompts entered by the user;
- The artwork cannot be generated if the developers/programmers have not written the algorithms;
- The artwork cannot be generated if no original pictures are available for the AI to reference and learn from.
It could be argued that all of the above, or at least the users and developers, could be joint authors or co-authors, but the present conceptions of “joint authorship” and “co-authorship” in copyright law all presuppose a certain degree of collaboration or common design, which is clearly absent in most cases involving generative AI works.
In most cases, developers of AI systems do not collaborate with users in any specific work. They may not have any idea what the users are generating using the AI tools they developed.
That AI programmes can operate autonomously without the developers’ input is the very purpose of developing AI technology in the first place. So either the definition of joint authorship or co-authorship will need to change, or the concept simply does not apply.
Algorithms, not creativity
A related problem with the Arrangement Model is that it may attribute authorship to people who have no creative input, or even creative intent, at all. Notably, the statutory wording, “the arrangements necessary for the creation of the work”, does not specify that those arrangements must be creative.
The role of developers in AI is largely about writing algorithms and providing data the machine can learn from using those algorithms. In most cases, developers are not responsible for generating the final work.
Since developers have no creative input in the end product and may not even have any intention to create any kind of artwork, it is arguable that attributing authorship to them runs contrary to the basic premise of copyright laws. By analogy, camera manufacturers do not claim copyright ownership over photographs taken by people using their cameras.
“Black boxes”
A third issue arises over the so-called “black box” problem.
The black box problem refers to the inability of humans to understand an AI’s decision-making process and, thus, to predict its decisions and output.
Often, the decision-making algorithms in present-day AI are so complex that even their creators do not fully understand them, let alone control them.
For example, the developers of Midjourney do not know what images will be generated from specific prompts, as they are unlikely to have had any particular prompts in mind when they wrote the algorithms.
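A toy sketch illustrates one source of this unpredictability: generation is typically stochastic, so even the person who wrote the sampling code cannot say in advance what a given prompt will produce. The function below is a made-up stand-in, not how any real image generator works internally:

```python
# Illustrative sketch: the same prompt can yield different results on every run,
# because the "creative" choices come from random sampling the developer never specified.
import random

def generate(prompt: str, seed: int | None = None) -> str:
    rng = random.Random(seed)   # unseeded runs draw from system entropy
    palette = rng.choice(["muted blues", "warm ochres", "high-contrast black and white"])
    composition = rng.choice(["close-up", "wide shot", "abstract swirl"])
    return f"{composition} of {prompt!r} in {palette}"

for _ in range(3):
    print(generate("a cat holding an apple"))   # typically different each time
```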
Allocating authorship and copyright to creators who may not even know what they have created is not only problematic in itself but could also expose them to unintended legal liability.
This may be why the terms of service of some AI operators, such as OpenAI, expressly provide that users own the content they generate using the services[7].
The root of the problem is that the Arrangement Model as it currently stands seemingly conflates “computer-aided” works with “computer-generated” works.
The former refers to works created with significant human intervention or direction: the computer is merely a tool used to express the author’s creative intent, and such works are already protected by standard copyright doctrine.
The latter refers to works generated with little or no human input, leaving copyright subsistence in question.
It is likely that “computer-aided” works were what lawmakers had in mind when drafting these provisions in the 1980s and 90s, when no generative AI existed.
Human intervention is not a black-and-white matter but lies on a spectrum, and advances in AI technology are increasingly pushing works towards the “computer-generated” end of that spectrum as the amount of human input decreases.
If either term is to serve any useful purpose, some threshold of creative input would need to be built in to determine where on the spectrum a given work lies, a threshold the Arrangement Model currently lacks.
This uncertainty is reflected in Nova Productions Ltd v Mazooma Games Ltd [2006] R.P.C. 14, one of the few cases in which section 9(3) of the CDPA has been applied in the courts.
The subject was a video game simulating a game of pool. Kitchin J (as he then was) held that section 9(3) applied to the frames displayed during play, which were composed from graphical assets (bitmap images) of a pool table, cues and balls, and that the programmer who wrote the game and set up those assets was their author by virtue of section 9(3).
Whilst the conclusion reached by Kitchin J is undoubtedly correct, it’s arguable whether section 9(3) was correctly applied.
The graphical assets are clearly artistic works which can be protected in their own right; the programmer created them and put them in the game, and is their undisputed author. It is therefore unclear why section 9(3) needed to be invoked at all.
The international dimension
To add to the confusion, there is an international dimension.
Most jurisdictions do not accord copyright protection to AI-generated works.
The international copyright framework under the Berne Convention for the Protection of Literary and Artistic Works does not prescribe a universal regime for copyright; it only mandates national treatment, that is, equal treatment within a jurisdiction for works created by foreign nationals.
This means that if Hong Kong recognises that AI-generated works are protected by copyright, it must accord equal protection to AI-generated works originating in all Berne Convention signatory countries.
This inevitably results in a system where AI-generated works are protected in some countries but not others, which could encourage forum shopping, with prospective copyright owners deliberately commencing proceedings in jurisdictions that recognise copyright in computer-generated works.
In 2021, the UK’s Intellectual Property Office called for views on copyright in works created by AI systems and on possible legal reforms, including the possibility of removing protection altogether.
Based on the responses received, the government agreed “the current approach to computer-generated works is unclear, and that there is a case for reconsidering it.”
However, it ultimately decided not to change the existing law on the grounds “(t)here is no evidence at present that protection for (computer-generated works) is harmful, and the use of AI to generate creative content is still in its early stages… the future impacts of this provision are uncertain. It is unclear whether removing it would either promote or discourage innovation and the use of AI for the public good.”[8]
Conclusion
AI development is still in its infancy, and inevitably, there are many open questions.
I would argue that according protection, where it is unclear whether rights exist or should exist, is contrary to the spirit of intellectual property law.
Given that copyright confers wide monopolistic rights on its owner, I would argue that in uncertain cases rights protection should be narrow and restrictive in scope. Abuse of copyright is already not uncommon under the current system, for example through unmeritorious take-down notices issued to online platforms, or platforms relying on algorithms to mechanically vet infringement claims without considering the applicable defences.
Given the influx of AI-generated works, these kinds of abuse are likely to rise in the future. Until a consensus on copyright ownership of AI-generated works can be reached, granting more rights than intended may do more harm than good.
Disclaimer: This article is provided for information purposes only and does not constitute legal advice.
For further information, please contact:
Anthony Leung, Haldanes
anthony.leung@haldanes.com
[1] Krystal Hu (2023), “ChatGPT sets record for fastest-growing user base – analyst note”, Reuters, as retrieved from https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ on 17 March 2023
[2] Stephanie Mlot (2023), “Artists Sue AI Art Generators for Copyright Infringement”, PC Magazine, as retrieved from https://www.pcmag.com/news/artists-sue-ai-art-generators-for-copyright-infringement
[3] United States Copyright Office (2023), “Re: Zarya of the Dawn (Registration # VAu001480196)”, as retrieved from https://fingfx.thomsonreuters.com/gfx/legaldocs/klpygnkyrpg/AI%20COPYRIGHT%20decision.pdf on 17 March 2023
[4] Telstra Corp Ltd v Phone Directories Co Pty Ltd [2010] FCAFC 149
[5] Please however note that there is usually some distinction in treatment of computer-generated works and other works, such as a shorter duration of protection.
[6] Hansard, Copyright, Designs and Patents Bill [HL], HL Deb, Volume 489, 12 November 1987, as retrieved from https://hansard.parliament.uk/Lords/1987-11-12/debates/9b959a7b-172a-4e28-8676-1a6747b0f370/CopyrightDesignsAndPatentsBillHl on 17 March 2023.
[7] OpenAI Terms of Use (2023), as retrieved from https://openai.com/policies/terms-of-use on 17 March 2023.
[8] UK Intellectual Property Office (2021, 2022) “Artificial intelligence call for views: copyright and related rights”; “Artificial Intelligence and Intellectual Property: copyright and patents”, as retrieved from https://www.gov.uk/government/consultations/artificial-intelligence-and-intellectual-property-call-for-views/artificial-intelligence-call-for-views-copyright-and-related-rights and https://www.gov.uk/government/consultations/artificial-intelligence-and-intellectual-property-call-for-views/government-response-to-call-for-views-on-artificial-intelligence-and-intellectual-property; and https://www.gov.uk/government/consultations/artificial-intelligence-and-ip-copyright-and-patents on 17 March 2023