24 September 2020
In this article, we consider how an AI system or the person(s) responsible for that system might be held accountable under the Copyright Act 1968 (Cth) (Copyright Act) for infringement arising during operation of the AI system or as a result of works produced.
AI systems can already create works mimicking a rapper's style of rapping, a painter's style of painting, and a writer's style of writing.
To date, the dialogue concerning AI-created works has focused on whether copyright subsists or should subsist in AI-created works. An equally interesting issue, which has not been so widely discussed, is who should be held accountable where copyright infringement occurs in the course of the operation of an AI system. Can an AI system be sued? If not, who can a copyright owner sue for copying which occurs in an AI system's inputs or outputs?
You cannot currently sue an AI
Human authorship is a key premise in the Copyright Act and presents a challenge to copyright protection over AI-created works (see our article on this topic here). Given this, it is unsurprising that the infringement provisions in the Copyright Act also refer to infringement by a "person" (defined in section 2C(1) of the Acts Interpretation Act 1901 (Cth) as including a body politic or corporate as well as an individual).
This strongly suggests that, at this point in time, it is not possible for a copyright owner to commence proceedings against an AI system for copyright infringement. Further, as AI systems do not currently have legal personality under Australian law, and cannot currently own assets, there does not appear to be any utility in seeking to bring proceedings against them.
If you cannot sue the AI, then who can you sue? Will such proceedings be successful?
Creating an effective AI system is a complex endeavour which often involves a large number of people or teams with varying roles and levels of contribution. There are the people responsible for "training" the AI by inputting data, which could potentially comprise copyright works. There may be different people who orchestrate the AI's output. Both groups of people, together with any overarching company employing them, could potentially be sued for copyright infringement arising from the AI system's operation.
The inputs
AI systems are not autonomous and, in order to enable them to function, they need to be trained using data linked to an eventual goal or desired output. If the data used to train an AI system are copyright works, and those works are used to train the system without seeking the consent of the copyright owner, the people responsible for "training" the AI may be found to infringe copyright, for example, by reproducing each copyright work in a digital form.
However, is it possible for the people who enable an AI system to function to rely on any of the exceptions to infringement in the Copyright Act?
Section 40(1) of the Copyright Act provides that "fair dealing" with a work or an adaptation of a work for the purpose of "research or study" does not constitute infringement of copyright. On this basis, some may argue that the machine learning being conducted by the AI system is for the purpose of research or study.
However, such an argument is likely to be met with some difficulty. Historically, the research or study exception has been given a narrow interpretation. In particular, the purpose of the copying must be:
- "research"; that is, diligent enquiry or investigation into a subject so as to discover facts or principles; or
- "study"; that is, acquisition of knowledge or thorough examination or analysis of a particular subject (see, for example, De Garis & Anor v Neville Jeffress Pidler Pty Ltd (1990) 18 IPR 292 at 298).
Further, having the purpose of research or study is not sufficient by itself to enliven the exception. The use must also be a "fair dealing". Whether the use constitutes a "fair dealing" is determined by considering:
- the purpose and character of the dealing;
- the nature of the work;
- the possibility of obtaining the work within a reasonable time at an ordinary commercial price;
- the effect of the dealing upon the potential market for, or value of, the work; and
- the amount of the work copied.
It is unlikely that an AI system will be conducting machine learning for the sole purpose of discovering or investigating facts or principles in relation to a subject. Rather, the AI system is more likely being trained for the purpose of creating outputs with a commercial utility. In these circumstances, reliance on the research or study exception is likely to be challenging.
Accordingly, people who use copyright works to train an AI system, which results in the AI system copying the whole or a substantial part of a copyright work, will likely be liable for copyright infringement. Under the principles of authorisation and joint tortfeasance, the company which employs those individuals may also be jointly liable.
The output
As noted above, Australia's copyright infringement provisions contemplate infringement by a "person". Although this could certainly complicate matters in a future of "autonomous" robots (as we consider later in this article), for now, a level of human involvement is generally still required to create AI works. For example, while creation of The Next Rembrandt involved highly sophisticated deep learning algorithms and facial recognition techniques, these algorithms and techniques were only used to create individual features of the final portrait (e.g. eyes, nose, mouth, and ears). A human was still required to arrange those individual features into the portrait.
To establish infringement, a copyright owner must establish that the people who orchestrated the AI's output have taken a "substantial part" of their original work(s). Whether a substantial part has been taken is assessed qualitatively rather than quantitatively.
In the case of The Next Rembrandt, assuming that copyright had not expired, the infringement analysis would be complicated by the team's goal of producing a work which could be attributed to Rembrandt "stylistically". Australian courts have previously found that infringement will not be made out if similarity does not go beyond the use of similar style, colour, subject matter and technique: see Cummins v Vella [2002] FCAFC 218.
What about projects where the goal is to create a work which cannot be attributed to a single artist stylistically? One example is the song composed by a group of Australians for the world's first AI "Eurovision" contest. The group's song, "Beautiful the World", was composed using algorithms which generated phrases based on the lyrics of existing Eurovision songs composed by different artists. The team then selected and arranged those phrases using algorithmic "pattern matching" (a technique which matches words and melodies together) to create the final song.
A copyright owner may face several obstacles to proving infringement in this case as well.
First, a person who independently creates a work which is "coincidentally similar" to another person's work does not infringe the copyright in that work. There must be a causal connection between the copyright work and the creation of the infringing work; that is, there must be "copying".
This causal connection is likely to be satisfied if the copyright owner can prove that the AI (whose output is the subject of the copyright infringement claim) was "trained" with the copyright owner's work(s). Where it is likely to become more difficult for a copyright owner is where the particular AI was not "trained" with their work(s). In that case, and importantly, absent any human involvement during production of the output, it is arguable that any allegedly infringing outputs were "independent creations". How could the AI have "copied" the copyright owner's work(s) if it did not have access to them?
What if, however, a person who is aware of the owner's work selects or modifies the output such that it arguably reproduces a "substantial part" of that work? In that case, the human involvement arguably renders the work infringing. It follows that the "independent creation" argument is likely to become more relevant when, and if, human involvement is completely eliminated.
Final remarks
Holding human beings accountable for copyright infringement by an AI system would be complicated further if AI systems become "autonomous". Indeed, AI with "general intelligence" would likely be self-aware and capable of behaving in unpredictable ways (for example, by reprogramming itself). In those cases, it would be a difficult task, ethically and legally, to identify a human responsible for such behaviour. One potential solution would be to attribute copyright infringement by an AI to a prescribed human responsible for that AI. This is one of many solutions proffered by commentators as a more realistic conception of the human-AI relationship in society.
The World Intellectual Property Organization received hundreds of submissions and released a revised Issues Paper on IP policy and AI earlier this year. The revised Issues Paper identifies, as potential issues, accountability for administrative decisions made by AI and potential infringement issues for works produced through machine learning. Accordingly, it appears that the question of accountability for copyright infringement by AI will be a focus of reform moving forward. We will be sure to keep you updated on any developments.
For further information, please contact:
Nina Fitzgerald, Partner, Ashurst
nina.fitzgerald@ashurst.com