With the development of AI-based tools that can create images and videos at scale (such as DALL·E 3 and Sora), a new phenomenon has emerged: deepfakes, images or recordings that have been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said. While deepfakes have enabled creative possibilities (especially in marketing and entertainment), they can be misused for harmful purposes, such as fraud, defamation and false advertising. Addressing these risks involves challenges for existing legal frameworks, including consumer law and privacy.
Understanding deepfakes
At their core, AI-generated deepfakes are not mere photo edits or voice impersonations. They are the product of advanced AI algorithms that analyse and synthesise vast amounts of data in order to generate new visual and audio content. In the case of video, these algorithms seamlessly blend fabricated elements with genuine footage, leaving viewers questioning the authenticity of what they see or hear. Whether it’s a politician delivering a fabricated speech or a celebrity engaging in fictional scenarios, deepfakes challenge our perception of reality.
While much attention has been focused on the nefarious potential of deepfakes—such as their use in spreading disinformation, revenge porn, or political manipulation—they also find legitimate applications. For example:
- Brands and marketers have begun harnessing deepfake technology to enhance their campaigns. Imagine an advertisement featuring a beloved historical figure endorsing a product or a virtual influencer delivering personalised messages to specific audiences.
- Filmmakers and content creators explore deepfakes as a tool for special effects, enabling realistic transformations and visual storytelling. Deepfakes allow for seamless integration of fictional characters or deceased celebrities into content, making it more engaging and memorable.
Recent developments in machine learning and artificial intelligence (including facial recognition algorithms, artificial neural networks and generative AI technology), such as the introduction of OpenAI’s text-to-video model Sora, have increased both the accessibility of deepfake generation capability and the prevalence of deepfakes.
As the technology evolves, we can expect even more innovative use cases beyond the realm of entertainment and advertising.
Deepfake-specific issues vs. other AI-related issues
As deepfake generation is powered by AI, there is a natural overlap between the issues specific to deepfake technology and those relating to AI more generally.
This article focuses on the particular legal considerations involved in the generation and use of deepfakes, but many of the common issues that arise in the broader discussion of AI will also apply.
Further exploration of the broader issues in AI can be found here.
The rest of this article will delve into the deepfake-specific issues that may arise under Australian law.
Defamation
Deepfake creators may be liable for defamation (under the private law of tort, which governs relations between private parties) if they produce defamatory material that damages a person’s reputation.
However, the tort of defamation under Australian law was originally designed to deal mainly with written or spoken material (e.g. newspapers). While courts have applied defamation law to videos and pictures, the nature of deepfakes may pose practical enforcement challenges for a victim bringing a defamation claim. In particular, it may be difficult for a victim to identify who created or distributed the deepfake where it was distributed anonymously, such as via social media. Moreover, while defamation may offer monetary compensation to the victim, it may not stop the ongoing distribution of a deepfake or lead to its removal, which is often the main concern for victims. In those cases, plaintiffs would need to consider other courses of action, depending on the facts, against digital platform providers to have content removed, though this carries its own procedural and practical challenges.
Consumer law
Consumer law is a particularly relevant consideration when deepfakes are used in marketing and advertising to consumers. In Australia, consumer protections are found in the Australian Consumer Law (ACL), set out in Schedule 2 to the Competition and Consumer Act 2010 (Cth).
By their nature, deepfakes misrepresent reality. As with any other form of misrepresentation, a deepfake must not be used to misrepresent or falsely advertise a product or service; doing so could constitute a breach of at least the following provisions of the ACL:
- Misleading or deceptive conduct: Section 18 of the ACL prohibits conduct in trade or commerce that is misleading or deceptive, or is likely to mislead or deceive.
- False or misleading representations about goods or services: Section 29(1)(g) of the ACL provides that “a person must not, in trade or commerce, in connection with the supply or possible supply of goods or services or in connection with the promotion by any means of the supply or use of goods or services…make a false or misleading representation that goods or services have sponsorship, approval, performance characteristics, accessories, uses or benefits.”
However, a limitation of the above ACL provisions is that they only apply to conduct in “trade or commerce”. This means the ACL is only effective in protecting consumers where the deepfake was created in trade or commerce. The ACL would not deal with arguably more damaging uses of deepfakes, such as revenge porn, fake news or rumours outside the realm of trade or commerce (see our discussion on Online Safety below).
The protection under the ACL may also be limited where a bad actor independently creates a deepfake to pass off a replica product or service (to exploit an identity, brand or business). In such instances, even if the deepfake advertisement was made in trade or commerce, the identity of the bad actor is not necessarily known to the consumer, which could pose difficulties for enforcement by the supplier. Nor is the actual supplier responsible for a deepfake advertisement (or any fake advertisement) that it did not create and publish. This leaves the consumer with limited or no remedies to the extent they have been misled by the deepfake advertisement.
IP infringement
In addition to the classic IP questions around AI-generated works (as explained above), one novel issue is whether a deepfake of a real-life individual infringes the IP rights (if any) of that individual.
There is currently no law in Australia that recognises IP rights in one’s own face or voice per se. An author can own the copyright in a work depicting their face or recording their voice (for example, an image or video created by that author), or may be able to register an image of their face as a trade mark. This creates a gap in the law: there is no requirement, at least from an IP perspective, for a deepfake creator to obtain consent from an individual before creating a deepfake of them (assuming the deepfake is not derived from a copyright-protected work).
This means Australian IP law does not immediately protect an individual from being the subject of a deepfake unless that individual can prove that the deepfake reproduces a substantial part of a copyright-protected work, or that the process of creating the deepfake involved the unauthorised use of a copyright-protected work depicting that individual (i.e. infringement). However, there are challenges in proving infringement given the black-box nature of deepfakes (and AI generally).
In this context, the tort of passing off may, in some circumstances, provide an avenue for protection where an individual’s image is commercially misappropriated. In the UK, the Court of Appeal in Fenty v Arcadia Group held that the unauthorised use of pop star Rihanna’s image on t-shirts constituted passing off. Similarly, in Australia, a magazine was found to have engaged in both passing off and misleading or deceptive conduct for using an image and biography of Ita Buttrose without her permission (Buttrose v The Senior’s Choice (Australia) Pty Ltd [2013] FCCA 2050). In most cases, claims of this nature are likely to overlap with claims under s 29 of the Australian Consumer Law (as discussed above).
Privacy
Deepfakes are created by analysing audio, images or videos of real things and real people – their facial features, voice patterns and the like. Images and other recordings of an individual who is identifiable or reasonably identifiable constitute ‘personal information’ under the Privacy Act 1988 (Cth) (Privacy Act). An image of an individual’s face, or their voice, used for biometric identification is considered ‘sensitive information’ under that Act, a subset of personal information which is afforded a higher degree of protection. Deepfake content may also contain additional sensitive information about an individual’s political opinions, religious beliefs or health. It is important to note that information will be ‘personal information’ or ‘sensitive information’ whether it is true or false, with the latter drawing the concepts of deepfakes and synthetic media into the fold.
Relevantly, an individual’s informed consent is required under the Privacy Act to:
- collect the individual’s sensitive information; or
- use or disclose the individual’s personal information for a purpose other than the primary purpose of collection, unless that secondary purpose is related (or, in the case of sensitive information, directly related) to the primary purpose and within the individual’s reasonable expectations.
Failure to obtain consent to such collection, use or disclosure constitutes an interference with privacy. Penalties for serious or repeated interferences were significantly increased with effect from December 2022 to the greatest of: $50 million; three times the value of the benefit obtained by the company and any related bodies corporate from the breach; or, if a court cannot determine the value of the benefit, 30% of the ‘adjusted turnover’ of the company’s Australian group during the ‘breach turnover period’ (being 12 months, or longer if the breach period was longer).
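By way of illustration only, the following minimal Python sketch shows how this penalty cap is determined (the function name, figures and inputs are hypothetical, not a statutory calculator):

```python
from typing import Optional

def max_privacy_penalty(benefit: Optional[float],
                        adjusted_turnover: float) -> float:
    """Illustrative only: the cap for a serious or repeated interference
    with privacy is the greatest of (a) $50 million, (b) three times the
    value of the benefit obtained, or (c) where the benefit cannot be
    determined, 30% of 'adjusted turnover' over the 'breach turnover
    period'."""
    candidates = [50_000_000.0]
    if benefit is not None:
        candidates.append(3 * benefit)
    else:
        candidates.append(0.30 * adjusted_turnover)
    return max(candidates)

# Hypothetical example: the court cannot determine the benefit, and the
# Australian group's adjusted turnover over the breach turnover period
# is $400 million, so the cap is $120 million (30% of turnover).
print(max_privacy_penalty(None, 400_000_000.0))  # 120000000.0
```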
There are some limitations to the protections afforded under the Privacy Act, which leave certain loopholes when it comes to the creators of deepfakes. For example, the Act does not apply to most organisations with an annual turnover of $3 million or less, to individuals acting in a personal capacity, or to organisations that do not carry on business in Australia.
Reforms are underway that are intended to better protect the personal information of individuals, including against its misuse in deepfakes. The Government has committed to introduce in August 2024 draft legislation against doxxing, being the intentional online exposure of an individual’s identity, private information or personal details without their consent (whether by an individual or an organisation), as well as several other reforms to existing privacy laws. Proposed changes relevantly include:
- the introduction of a requirement that any use of personal information be fair and reasonable;
- the removal of the small business exemption;
- the introduction of a new right for individuals to request that their personal information (including data published online) be deleted, aligned with the GDPR ‘right to be forgotten’;
- the introduction of a statutory tort for invasion of privacy (which could be enforced against individuals as well as organisations);
- the clarification in the Act that the collection of personal or sensitive information includes the ‘generation’ of such information, consistent with existing guidance (meaning the consent requirement would expressly apply to the generation of images of an individual’s face or of their voice); and
- the amendment of the definition of consent in the Act to expressly provide that it must be voluntary, informed, current, specific and unambiguous and can be withdrawn.
See also our note on the Australian privacy law reforms.
Online Safety
In response to the proliferation of online cyber abuse, the Australian Government enacted the Online Safety Act 2021 (Cth) (OSA) and made the Online Safety (Basic Online Safety Expectations) Determination 2022 (BOSE Determination), which focus on protecting Australians from online harms resulting from exposure to illegal or harmful online content or behaviour, particularly with regards to the creation and dissemination of “non-consensual intimate images” and “material that depicts abhorrent violent conduct”. To the extent deepfake material falls within these categories, the OSA and the BOSE Determination would apply.
But as GenAI products continue to rapidly evolve and increase in capability (for better or for worse), the Australian federal government has taken initial steps to update the OSA regime:
- Between November 2023 and January 2024, the Government consulted on amendments to the BOSE Determination (expressed in the draft Online Safety (Basic Online Safety Expectations) Amendment Determination 2023). The proposed amendments included a new expectation that service providers take reasonable steps to proactively minimise the extent to which generative AI capabilities produce material, or facilitate activity, that is unlawful or harmful. This would include ‘deepfake’ intimate images or videos, or any generated media that facilitates online abuse or hate speech.
- In May 2024, as part of a suite of measures tackling domestic violence, the Government launched a statutory review of the OSA, including a proposal to ban deepfake pornography.
At the time of writing, the Government has yet to release updates on the above initiatives.
In parallel with the online safety law reforms, there have been recent developments on the criminal law front to address deepfakes (particularly sexually explicit deepfakes). In June 2024, the Attorney-General introduced the Criminal Code Amendment (Deepfake Sexual Material) Bill, which would amend the Criminal Code Act 1995 (Cth) to create new offences around the non-consensual transmission of sexually explicit material. While the bill is drafted in technology-neutral language, it includes an interpretive provision stating that it is irrelevant whether the material transmitted “has been created, or altered in any way, using technology”, as well as a footnote explaining that this includes “images, videos or audio depicting a person that have been edited or entirely created using digital technology (including artificial intelligence), generating a realistic but false depiction of the person. Examples of such material are ‘deepfakes’”. At the time of writing, the bill has moved through its first and second readings in the House of Representatives.
Conclusion
The legal landscape of deepfakes is evolving as quickly as the technology itself. The Australian Government has shown a clear intention to regulate and prevent the harmful use of deepfakes, especially for online abuse and harassment. However, there are still many challenges and questions that remain unanswered. How can service providers effectively detect and remove deepfake content? How can victims of deepfake abuse seek justice and compensation? How can the legitimate and creative use of deepfake technology be balanced with the potential risks and harms? These are some of the issues that will shape the future of law and society in the age of AI.
For further information, please contact:
Julian Lincoln, Partner, Herbert Smith Freehills
Julian.Lincoln@hsf.com