Many jurisdictions are basing their approaches towards AI-generated deepfakes on existing data-privacy and cybercrime laws. We look at relevant developments in some of Rouse’s main markets.
China
The drivers shaping Artificial Intelligence (AI) regulations in China are the same as those shaping online content overall. AI and related technologies should not undermine public security, individual privacy and the integrity of personal data.
In 2019, Zao, a face-swap app, launched in China and caused a stir. Using deepfake technology, users could superimpose their faces onto those of celebrities in 30-second movie clips. Zao went straight to the top of the Apple App Store.
Concerns arose when it became known that Zao collected a significant amount of personal data, with users granting extensive rights over their images and the content they created. This shone a spotlight on data security and the ethical use of AI, prompting Zao to revise its user agreement.
In 2023, China implemented legislation entitled Provisions on the Administration of Deep Synthesis Internet Information Services. It puts obligations on deep synthesis providers to ensure the technology does not breach laws or cause public harm. It aims to combat misinformation by requiring labelling of deepfake content.
The European Union
Any providers and moderators of deepfake content are regulated under the Digital Services Act (DSA) and the new EU AI Act.
Under the EU AI Act, systems that generate or manipulate image, audio or video content must meet minimum transparency standards. Service providers must inform users when they are interacting with an AI system, unless this is evident from the context. Transparency about artificially generated or deepfake content is also required.
The DSA stipulates that providers who moderate user-generated content – including deepfakes – must be transparent about their moderation rules and enforcement mechanisms. Notice-and-takedown procedures must also be provided.
The GCC Region
In the United Arab Emirates, the law prohibits the use of a person’s information and likeness without express consent. As such, personality rights can be protected, including in the context of deepfakes. The UAE Council for Digital Wellbeing issued the Deepfake Guide, differentiating between ‘shallow’ fakes (basic edits to videos) and ‘deep’ fakes that use advanced AI to replace faces and voices.
Relevant legal frameworks include Federal Decree-Law No. 45 of 2021 on the Protection of Personal Data (PDPL), which prohibits processing personal data without explicit consent. The UAE’s cybercrime legislation punishes the modification or dissemination of personal information with intent to defame or insult, which can also be applied to deepfakes.
Saudi Arabia is another country where existing laws provide a framework for controlling deepfakes. The cybercrime law covers a broad spectrum of illegal activities and privacy violations. Penalties exist for crimes deemed to affect public order and religious values.
Saudi Arabia also has Generative Artificial Intelligence Guidelines that apply to all users. These stress adherence to relevant legislation around data protection and intellectual property. Accountability is placed on the designers, vendors, procurers, owners and users of generative AI systems. Parties are held ethically and legally responsible for any negative impacts.
Southeast Asia
Neither Thailand nor Indonesia has specific regulation targeting deepfakes. However, existing laws robustly address the misuse of such technology.
In Indonesia, for example, the Criminal Code, the Electronic Information and Transactions Law, and the Data Protection Law collectively prohibit the use of deepfakes for disseminating false information, creating fake pornography, or committing defamation. These laws provide clear sanctions against both creators and disseminators of deepfake content.
Existing legislation in Vietnam regulates deepfakes through protections for individual rights and prohibitions on spreading false information. Criminal charges may arise from the illegal dissemination of confidential information, slander and humiliation. The use of deepfakes in cybercrimes such as phishing and malware attacks can also lead to data breaches, network hijacking and other violations of law, including criminal liability. Moreover, AI’s reliance on extensive data sets may result in unauthorised data access, which is prohibited under data protection laws.
What is the best practice approach towards using deepfakes in marketing?
Be transparent – Notify users and label content that is AI-generated or a deepfake. Be clear about the nature of the content and how it has been manipulated.
Ensure existing rights are not violated – Be clear on the source of AI training data to avoid infringement issues, such as unintended likeness to celebrities or use of protected material (copyright, trade marks or designs). Due diligence and clearance should be undertaken on any AI-generated content before use, and AI users should be trained to understand the risks and act accordingly.
When working with influencers – including use of AI to generate their likenesses by agreement – ensure that both parties’ ownership and usage rights are clearly defined to avoid potential conflicts.
Adhere to data and privacy laws – Consider how personal data is collected, used, stored and where applicable, any cross-border data transfer activity.
Partner with trusted AI providers – Choose providers with processes in place to manage and mitigate risks.
For further information, please contact:
Holly White, Rouse
hwhite@rouse.com