Introduction: When Seeing Is No Longer Believing
Rapid advances in Artificial Intelligence (“AI”) technology have transformed how digital content is created and consumed. While AI has enabled innovation and efficiency, it has also introduced serious risks of exploitation, misappropriation, and deception. One of the most disruptive developments is synthetic media: content generated or manipulated by AI to convincingly imitate real people. As synthetic media becomes increasingly indistinguishable from reality, modern legal systems are being forced to reassess traditional legal assumptions surrounding consent, truth, and ownership.
Understanding Synthetic Media and Deepfakes
Synthetic media refers to digital content created by AI rather than by direct human authorship. Common examples include deepfake videos, virtual humans, and augmented reality visuals. A deepfake is a specific form of synthetic media, first popularised in 2017 through online face-swap videos. Since then, deepfake technology has evolved rapidly, enabling the creation of highly realistic audio and video that depict individuals saying or doing things that never occurred.
How Deepfakes Work: The Technology Behind the Deception
At its core, deepfake technology uses AI to learn patterns from real human faces and voices and then reproduce them. Although the results may appear futuristic or fictional, the process relies on established machine-learning techniques. The primary technique is deep learning, particularly Generative Adversarial Networks (“GANs”). A GAN consists of two competing AI systems: a generator, which creates fake images or videos, and a discriminator, which attempts to distinguish fake content from real content. Through repeated competition, the generator improves until the fake content becomes highly realistic.
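To make this adversarial dynamic concrete, the sketch below shows a minimal GAN in PyTorch. It is an illustrative toy, not the machinery of any actual deepfake tool: the “real” data is a simple two-dimensional distribution standing in for genuine faces, and the network sizes and names are assumptions chosen for brevity.

```python
# Minimal GAN sketch (illustrative only): a generator learns to mimic a toy
# "real" distribution while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

latent_dim = 8  # size of the random noise vector fed to the generator

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 2),                # emits a fake 2-D "sample"
)
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),  # probability that a sample is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for genuine data: samples from a shifted Gaussian.
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

for step in range(2000):
    # 1. Train the discriminator to separate real samples from fakes.
    real = real_batch()
    fake = generator(torch.randn(64, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The same competitive loop, scaled up to millions of face images and far larger networks, is what drives photorealistic deepfake output.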
In addition to GANs, many deepfakes use autoencoding techniques, involving an encoder and a decoder. The encoder breaks facial images into core features such as eye shape and mouth movement, while the decoder reconstructs a face using those features. By combining one person’s facial identity with another person’s movements, AI produces convincing face-swap videos where the motion is real, but the identity is fabricated. This technical sophistication explains why deepfakes pose significant legal challenges.
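The encoder-decoder arrangement can likewise be sketched in code. The example below is a simplified, hypothetical rendering of the shared-encoder/two-decoder idea behind face-swap systems; real tools train convolutional networks on thousands of face frames, whereas here the “faces” are flat vectors so the example stays self-contained.

```python
# Face-swap autoencoder sketch (illustrative only): one shared encoder,
# one decoder per identity; swapping decodes A's motion with B's decoder.
import torch
import torch.nn as nn

face_dim, code_dim = 64, 16  # toy sizes; real systems use image tensors

# The shared encoder learns identity-neutral features (pose, expression).
encoder = nn.Sequential(nn.Linear(face_dim, code_dim), nn.ReLU())
# Each decoder learns to rebuild one specific person's face.
decoder_a = nn.Linear(code_dim, face_dim)
decoder_b = nn.Linear(code_dim, face_dim)

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

faces_a = torch.randn(128, face_dim)  # stand-in frames of person A
faces_b = torch.randn(128, face_dim)  # stand-in frames of person B

for step in range(500):
    # Each decoder reconstructs its own person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad(); loss.backward(); opt.step()

# The "swap": encode person A's frame, decode with person B's decoder,
# yielding B's identity driven by A's movements.
swapped = decoder_b(encoder(faces_a[:1]))
```

Because the shared encoder carries the movement while each decoder carries an identity, the swapped output moves like one person while looking like another.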
The Legal Response: Three Emerging Pillars
As of 2026, legislatures worldwide have begun responding to synthetic media by restructuring regulation around three key legal pillars: safeguarding intellectual property, protecting personal dignity, and preserving democratic integrity.
Pillar I: Intellectual Property and Digital Personality Rights
Traditionally, intellectual property law offered limited protection for a person’s voice or likeness, as these were not historically considered “copyrightable”. Deepfake technology, however, has exposed the inadequacy of this approach.
In response, the United States (US) has taken significant legislative steps. For instance, Tennessee enacted the Ensuring Likeness, Voice, and Image Security (ELVIS) Act 2024, expanding the right of publicity to protect an individual’s voice and likeness from unauthorised AI replication.
At the federal level, the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act 2024 further strengthens protection against unauthorised AI-generated recreations.
These laws were catalysed by the viral 2023 release of “Heart on My Sleeve”, a song falsely attributed to Drake and The Weeknd but in fact created using AI by a TikTok user known as Ghostwriter977. Before being exposed as fake and removed, the track spread rapidly across TikTok and Spotify, amassing hundreds of thousands of streams. The episode starkly demonstrated how AI could commercially exploit an artist’s identity, voice, and likeness without consent, prompting urgent legislative reform.
Another high-profile example occurred in October 2023, when an AI-generated version of Tom Hanks was used in an advertisement for a dental plan that he had never appeared in or endorsed. The NO FAKES Act was introduced to address such non-consensual digital replication and to hold individuals and companies liable for producing unauthorised digital replicas. Together, these developments signal a legal shift toward recognising digital personality as a protectable proprietary interest.
Pillar II: Personal Dignity and Non-Consensual Deepfakes
The protection of personal dignity is the most strictly regulated area, particularly concerning non-consensual intimate deepfakes. In the US, the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act 2025 criminalises the distribution of non-consensual intimate deepfakes and requires online platforms to remove such content within 48 hours of a valid request.
This reform was driven by real-world harm. In October 2023, a male student at Aledo High School, Texas took innocuous photos of 14-year-old female students and used AI to create sexually explicit versions of them, which then circulated on Snapchat. At the time, Texas law contained loopholes that failed to cover manipulated images, and schools lacked authority over off-campus conduct. Victims and their families were left without meaningful legal recourse, prompting national advocacy that culminated in the TAKE IT DOWN Act 2025.
Similarly, in Jane Doe v ClothOff (2025), a minor in New Jersey sued the operators of an AI platform known as “ClothOff” that converted her fully clothed social media images into hyperrealistic pornographic content. The case became a landmark federal lawsuit highlighting the necessity of platform liability and rapid content-removal mechanisms under the new legislation.
New Zealand offers another notable example of legislative reform. Under the Harmful Digital Communications Act 2015 (“HDCA”), posting an “intimate visual recording” without consent is a criminal offence punishable by fines or imprisonment. However, the original law struggled to address AI-generated imagery, as deepfakes were not considered “real” recordings.
To address this loophole, New Zealand introduced the Deepfake Digital Harm and Exploitation Bill, amending the Crimes Act 1961 and the HDCA to explicitly include synthetic or altered imagery. This reform ensures that AI-generated intimate deepfakes are treated as real for criminal law purposes.
The scale of the problem has also alarmed the New Zealand government: Netsafe New Zealand, the approved agency under the HDCA, reported a 68% spike in “digital extortion” cases linked to AI deepfake threats, with victims as young as nine years old. These statistics underscore the urgency of strengthening legal protections against synthetic media harms.
Pillar III: Democratic Integrity and Transparency Obligations
Beyond individual harms, synthetic media poses systemic risks to democratic integrity, particularly through AI-powered misinformation and political manipulation. To address this, jurisdictions are increasingly mandating transparency.
The European Union Artificial Intelligence Act (EU AI Act) imposes strict disclosure obligations under Article 50, requiring AI-generated or AI-modified content, including deepfakes, to be clearly labelled. Where deepfakes are used in real-time contexts, disclaimers must appear from the first second of display.
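In practice, such a labelling obligation might be implemented as a machine-readable disclosure attached to each piece of generated content. The sketch below is purely hypothetical: the function, field names, and generator identifier are illustrative assumptions, not a schema prescribed by the EU AI Act.

```python
# Hypothetical disclosure record for AI-generated media; field names are
# illustrative assumptions, not a format mandated by the EU AI Act.
import json
from datetime import datetime, timezone

def label_synthetic_media(file_path: str, generator_name: str) -> dict:
    """Build a disclosure record marking a file as AI-generated."""
    return {
        "file": file_path,
        "ai_generated": True,
        "generator": generator_name,
        "disclosure": "This content has been artificially generated or modified.",
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

record = label_synthetic_media("campaign_clip.mp4", "example-generator-v2")
print(json.dumps(record, indent=2))
```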
Norway, as a member of the European Economic Area (EEA), has aligned its national framework through the proposed Norwegian AI Act 2026. The proposal adopts a risk-based approach, statutorily appoints the Norwegian Communications Authority (Nkom) as the primary supervisory body, and emphasises transparency and preventative governance.
Conclusion: Law in the Age of Synthetic Reality
In conclusion, synthetic media has fundamentally challenged existing legal frameworks. As AI blurs the line between reality and fabrication, the law is repositioning itself to protect human dignity, democratic trust, and personal identity. While regulatory approaches differ across jurisdictions, a common trend is evident: AI is no longer treated as a neutral tool but as a technology requiring active legal governance.

For further information, please contact:
Ahmad Hafiz Zubir, Partner, Azmi & Associates
hafiz.zubir@azmilaw.com