With AI now at the forefront of technological innovation, the Rouse team has put together a summary of relevant legislation in our main markets.
China
Having set the goal of achieving global leadership in AI by 2030, China is one of the world’s three main regulatory jurisdictions in the field (along with the EU and US). To date, the country has approved more than 300 recommendation algorithms, 940 deep synthesis algorithms, and 40 Large Language Models (LLMs).
Chinese regulators address specific AI technologies as they emerge, aiming to strike a balance between innovation and security. The country’s investment in AI is expected to reach USD 14 billion in 2024. Development is steered by regulations that include content controls on both AI training data and AI-generated content.
Key AI legislation in China includes:
- Provisions on the Management of Algorithmic Recommendations in Internet Information Services (in force 1 March 2022). Regulates the algorithms that drive social media sites, search engines and personalised recommendations. Seeks to protect user rights, control illegal information and prevent inappropriate use such as fake accounts
- Provisions on the Administration of Deep Synthesis Internet Information Services (in force January 2023). Aims to combat misinformation and ensure services comply with laws, ethics and security requirements
- Interim Measures for the Management of Generative AI Services (in force August 2023). Regulates both the training data and outputs of LLMs like ChatGPT to enable responsible development of Generative AI
The European Union (EU)
The EU AI Act (soon to enter into force) aims to promote the responsible development of AI systems by ensuring transparency, accountability and the protection of fundamental rights.
The act takes a risk-based, tiered approach to AI systems. High-risk systems – i.e. those that could significantly affect individual rights – must comply with strict requirements for risk management, data governance and technical documentation. Providers of high-risk systems face significant obligations regardless of where they are located.
Compliance requirements for limited-, minimal- or no-risk systems are lower.
The Digital Services Act (DSA) is a legal framework that updates and enhances regulations for online marketplaces, search engines and other platforms. The DSA specifically addresses the spread of illegal content and disinformation, and is therefore applicable in the context of AI-generated deepfakes, for example.
The AI Liability Directive (still under negotiation) aims to ensure that individuals harmed by AI systems receive the same level of protection as those affected by other technologies. It seeks to harmonise non-contractual civil-liability rules across the EU for damages caused by AI systems.
The GCC region
Having established a Ministry of AI in 2017, the United Arab Emirates has embarked on a comprehensive strategy for integrating AI into various sectors.
However, AI’s role in intellectual property (IP) rights remains a debated and evolving topic. While the UAE’s Copyright Law (Federal Decree Law No. 38 of 2021) includes protections for smart applications, computer programs and databases, it does not explicitly address AI-generated works.
————
AI is integral to Saudi Arabia’s Vision 2030 for diversifying the nation’s economy, with the Saudi Data and Artificial Intelligence Authority (SDAIA) leading the country’s AI initiatives. SDAIA operates the National Center for Artificial Intelligence, which focuses on AI strategy, research and education.
In September 2023, SDAIA introduced an updated version of Saudi Arabia’s first AI legal framework: AI Ethics Principles (Version 2.0). The framework covers data privacy and other ethical considerations for AI development and usage.
In addition, Saudi Arabia’s draft IP Law includes provisions relating to AI, making Saudi Arabia one of the first jurisdictions to address AI directly in IP legislation.
Southeast Asia
Thailand has introduced two pieces of draft legislation aimed at regulating AI systems as well as promoting and supporting the AI ecosystem. The Draft Royal Decree on Business Operations that Use Artificial Intelligence System takes a risk-based approach similar to the EU AI Act. It prohibits AI systems that employ subliminal techniques, conduct social scoring, access sensitive personal information, or use real-time remote biometric identification in public areas. It also has extraterritorial applicability, requiring foreign service providers to comply.
The Draft Act on the Promotion and Support of AI Innovations contains provisions to support the development of AI whilst protecting consumers.
Thailand also has an AI Ethics Guideline.
————
Vietnam has recently issued a Guideline for the Responsible Research and Development of AI Systems, but neither Vietnam nor Indonesia has enacted specific AI legislation. In both countries, consideration needs to be given to existing legislation in areas such as data privacy, cyber security and copyright to manage legal risks when utilising AI.
How to approach AI law going forward
With legislation around AI rapidly evolving in most markets, Rouse recommends that any IP owner planning to use AI study the relevant laws in each jurisdiction, understand how they apply, and identify any steps needed to mitigate risk.
Where specific AI laws do not yet exist, it is important to understand how existing IP, marketing and content-related laws can be applied to manage potential risks of brand-reputation damage and regulatory non-compliance.
For further information, please contact:
Holly White, Rouse
hwhite@rouse.com