The significance of artificial intelligence (AI) is rapidly increasing worldwide, and Southeast Asia is no exception: across the region, the technology is playing a leading role in the development of many industries. AI has already proven its value in driving business growth in areas such as e-commerce, finance, and healthcare, but its remarkable potential also raises concerns around privacy. Because AI systems are designed to collect and process large amounts of data to improve their operation, the development of the technology must be balanced with the protection of individuals’ privacy.
Current Frameworks in Southeast Asia
This concern has been on regional policymakers’ agendas for many years. The ASEAN Framework on Personal Data Protection, which was adopted in 2016, is not legally binding and has no enforcement mechanism, but it serves as a guide for ASEAN member states in developing their own data protection laws and regulations.
Domestic data privacy laws are currently in force in five ASEAN member countries (Indonesia, Malaysia, the Philippines, Thailand, and Singapore), while Vietnam’s Personal Data Protection Decree is scheduled to take effect on July 1, 2023. This patchwork of laws presents a challenge for ASEAN members, as the adoption of AI-related technology further complicates data protection efforts, given the volume of personal data AI systems collect and the complexity of the data used to train AI algorithms.
Some ASEAN members have also made progress in regulating AI. Singapore, for instance, released the Model AI Governance Framework in 2019 and launched the world’s first AI Governance Testing Framework and Toolkit in 2022. Similarly, Thailand issued the Artificial Intelligence Ethics Guideline in 2019 to guide government agencies in developing, promoting, and using AI, and in 2023 adopted the Thailand Artificial Intelligence Guidelines to support AI-related work in the private sector. These guidelines primarily focus on principles and ethics in developing AI-related technology, but they lack a step-by-step implementation process that connects with privacy laws. Despite these early steps by some countries, there are still no regional policies or consensus frameworks on how to implement and regulate AI in accordance with the privacy laws of ASEAN member countries.
Legal Risks
If AI-related technology is developed without consideration for data protection, there is a risk of personal data breaches affecting numerous data subjects, potentially resulting in mass litigation. Moreover, the lack of robust privacy laws and frameworks in many ASEAN member countries, coupled with the growing use of AI-related technology, increases the risk of legal liability for companies that rely on this increasingly common technology.
In the event of a data breach or misuse of personal data, affected individuals may seek legal recourse against the companies that collected and processed their personal information. Such legal actions can result in significant financial losses and reputational damage for businesses, highlighting the need for effective data protection regulations and frameworks governing AI-related technology in ASEAN countries.
Technology companies involved in developing AI systems are especially exposed. Because AI development requires vast amounts of data, these companies face the challenge of lawfully collecting and processing data from a wide range of sources and data subjects.
Outlook
As AI-related technology continues to evolve and play a crucial role in the growth of many industries in Southeast Asia, it is important to ensure that its development is balanced with the protection of individuals’ privacy. While some ASEAN members have made progress in adopting AI regulations, more needs to be done to enforce data privacy laws and to develop consensus frameworks for regulating AI in line with those laws. Such efforts will not only help protect individuals’ privacy but also mitigate the legal risks associated with AI-related technology. ASEAN member countries must continue to work together to balance technological development with data protection, in support of sustainable and ethical innovation for our digital future.