Introduction
Generative AI responds to prompts like “create a strategic marketing plan for our new product launch”. This emerging technology has the potential to revolutionize the way businesses operate, showing how technology can not only support business but also catalyze innovation. Generative artificial intelligence (“AI”) refers to a category of AI algorithms capable of generating new outputs based on the data they have been trained on. This allows generative AI to create a wide array of content, including audio, code, images, text, simulations, and videos.[1] Some of the most popular and widely used generative AI tools are ChatGPT and DALL-E 2.
Generative AI presents a number of potential benefits for businesses, including improved productivity, greater efficiency, cost savings, and the ability to foster innovation. Nonetheless, there are also potential drawbacks to consider, as the use of generative AI may introduce risks relating to privacy, security, misuse or abuse, and the creation of malicious content. To address these concerns, a forward-looking regulatory framework must be established to ensure the responsible and ethical use of generative AI.
Potential Risks and Concerns Associated With Generative AI
a. Transparency and Accountability in Generative AI
Transparency means that everyone can see and monitor how an AI system operates, how it makes decisions, and how it handles and processes information.[2] This clarity is what builds trust in AI, facilitates accountability and ensures safe usage. This is where clear and comprehensive rules and regulations come into play. Without such rules and regulations governing AI transparency, we run the risk of developing AI systems that unintentionally perpetuate harmful biases, create mistrust among users, or violate privacy and ethical norms.
b. Protecting Data Privacy and Security
In the absence of adequate security measures, generative AI tools may be susceptible to data breaches, potentially resulting in unauthorized access to or disclosure of sensitive user information. This occurs when individuals unintentionally input confidential data into these AI applications without properly redacting sensitive information, e.g., pasting confidential material into an AI chatbot for a grammar check. In May 2023, Samsung banned ChatGPT after three separate instances of employees unintentionally sharing sensitive data, including confidential source code, with the generative AI platform.[3]
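By way of illustration only, the sketch below (in Python, using hypothetical patterns rather than any real data loss prevention product) shows the kind of redaction step an organisation might apply before any text is submitted to an external generative AI service. It is a minimal example under stated assumptions, not a substitute for a proper data protection policy or a vetted DLP tool.

```python
import re

# Illustrative patterns only; a real deployment would rely on a vetted
# data loss prevention (DLP) tool and organisation-specific rules.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "NRIC": re.compile(r"\b\d{6}-\d{2}-\d{4}\b"),        # Malaysian IC number format
    "PHONE": re.compile(r"\b\+?6?01\d[- ]?\d{7,8}\b"),    # common Malaysian mobile format
}

def redact(text: str) -> str:
    """Mask common identifiers before text leaves the organisation."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    draft = "Please proofread: contact Ali at ali@example.com or 012-3456789."
    print(redact(draft))
    # -> Please proofread: contact Ali at [EMAIL REDACTED] or [PHONE REDACTED].
```

A simple pre-submission filter of this kind would not have prevented every incident, but it illustrates how redaction can be made a routine step rather than left to individual judgment.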
c. AI and Intellectual Property
Existing laws and regulations must keep pace with the rapid development of AI technology, or else uncertainties will arise over the ownership of patents, copyrights, and trademarks. When an AI or machine generates a creative work, the question is whether the person who controls and directs the AI’s actions is considered the creator and therefore holds the IP rights to that work. Alternatively, if the AI generates the work independently without any human involvement, who then becomes the holder of the IP rights? Can an AI be recognized as a creator with its own IP rights?
Overview of AI Legislation in Malaysia
Establishing AI governance falls under the purview of the Ministry of Science, Technology and Innovation (MOSTI). Pursuant to this, MOSTI has initiated the National Artificial Intelligence Roadmap 2021–2025. Additionally, Chang Lih Kang, Minister of Science, Technology and Innovation, has indicated that there are plans to enact a comprehensive AI Bill.[4] This legislative effort will involve consultations with technology experts, legal professionals, stakeholders, and the public to ensure its robustness and relevance. The proposed AI Bill aims to address various aspects, such as data privacy, raising public awareness about AI use, ensuring transparency and accountability, and managing cybersecurity risks. Importantly, this legislation is designed to strike a balance between managing potential risks and fostering innovation, all while ensuring that AI continues to make positive contributions to the economy and society. However, as of this date, Malaysia does not have specific legislation dealing with AI governance, and any issues arising will have to be addressed under existing statutes, regulations and industry codes of conduct.
Existing AI-related Laws and Regulations in Malaysia
a. Intellectual Property Laws
The primary issue linked with generative AI revolves around the ownership of the intellectual property it generates. In the context of patent law in Malaysia, the main issue is whether an AI can be considered an inventor under the Patents Act 1983 and the Patents Regulations 1986. In a patent application, the person applying for a patent usually becomes the patent owner, who holds exclusive rights to the invention and can take legal action against anyone who uses it without permission. The inventor can be the same person as the applicant/owner, or the inventor can transfer their rights to someone else. In the context of AI-generated inventions, there is general agreement that the applicant/owner must be a human. However, whether the inventor must be human remains uncertain.
Furthermore, whether AI-generated works are safeguarded by the Copyright Act 1987 also remains a grey area. The Copyright Act specifies the requirement for a human author, which makes it highly improbable for copyright to subsist in content generated entirely by AI. However, there remains a possibility that AI-generated output could be eligible for copyright protection. Whether the end products qualify would therefore depend on whether they meet the criteria set out in section 7 of the Copyright Act, which involves assessing whether sufficient effort has been expended to make the work original in character.
b. Data Privacy Laws
The Personal Data Protection Act 2010 (“PDPA”) is relevant because AI usage generally requires the collection and processing of personal data in relation to commercial transactions. The PDPA enshrines seven Personal Data Protection Principles with which a data user, defined as a person who processes any personal data, must comply. Personal data processed by a data user using AI will therefore still have to be processed in accordance with these principles.
Take the General Principle, for example, which requires consent as a condition for processing personal data. This likely means that the data user must ensure that the AI used will not process personal data beyond the scope of the data subject’s consent. Other principles relating to the security and integrity of personal data are also of direct relevance where AI is used to process personal data. Compliance with the PDPA will likely minimise the data user’s exposure to liability when using AI to process personal data.
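As a purely illustrative sketch (the class, purposes and identifiers below are hypothetical and are not drawn from the PDPA itself), a data user might gate AI processing on the purposes recorded at the point of consent, so that personal data is never passed to an AI system for a purpose the data subject did not agree to:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of recorded consent; real PDPA compliance
# requires a full consent-management and review process, not a code check alone.
@dataclass
class ConsentRecord:
    subject_id: str
    consented_purposes: set[str] = field(default_factory=set)

def may_process_with_ai(record: ConsentRecord, purpose: str) -> bool:
    """Allow AI processing only for purposes the data subject consented to."""
    return purpose in record.consented_purposes

record = ConsentRecord("cust-001", {"order_fulfilment", "customer_support"})

for purpose in ("customer_support", "marketing_profiling"):
    if may_process_with_ai(record, purpose):
        print(f"{purpose}: within consent, may be passed to the AI system")
    else:
        print(f"{purpose}: outside consent, must not be processed")
```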
c. Employment Laws
When AI technology is used to make employees redundant, employers must note that dismissals in Malaysia must be with just cause and excuse. Employers need to be able to explain what led to the dismissal; hence, they need to know how the algorithm came to its decisions, why certain employees were selected and why others were retained. In essence, employers must be able to pinpoint the exact data points used by an AI, which would be near impossible given that AI systems rely on complicated algorithms and multiple data points.
In cases of dismissal for poor performance, employers must show that the employee was given sufficient notice or warning highlighting the poor performance and a reasonable opportunity to improve their work performance.
d. Contract Laws
AI-based contracts may potentially be enforceable under the Contracts Act 1950 if the elements to form a valid contract are satisfied (offer, acceptance, consideration and intention to create legal relations). This is provided there are no vitiating factors to render the contract void or voidable.
Conclusion
Unlike China and the European Union, Malaysia has yet to establish any legislation or framework to regulate AI applications and is still exploring policy measures. We are of the view that generative AI, alongside AI in general, should be included in the Malaysian framework.
The Science, Technology and Innovation Ministry takes the stance that awareness is important and stresses the need to develop resources and public awareness campaigns on the basics of AI and how it is being used to generate content, including understanding the biases that can be inherent in AI and the distinction between human-produced and AI-produced content.[5] Hence, there have been suggestions from the minister on the possibility of the legislation including provisions for educating the public about AI and promoting research and development in the field.
This in turn helps people make better choices and decisions, encourages them to be more critical of the media they consume, and enables them to participate in discussions on AI rules and guidelines, ultimately leading to a more cautious and aware community and reducing the impact of AI-generated misinformation. There should also be a requirement for content produced entirely or in part by AI to be clearly identified.
The minister is also of the view that it is important to balance the need to manage risks with the potential for innovation, with AI as a key strategic enabler in developing the economy and improving the quality of life of citizens; this is to ensure that innovation and investment are not stifled.
For further information, please contact:
Heu Wen Yen, Azmi & Associates
Heuwenyen@azmilaw.com
- McKinsey & Company. “What Is Generative AI?” McKinsey & Company, January 19, 2023. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai.
- Lawton, George. “AI Transparency: What Is It and Why Do We Need It?” Tech Target, June 2, 2023. https://www.techtarget.com/searchcio/tip/AI-transparency-What-is-it-and-why-do-we-need-it.
- Goud Muthyam, Varaprasad, Gopinath Durairaj, Saritha Chadalavada, and Eugene Kaganovich. “Data Protection on the Internet: Data Leakage Prevention for ChatGPT, Generative AI, and Shadow IT.” Lookout, 2023. https://www.lookout.com/blog/chatgpt-data-leakage.
- Ariff, Syed Umar. “Law on AI Being Studied.” The Star. July 23, 2023. https://www.thestar.com.my/news/nation/2023/07/23/law-on-ai-being-studied.
- The Straits Times. “Malaysia Mulls over Enacting Law on AI.” July 24, 2023. https://www.straitstimes.com/asia/se-asia/malaysia-mulls-enacting-law-on-ai.