On February 18, 2023, Elon Musk tweeted that he would create TruthGPT, an alternative to the AI chatbot ChatGPT. Unlike ChatGPT, which is prone to generating false information, TruthGPT aims to provide the ‘unbiased truth’1.
Both ChatGPT and TruthGPT are at the forefront of generative artificial intelligence (“generative AI”) models, which create content and ideas, including dialogue, stories, images, movies and music. In some ways, generative AI resembles the monster created by Victor Frankenstein, the title character of Mary Shelley’s novel Frankenstein2: having created it, human beings must learn to face and control it. The future has arrived, and in this article we take a quick look at the legislation and regulation of generative AI in the major markets.
I. Overview of Legislation Governing Generative AI
1. China
On April 11, 2023, the Cyberspace Administration of China (CAC) issued the Measures for Administration of Generative Artificial Intelligence Services (Draft for Comment) (the “Draft CAC Measures”) in order to promote the healthy development and regulate the application of generative AI technology. According to the Draft CAC Measures, the provision of generative AI services to the public in China shall meet the requirements of the relevant laws and regulations, social morality, public order and good customs. The Draft CAC Measures provide that governmental supervision and legal liability shall be imposed pursuant to three basic PRC laws in the field of cyber security and data privacy protection, namely, the Cyber Security Law, the Data Security Law and the Personal Information Protection Law.
Prior to the release of the Draft CAC Measures, two ministerial regulations related to generative AI had been promulgated, namely, the Provisions on Administration of Algorithmic Recommendation in Internet Information Services (the “Algorithmic Recommendation Provisions”) and the Provisions on Administration of Deep Synthesis in Internet Information Services (the “Deep Synthesis Provisions”). The Algorithmic Recommendation Provisions touch on algorithms for information generation and synthesis, requiring that information generated or synthesized by algorithms be prominently marked and that no Internet news information service provider generate or synthesize fake news. The Deep Synthesis Provisions set forth, in more detail, the requirements applicable to deep synthesis services with functions for generating or significantly changing information content, such as smart dialogue, smart writing, facial image generation, facial image manipulation and posture manipulation, including (i) alerting the public in a prominent way, (ii) establishing and refining management rules on user registration, algorithm mechanism review, science and technology ethics review, information release review, data security, personal information protection, anti-telecom fraud and emergency response, and (iii) strengthening the management of data and data training technology.
2. US
Shortly after the Draft CAC Measures were released in China, the National Telecommunications and Information Administration (NTIA) of the US Department of Commerce issued the AI Accountability Policy Request for Comment3 (the “Request for Comment”) on April 13, 2023, with a view to formulating accountability policies governing AI systems, including generative AI systems. A key question was, “Given the likely integration of generative AI tools into downstream products, how can AI accountability mechanisms inform people about how such tools are operating and/or whether the tools comply with trustworthy AI standards?” The NTIA pointed out in the Request for Comment that traditional AI audits can no longer cover the threats of generative AI, such as the distortion of communication through misinformation, disinformation and deep fakes, and privacy invasion. As required by the executive order signed by US President Biden in February 2023, federal government departments should root out bias in the design and use of new technologies such as artificial intelligence, and protect the public from algorithmic discrimination and threats.4
Most of the current US federal legislation and administrative documents regarding AI are not specific to generative AI. The National Artificial Intelligence Initiative Act of 2020 calls for vigilance against AI systems that may become self-aware or uncontrollable and for the development of trustworthy AI systems. Under the Act, the National Institute of Standards and Technology (NIST) is responsible for establishing comprehensive AI development specifications, including the AI Risk Management Framework, the second draft of which was released in August 2022. In October 2022, the Office of Science and Technology Policy (OSTP) of the White House issued the Blueprint for an AI Bill of Rights, whose algorithmic discrimination protections include proactive equity assessments as part of the system design, the use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and independent evaluation and plain-language reporting in the form of an algorithmic impact assessment.
3. EU
The current AI legislation in the EU mainly focuses on traditional AI models, but it has gradually touched on generative AI.
The White Paper On Artificial Intelligence – A European approach to excellence and trust5, issued by the European Commission on February 19, 2020, emphasizes that a key element of the European legal framework for AI is the creation of an “ecosystem of trust” that protects fundamental rights (as enshrined in the Charter of Fundamental Rights of the EU) and consumer rights.
The Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts6, published in April 2021, contains provisions regarding prohibited AI practices, high-risk AI systems, AI systems intended to interact with human beings, and AI systems used to generate or manipulate image, audio and video content. To deal with the problems raised by ChatGPT, the Proposal will be revised again, changing certain definitions and categories of regulation, including adding direct regulation of the deployers and users of “large generative AI models” through (1) transparency rules; (2) risk management obligations; (3) the application of a non-discrimination clause to the developers of “large generative AI models”; and (4) content review rules7.
The European Commission also published a proposal for an AI Liability Directive on September 28, 2022, to introduce new rules specific to damage caused by AI systems and to ensure remedies for the victims of such damage.
II. What Are Regulators Dealing With?
There have been many discussions about the threats of generative AI, such as false, discriminatory and biased information, data privacy, public security, and intellectual property ownership issues. Such threats are mainly attributable to the nature of the data sources and of the algorithms and models.
1. Legitimacy of Data Sources
Machine learning, a core part of AI model training, comprises both supervised and unsupervised learning. In supervised learning, the algorithm is given a set of training data with manually labeled ground truth, so the direction and goal of learning are specified. In unsupervised learning, the algorithm is given raw, unannotated data, is not explicitly told what to do with it, and must learn to make predictions by itself. Unsupervised learning is suitable for tasks that require identifying hidden structure in unlabeled data, such as fraud detection or financial analysis.8
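To make the distinction concrete, here is a minimal sketch using scikit-learn; the toy dataset and the choice of logistic regression and k-means are our own illustration, not drawn from any source cited in this article.

```python
# Minimal illustration of supervised vs. unsupervised learning.
# The toy dataset and model choices are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: 200 samples, 2 features, 2 underlying classes.
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# Supervised: the algorithm learns from manually labeled ground truth (y),
# so the direction and goal of learning are specified.
clf = LogisticRegression().fit(X, y)
print("Supervised accuracy:", clf.score(X, y))

# Unsupervised: the algorithm sees only raw, unlabeled data and must
# discover hidden structure (here, two clusters) by itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Cluster assignments (first 10):", km.labels_[:10])
```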
For example, in the research and development of materials used in car manufacturing, generative design can reduce the number of parts and the weight of a car, helping to develop more fuel-efficient vehicles. The system generates a variety of design alternatives according to input parameters such as production time, materials and manufacturing costs, and the designers then optimize the design after research and comparison9. In generating design alternatives, the generative AI model may have to search and learn from academic articles and research reviews on the Internet, make deductions and conduct experiments based on the search results, and detect and correct mistakes in its own learning and deductions, so as to meet the requirements of the input parameters. In searching and crawling information on the open Internet, the generative AI model may bypass the privacy policies, user agreements, intellectual property notices and password protection of the relevant webpages, thereby rendering the data source illegitimate or infringing. To deal with this problem, many companies and non-profit organizations have developed their own solutions. For example, Datasheets for Datasets, a documentation framework developed at Microsoft to help ensure the legitimacy of data sources, documents the datasets used for training and evaluating machine learning models, recording each dataset’s motivation, composition, collection, pre-processing, labeling, intended purpose and maintenance.
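Datasheets for Datasets is a documentation checklist rather than a code library, so there is no official API; the dataclass below is a hypothetical sketch of how its categories could be recorded alongside a training corpus, with every field value invented for illustration.

```python
# Hypothetical, minimal representation of a dataset "datasheet".
# Datasheets for Datasets is a checklist of documentation questions,
# not a code library; this dataclass is our own illustrative sketch.
from dataclasses import dataclass, asdict
import json

@dataclass
class Datasheet:
    motivation: str        # why the dataset was created
    composition: str       # what the instances are, and how many
    collection: str        # how and from where the data was gathered
    preprocessing: str     # cleaning / filtering applied
    labeling: str          # labeling rules and who applied them
    intended_use: str      # tasks the dataset is (not) suitable for
    maintenance: str       # who maintains it and how it is updated

# Every value below is invented for illustration.
sheet = Datasheet(
    motivation="Train a generative design model for car parts.",
    composition="12,000 CAD models with material metadata.",
    collection="Licensed from suppliers; no web crawling.",
    preprocessing="Deduplicated; personal data removed.",
    labeling="Material class labeled by two engineers per item.",
    intended_use="Generative design research; not for safety sign-off.",
    maintenance="Reviewed quarterly by the data governance team.",
)
print(json.dumps(asdict(sheet), indent=2))
```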
The rights and interests of data subjects are also difficult to protect. The complexity and black-box nature of algorithms make it all but impossible for developers or service providers to fully explain the process and purpose of data processing to data subjects. Moreover, unlike modifying or deleting a data subject’s personal information in a database or on a server, modifying or deleting that information from a trained AI model may undermine the utility of the model itself, so a data subject’s rights of modification and deletion are difficult to honor.10
2. Transparency, Explainability and Accountability of AI Algorithms and Models
Many factors have a bearing on the transparency, explainability and accountability of algorithms and models.
Companies pursuing their own interests may develop non-transparent black-box models on the grounds of algorithm security and confidentiality, in part to avoid being accused of program loopholes, improper methods, violations of laws and regulations and other issues11. In March 2023, the Center for Artificial Intelligence and Digital Policy submitted a complaint against ChatGPT to the US Federal Trade Commission (FTC), pointing out that as AI models become increasingly profitable, developers may no longer be willing to make them public12.
Data-driven algorithms mainly use deep neural network technology, in which key parts of the algorithm are generated automatically through learning from data rather than designed by hand, and it is difficult for human beings to follow the specific learning process. Humans may understand the results an algorithm produces but not know or understand how the algorithm arrived at them.13
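The point holds even at toy scale. In the following sketch (our own illustration, using scikit-learn on an arbitrary two-feature dataset), the trained network’s behavior is easy to observe, but its “design” is a set of automatically learned numeric weights that no engineer wrote and that do not read as human-interpretable rules.

```python
# Sketch of why data-driven models are hard to explain: the trained
# network's results are legible, but its learned parameters are not
# human-readable rules. Toy task and layer sizes are illustrative.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X, y)

# The result is easy to understand...
print("Training accuracy:", net.score(X, y))
# ...but the "design" is hundreds of automatically learned numbers,
# generated by training rather than written by a human engineer.
print("First-layer weights (first 2 rows):\n", net.coefs_[0][:2])
```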
Taking autonomous driving as an example, generative AI models can generate virtually any driving scenario human beings can imagine, including scenarios in extreme conditions. The emergent abilities of large generative AI models (emergence meaning that large models develop unexpected new abilities as they scale) contribute to the models’ “chain of thought” ability, demonstrating complex reasoning similar to that of human beings. Reinforcement Learning from Human Feedback (RLHF) enables an autonomous driving system to correct itself and improve continuously. Will the reasoning ability and self-improvement of autonomous driving systems eventually give rise to a driving decision-making mechanism that becomes uncontrollable by human beings and jeopardizes driving safety? This will pose great challenges to the legislation and regulation of generative AI and its application scenarios14. It recalls another proposal published by the European Commission in April 2021 regarding machinery products related to driving systems15, which provides for the liability of the manufacturers, agents, importers and distributors of such products.
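For readers unfamiliar with RLHF, the heavily simplified sketch below reduces the idea to a toy choice among four hypothetical “maneuvers”: a policy proposes behaviors, a simulated human rater states preferences, and the policy is nudged toward the preferred behaviors. Real RLHF fine-tunes a large model, typically with PPO against a learned reward model; every name and number here is our own illustrative assumption.

```python
# Heavily simplified sketch of an RLHF-style loop, reduced to a bandit
# over four hypothetical driving "maneuvers". All values are invented.
import numpy as np

rng = np.random.default_rng(0)
candidates = ["brake", "swerve", "slow_and_steer", "ignore"]
logits = np.zeros(len(candidates))  # policy parameters

def simulated_human_preference(i, j):
    """Stand-in for a human rater who prefers cautious maneuvers."""
    safety = {"brake": 0.9, "swerve": 0.4, "slow_and_steer": 1.0, "ignore": 0.0}
    return i if safety[candidates[i]] >= safety[candidates[j]] else j

score = np.zeros(len(candidates))   # crude per-maneuver reward estimate
count = np.zeros(len(candidates))

for _ in range(500):
    # 1. The policy samples two candidate behaviors.
    probs = np.exp(logits) / np.exp(logits).sum()
    i, j = rng.choice(len(candidates), size=2, replace=False, p=probs)
    # 2. "Human" feedback: which of the two behaviors is preferred.
    winner = simulated_human_preference(i, j)
    loser = j if winner == i else i
    # 3. Update the reward estimates from the stated preference.
    score[winner] += 1.0; count[winner] += 1
    score[loser] -= 1.0; count[loser] += 1
    # 4. Nudge the policy toward behaviors with higher estimated reward.
    logits += 0.01 * (score / np.maximum(count, 1))

probs = np.exp(logits) / np.exp(logits).sum()
print({c: round(float(p), 3) for c, p in zip(candidates, probs)})
```

After a few hundred rounds of feedback, the policy concentrates probability on the maneuvers the rater preferred, which is the self-correcting dynamic the paragraph above describes.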
III. Comments on the Regulation of Generative AI in Major Markets
1. China
According to the current legislation and the Draft CAC Measures, the cyberspace administration authorities play the leading role, working with the telecommunications, public security, market regulation and other authorities in regulating generative AI. When generative AI tools are later integrated into downstream products in sectors such as entertainment, healthcare and smart cars, the authorities regulating those industries may also become involved in the regulation of generative AI pursuant to the relevant laws and regulations.
In terms of regulatory focus areas, the Draft CAC Measures are applicable only to products made available to the public in China and, for the time being, in order to encourage innovation, do not impose many restrictions on generative AI products that are still at the laboratory training or non-public trial operation stage. In light of the threats posed by the nature of data sources, algorithms and models discussed above, the intent behind the wording of the Draft CAC Measures is easy to understand: ensuring that data is legitimately obtained and is true and objective; safeguarding the legitimate rights and interests of the data subjects of personal information; disclosing data descriptions, manual labeling rules, underlying algorithms and technical systems; requiring that generated content not be false, discriminatory or infringing; adopting content filtering methods and optimizing training methods; and completing the requisite government procedures, such as Internet information service security assessments and algorithm filings.
However, even though generative AI products at the laboratory training or non-public trial operation stage face few legal limitations, developers should still abide by the laws and regulations on cyber security and data privacy protection and make adequate compliance plans so that their products can later be released onto the market.
2. US
The US government agencies involved in the development and regulation of AI systems (including generative AI systems) include the OSTP, the National Artificial Intelligence Initiative Office (NAIIO), NIST and the NTIA. The sectoral regulation of AI systems is reflected in official documents issued by different governmental agencies over the past 10 years, such as the Federal Reserve’s request for views on financial institutions’ use of AI, including machine learning, and the Artificial Intelligence and Algorithmic Fairness Initiative launched by the Equal Employment Opportunity Commission (EEOC).
In terms of regulatory focus areas, policymakers in the US, like those in other countries, try to strike a balance between encouraging the development of generative AI and ensuring the trustworthiness of AI. Among the challenges facing policymakers are tradeoffs between trustworthy AI goals, barriers to implementing accountability mechanisms, complex AI lifecycles and value chains, and difficulties with standardization and measurement. At the end of April 2023, the FTC, the Consumer Financial Protection Bureau (CFPB), the Civil Rights Division of the Department of Justice and the EEOC issued a joint statement specifying three regulatory focus areas in connection with the potential discrimination risks of AI: datasets that may be biased, the transparency of large models, and flawed assumptions in system design16.
In terms of regulatory measures, the Request for Comment issued by the NTIA on April 13, 2023, mentions conformity assessments, bias audits, algorithmic impact assessments, and “black box” adversarial audits. Our initial understanding is that such technical measures are also applicable to generative AI systems that are at the laboratory training stage.
3. EU
In 2018, the European Commission set up a High-Level Expert Group on Artificial Intelligence (“AI HLG”) under the European Strategy for Artificial Intelligence. The AI HLG is divided into two working groups, which focus, respectively, on draft AI ethics guidelines and on policy and investment strategy17, and its general objective is to support the implementation of the EU strategy on AI. In addition, the AI HLG engages with the European AI Alliance and EU member states in the regulation of AI.
Although there has been no EU legislation specific to generative AI, the EU’s thinking in this respect is noteworthy. The AI HLG proposed in a document published in July 202018 that AI should be subject to different degrees of regulation in different sectors (such as the public sector, healthcare, manufacturing and the Industrial Internet of Things), a sectoral perspective similar to the risk-based regulation of generative AI under the proposed EU Artificial Intelligence Act.
In addition to sectoral regulation, there will be stage-based regulation. The European Commission is contemplating that future legislation should allocate liability to the person best placed to control the risk: for example, the developer of an AI system while the system is at the development stage, and the deployer after the system is put into use, while providing remedies for claimants acting on behalf of end users and for victims of damage caused by AI systems. Similar views have been put forward in the EU academic community: developers who pre-train models, deployers who fine-tune them for specific use cases, professional and non-professional users who actually generate output in those use cases, and recipients of output, such as consumers exposed to AI-generated advertisements or products, should be subject to different levels of regulation and receive different levels of protection19. Such regulation would require a careful balance between technological innovation and the protection of user safety, and we look forward to the public reaction after the promulgation of the relevant legislation.
1. https://truthgpt.gitbook.io/truthgpt-whitepaper/ai-chatbot/overview
2. Frankenstein is a novel by Mary Shelley, first published in 1818. It tells the story of Victor Frankenstein, a scientist who creates a monster from corpses and brings it to life. The monster is initially kindhearted but, distorted by people’s revulsion toward him, runs amok and kills innocent people. In the end, the monster and its creator are both destroyed.
3. https://www.federalregister.gov/documents/2023/04/13/2023-07776/ai-accountability-policy-request-for-comment#footnote-80-p22439
6. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
7. ChatGPT Sparks Debate on AI Regulation in Western Countries (《ChatGPT引发欧美关于人工智能监管的辩论》), https://mp.weixin.qq.com/s/SqvHM8nQbwqJ-ecQH8F7Dg
8. The Ultimate Guide to Understanding and Using AI Models (2023), https://viso.ai/deep-learning/ml-ai-models/
9. Mixlab Frontier: How Can AI-Driven Generative Design Be Applied to Intelligent Automotive Construction? (《AI 驱动的生成式设计，如何应用于汽车智能建造？》), https://mp.weixin.qq.com/s/FoF18PXpNtBOGLNXuyXGaA
10. Katharina Koerner, CIPP/US, Generative AI: Privacy and Tech Perspectives, https://iapp.org/news/a/generative-ai-privacy-and-tech-perspectives/
11. Basic Concepts and Causes of Algorithmic Black Boxes (《算法黑箱基本概念及成因》), Department of Supervision and Management of Network Transactions, State Administration for Market Regulation, https://www.samr.gov.cn/wljys/ptjjyj/202112/t20211210_337980.html
12. https://www.caidp.org/cases/openai/
13. See footnote 11.
14. Generative AI in the Field of Autonomous Driving, Represented by ChatGPT (《以ChatGPT为代表的生成式AI在自动驾驶领域的应用》), Shengyu Liu and Rongliang He, CCID Research Institute, https://www.cnii.com.cn/rmydb/202304/t20230423_465226.html
15. Proposal for a Regulation of the European Parliament and of the Council on Machinery Products, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0202
16. Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf
17. Concept Note: The High-Level Expert Group on Artificial Intelligence, https://ec.europa.eu/futurium/en/system/files/ged/concept_note_on_the_ai_hlg_0.pdf
18. Sectoral Considerations on the Policy and Investment Recommendations, https://futurium.ec.europa.eu/sites/default/files/2020-07/Sectoral%20Considerations%20On%20The%20Policy%20And%20Investment%20Recommendations%20For%20Trustworthy%20Artificial%20Intelligence_0.pdf
19. Philipp Hacker, Andreas Engel and Marco Mauer, Regulating ChatGPT and Other Large Generative AI Models, https://blogs.law.ox.ac.uk/blog-post/2023/03/regulating-chatgpt-and-other-large-generative-ai-models