What You Need to Know
- Voluntary commitments have been established to set ethical, regulatory, and legal standards for AI systems while maximizing benefits and mitigating risks.
- Binding AI legal frameworks are seen as increasingly crucial for safeguarding individual rights and ensuring safe technology development.
- Governmental actions, including executive orders and legislation, are being pursued globally to advance safe and accountable AI, with the voluntary commitments playing a significant role in these efforts.
On July 21, 2023, the Biden administration announced that seven companies leading the development of artificial intelligence (AI) — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI — had made voluntary commitments, which they agreed to undertake immediately, to help move toward the safe, secure, and transparent development of AI technology. The voluntary commitments, informally dubbed the “AI Agreement,” aim to establish a set of standards that promote the principles of safety, security, and trust deemed fundamental to the future of AI.
The AI Agreement is composed of commitments organized around three key principles:
- Ensuring Products are Safe Before Introducing Them to the Public – The companies committed to conduct internal and external security testing of their AI systems before release, and to share information on managing AI risks across the industry and with governments, civil society, and academia;
- Building Systems that Put Security First – The companies committed to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased “model weights,” which are described in the administration’s release as “the most essential part of an AI system,” and to facilitate robust third-party discovery and reporting of vulnerabilities in AI systems; and
- Earning the Public’s Trust – The companies committed to develop robust technical mechanisms that notify users when content is AI-generated, to publicly report their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, to prioritize research on societal risks that AI systems can pose (e.g., bias, discrimination, and privacy infringement), and to develop and deploy advanced AI systems to help address society’s greatest challenges.
As part of its announcement, the Biden administration noted that the United States continues to engage with its allies and partners to harmonize AI-related efforts, and that it consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
Key Takeaways
- The voluntary commitments are another step in the ongoing process of establishing ethical, regulatory, and legal standards that seek to mitigate the risks of AI systems while simultaneously harnessing their extensive benefits. The announcement comes as governments around the world try to respond to the rapid development and deployment of AI, and as concerns about the technology’s potential harms remain at the forefront of discussions among AI developers, users, and policymakers.
- In an era where the world has never been more interconnected, establishing binding AI legal frameworks is considered by many stakeholders to be increasingly crucial – both for the protection of individual rights and freedoms and for the ability of those developing and implementing the technology to do so safely and effectively. The AI Agreement will likely serve as a foundation for future engagement with foreign governments and policymakers, as companies developing and deploying AI will be required to have policies and procedures in place that comply with other laws and regulations, such as the existing General Data Protection Regulation (GDPR) and the upcoming AI Act in the European Union.
- While the voluntary commitments will help shepherd the responsible innovation of AI, further governmental action is sure to come. The Biden administration has already confirmed that it is drafting an executive order to advance safe, secure, and trustworthy AI and is pursuing legislation to establish a legal and regulatory regime to further control the technology. In Congress, U.S. Senator Chuck Schumer (D-NY) has introduced the SAFE Innovation Framework, which calls for security and accountability in AI systems. Globally, the G7 countries recently announced the Hiroshima AI Process at this year’s summit in Japan, which will focus in part on how to harmonize various countries’ approaches to AI governance. Regionally, Singapore’s Minister for Communications announced last month the launch of the AI Verify Foundation, which will harness the collective power and contributions of the global open source community to develop AI testing tools for the responsible use of AI. The United Kingdom also announced last month that it will host an international summit on AI safety before the end of the year. The voluntary commitments secured by the Biden administration from seven leading AI companies are likely to feature prominently in all of these discussions.
For further information, please contact:
Michael K. Atkinson, Partner, Crowell & Moring
matkinson@crowell.com