In brief
- The Australian Government has launched a public consultation on its proposal to introduce mandatory guardrails around high-risk AI systems and models. Consultation closes at 5pm AEST on Friday 4 October 2024.
- In the interim, the Government has also released voluntary standards (which largely mirror the mandatory guardrails) to guide organisations on how to develop and deploy AI systems safely and responsibly, in line with existing international standards, and to position themselves for a smoother transition once the mandatory requirements come into effect.
Background
On 5 September 2024, the Australian Federal Government (the Government) released two key documents as part of its broader agenda to promote the safe and responsible use of artificial intelligence (AI) in Australia:
- a proposal paper for introducing mandatory guardrails for AI in high-risk settings (the Proposal); and
- the Voluntary AI Safety Standards (the Voluntary Standards).
The Proposal sets out:
- a proposed definition of high-risk AI;
- ten proposed mandatory guardrails for the development and deployment of AI in high-risk settings in Australia; and
- regulatory options for mandating these guardrails (e.g. adapting existing regulatory legislation, or creating new frameworks, potentially through framework legislation or by introducing an Australian AI Act).
The Proposal is now open to public consultation, closing at 5pm AEST on Friday 4 October 2024.
The Voluntary Standards provide practical guidance to all Australian organisations on how to use and innovate with AI safely and responsibly. These standards may be adopted immediately and are intended to “give businesses certainty ahead of implementing mandatory guardrails” (Ministerial press release).
These documents follow the Government’s interim response (January 2024) to the Safe and Responsible AI in Australia discussion paper (Initial Consultation), which called for a regulatory environment that builds community trust and promotes AI adoption. Other key recent initiatives include the National AI Assurance Framework (which provides a nationally consistent approach to AI assurance across federal, state and territory governments) and the Policy for Responsible Use of AI in Government (which requires federal agencies to appoint accountable officials to implement AI policies and to publish transparency statements outlining their approach to AI adoption), though both target the public sector.
Mandatory AI guardrails proposal
Risk-based approach and “high-risk AI”
The Proposal focuses on “high-risk AI” as the subject of the proposed mandatory guardrails.
This approach reflects the Government’s observations (as advised by an expert advisory group established as part of the Initial Consultation process) that “AI has characteristics, as distinct from other types of software programs, that warrant a specific regulatory response” and that the various existing and new risks amplified by AI (e.g. bias and discrimination, misinformation and privacy breaches) call for a “risk-based approach, with a focus on ex ante (preventative) measures”. The Proposal notably draws on examples from the European Union and Canada (Proposal, pp 11-12, 16).
In terms of defining “high-risk AI”, the Proposal suggests two broad categories where the mandatory guardrails would apply:
- “High risk” arising from the known or foreseeable uses of an AI system or general-purpose AI (GPAI) model, i.e. where the risk is determined by reference to the context in which the AI system will be used or the foreseeable applications of the AI system or GPAI model. An AI system is defined as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
- “High risk” relating to “advanced, highly capable GPAI models” where all possible applications and risks cannot be foreseen (i.e. the risk lies in the potential for these models to be used – or misused – for a wide range of purposes with emergent risks). A GPAI model is “an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems.”
In relation to the first category, the Proposal sets out a principles-based definition as follows:
In designating an AI system as high-risk due to its use, regard must be given to:
- (a) The risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations
- (b) The risk of adverse impacts to an individual’s physical or mental health or safety
- (c) The risk of adverse legal effects, defamation or similarly significant effects on an individual
- (d) The risk of adverse impacts to groups of individuals or collective rights of cultural groups
- (e) The risk of adverse impacts to the broader Australian economy, society, environment and rule of law
- (f) The severity and extent of those adverse impacts outlined in principles (a) to (e) above.
The severity and extent of impact of the identified risks will be weighed to determine whether the system is in fact ‘high risk’.
On the second category, the Proposal does not set out separate risk criteria or a threshold for GPAI models. Instead, it suggests applying the mandatory guardrails to the development and deployment of all GPAI models (as distinct from a specific application or use of a GPAI model, which may fall within the first category above), given that such models can be applied in contexts they were not originally designed for (i.e. unforeseeable risks). That said, the Proposal acknowledges that “[s]ince most highly capable GPAI models are not currently developed domestically, Australia’s alignment with other international jurisdictions is important to reduce the compliance burden for both industry and government and enables pro-innovation regulatory settings” (Proposal, p 29).
Developers and deployers
The Government observes that both AI developers and AI deployers will need to adhere to the guardrails, with responsibility assigned to the parties best placed to manage the relevant risks at each stage of the AI lifecycle, considering factors such as access to key information (for example, training data) and the ability to intervene in and modify an AI system.
For entities deploying AI procured from a supplier, it is worth noting that the Voluntary Standards (see below) include high-level procurement guidance to help deployers align their arrangements with the standards.
Proposed mandatory guardrails
The Proposal sets out ten guardrails that would require organisations developing or deploying high-risk AI (under either category described above) to:
- establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
- establish and implement a risk management process to identify and mitigate risks;
- protect AI systems, and implement data governance measures to manage data quality and provenance;
- test AI models and systems to evaluate model performance and monitor the system once deployed;
- enable human control or intervention in an AI system to achieve meaningful human oversight;
- inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;
- establish processes for people impacted by AI systems to challenge use or outcomes;
- be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
- keep and maintain records to allow third parties to assess compliance with guardrails; and
- undertake conformity assessments to demonstrate and certify compliance with the guardrails.
In addition to explaining each guardrail, the Proposal states that:
- these guardrails are designed to complement, not exempt organisations from, Australia’s existing legal frameworks;
- these guardrails will need to be adaptable to remain fit for purpose as AI technology continues to evolve and become more autonomous; and
- it is critical that AI developers and deployers engage openly with stakeholders across the AI supply chain and the AI lifecycle. This engagement should begin at the earliest possible stage and should seek to identify and document which key stakeholder groups may be impacted by use of the system.
Regulatory options
The Proposal sets out three options for implementing the above mandatory guardrails:
- Domain-Specific Approach: Adapting existing regulatory frameworks to include the proposed guardrails.
- Framework Approach: Introducing new framework legislation with amendments to existing laws.
- Whole of Economy Approach: Enacting a new cross-economy “Australian AI Act”.
The Proposal discusses the advantages and disadvantages of each option and invites public commentary on them.
Separately from these options, the Proposal also states that the Government will “continue to strengthen and clarify existing laws so it is clearer how they apply to AI systems and models” (e.g. privacy, consumer protection, intellectual property, anti-discrimination and competition laws) (Proposal, p 43).
Voluntary AI standards
Along with the Proposal, the Government also released the Voluntary Standards, which replicate the mandatory guardrails except for the tenth standard: instead of conformity assessments (the focus of the tenth mandatory guardrail), it focuses on “stakeholder engagement” to “evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness”.
Further, the Voluntary Standards:
- are designed to help adopting organisations create a foundation for safe and responsible AI use and to comply with potential future regulatory requirements in Australia;
- provide procurement guidance to help entities understand how to work with suppliers of AI models or systems to ensure alignment with the Voluntary Standards;
- are not one-off activities but ongoing practices for organisations; the standards may contain organisation-level obligations (to create the required processes) and system-level obligations (applying to each use case or AI system);
- promote interoperability and reflect emerging international practice by aligning with international standards, including ISO/IEC 42001:2023 and the US National Institute of Standards and Technology (NIST) AI Risk Management Framework 1.0; and
- are only in their first iteration and will be updated over time.
What this means for you
Together, the Proposal and the Voluntary Standards signal the Government’s intention to provide regulatory clarity and certainty for those developing AI models and systems, and to start empowering organisations to safely manage their use of this emerging technology.
Although the mandatory guardrails are still under consultation, organisations should strongly consider adopting the Voluntary Standards now to get a head start in building the internal capability needed to manage innovation with AI, a technology that is here to stay. Organisations should also monitor updates to the Voluntary Standards, as these are likely to mirror any amendments made to the mandatory guardrails during the consultation process.
Even if the mandatory guardrails do not come into force, they broadly reflect existing international practices that organisations should be following in any case to ensure the safe and responsible development and deployment of AI (alongside robust data governance, privacy measures and cybersecurity protocols). Adopting such practices can also help build consumer trust and a competitive advantage in the market.
For further information, please contact:
Julian Lincoln, Partner, Herbert Smith Freehills
Julian.Lincoln@hsf.com