What do you need to know?
Artificial Intelligence (AI) is rapidly transforming the business landscape, with significant growth projected in the coming years. This presents opportunities for Australian businesses, alongside potential regulatory and dispute risks.
The pace of technology development has left governments and regulators rushing to keep up. While technology regulation continues to develop to respond in a more focused way to threats surfacing from the use of technology and AI, we can expect regulators and plaintiff firms to deploy the range of tools already in their arsenal, including bread-and-butter claims like misleading and deceptive conduct, to pursue action for any perceived bad consumer or shareholder outcomes arising from the use of AI. One particular species of regulatory and class action risk that we see on the near horizon is claims for AI washing: exaggerating AI capabilities to attract investment and market interest, which can lead to liability for misleading conduct under existing laws.
Beyond regulatory and class action risk, we also see breach of confidentiality through the use of AI products as another area of risk that should be closely monitored. You can also explore more on the disputes that can arise in relation to copyright and patent infringement in our IP in AI series.
These cases will raise new challenges – if the AI truly is a black box, how will the Court determine what the AI did and why, and whether anyone within a corporation was or could have been aware of the problem that was about to arise?
AI and its ever-increasing use
AI encompasses a range of technologies capable of performing tasks that typically require human intelligence. According to a CSIRO report from May 2023, 68% of Australian businesses have already implemented AI technologies, with another 23% planning to do so within the next year.
AI technologies can be categorised based on their level of human intervention:
- Assisted AI: supports administrative functions without making decisions (e.g. smart TV recommendations based on viewing habits);
- Augmented AI: enhances human decision-making without making fully autonomous decisions (e.g. content generators like ChatGPT); and
- Autonomous AI: learns and makes decisions independently, though this is still relatively rare.
What AI systems have in common is that they are all fed ‘input data’ or instructions and, in the case of augmented and autonomous AI, the system ‘learns’ from the responses it receives without being expressly programmed to do so. This raises what is commonly referred to as the black box problem: a machine learning model may deliver an output without it being possible to explain how the inputs produced that output, given the complex, high-dimensional patterns the model relies on.
Will the current legal landscape respond to any perceived consumer harm or other alleged “bad” AI outcomes?
In short, yes.
- Plaintiff law firms and regulators will seek accountability: if there is a perception that consumers have been harmed, you can expect that regulators and plaintiff law firms will not simply walk away because “my robot did it” – the ACCC has said so expressly, and experience with ASIC shows that the use of technology by financial services providers does not impede investigations or enforcement actions. In fact, in the current environment, perceived poor outcomes arising from companies’ use of AI are likely to attract greater regulatory scrutiny and enforcement action.
- Existing legal frameworks can respond: numerous existing causes of action may apply to perceived negative outcomes arising from AI use. Regulators and plaintiff firms will use a range of tools to pursue actions they consider worthwhile, regardless of whether the issue was AI- or system-generated.
An example is the Robodebt Scheme, in which an automated debt recovery system miscalculated debts, leading to a class action that resulted in significant compensation for affected individuals. The causes of action that formed the basis of the plaintiffs’ claim were negligence and unjust enrichment.
If AI is used for automated decision-making and an alleged or perceived negative outcome occurs, even under the current legal framework, regulatory intervention is possible and, if consumer harm is involved, a class action may follow.
AI washing
Class action risk might also arise in relation to what is known as AI washing.
AI washing refers to the practice of exaggerating AI capabilities to attract investment and market interest. It is not dissimilar to greenwashing, the subject of recent climate disclosure cases.
This can lead to potential liability for misleading conduct under existing technology-neutral laws, such as the ASIC Act, Corporations Act, and Australian Consumer Law. The black box issue exacerbates these concerns.
We have already seen an uptick in regulatory and class action activity globally regarding false AI claims. The U.S. Securities and Exchange Commission (SEC) has taken the lead, conducting a market sweep in 2023 and 2024 to review investment advisers’ use of AI and penalising three firms for misleading AI statements. The SEC plans to increase scrutiny of financial firms’ use of AI in 2025, with this year on track for the largest number of new filings related to such conduct.
In Australia, ASIC’s chair, Joe Longo, has confirmed that AI washing is a serious emerging issue, and all Australian companies and their directors should be on notice. Recently, ASIC commenced criminal proceedings against the CEO of Metigy, which falsely claimed to be an AI marketing company.
Beyond AI washing, the use of AI systems may give rise to other misleading statement risks. AI systems can generate incorrect outputs, creating risks of misleading or deceptive conduct. For example, in Canada, an airline chatbot inaccurately explained the airline’s bereavement travel policy to a grieving passenger. Air Canada argued that it was not liable for information provided by its chatbot, but a tribunal found in favour of the passenger.
To mitigate these risks, organisations should:
- conduct careful reviews of marketing collateral and public statements about AI use. AI washing and misleading conduct are not new forms of misconduct; lessons learned from greenwashing apply here;
- ensure claims are substantiated by evidence and records;
- address the product’s entire lifecycle in claims; and
- weigh the benefits of making statements against potential risks. When in doubt, seek a second opinion.
Confidential information
Some AI tools, including content generators like ChatGPT, invite users to input information, documents and prompts into the tool. This can create risks relating to the use of third-party confidential information and to maintaining the quality of confidence in the information of users and their organisations.
If users input a third party’s confidential information into these AI tools, this could amount to an actionable breach of existing confidentiality obligations owed to that third party.
Practically, AI providers may also retain a copy of information inputted by users, which carries the potential for that information to be leaked or disclosed to others without the user’s knowledge or control. ChatGPT’s standard configuration, for example, retains all conversations. If users input their organisation’s confidential information into an AI tool that retains inputs, there is a risk that control of the confidential information could be lost. This could cause the confidential information to lose its quality of confidence and, with it, its value. A well-known example involved Samsung employees inputting valuable source code relating to microchips into ChatGPT.
Another significant concern regarding use of confidential information with AI is the potential for sensitive intelligence and valuable intellectual property to become available to threat actors.
To mitigate the risks of misuse of confidential information it will be important to:
- implement and monitor adherence to internal policies regarding the use of confidential information with AI;
- conduct employee training to raise knowledge and awareness, and minimise the risk of confidential information being used in anything other than ‘closed system’ or private instance AI tools;
- review licensing arrangements with third parties to identify whether use of confidential information with AI is permitted and if so, any limitations on such use; and
- if your organisation uses confidential information to train or fine-tune AI tools within a closed system, consider making those tools available only to authorised personnel.
Future regulations
The Australian Federal Government has acknowledged that existing laws do not sufficiently protect against AI-related harms, prompting a comprehensive review of regulations governing AI. Currently, there is no specific law regulating AI use, and while various voluntary ethical principles exist, recent consultations aim to address this gap. The focus will likely be on high-risk AI applications, with multiple regulators involved in overseeing these reforms.
As regulatory measures increase, so too does the expectation of heightened litigation risk, particularly in regulatory enforcement. This trend mirrors the rise in cyber class actions, suggesting that as regulatory scrutiny intensifies, private litigation risks will also grow. Key areas of reform include mandatory guidelines for high-risk AI, a review of the Australian Consumer Law as it applies to AI-enabled products, and proposed changes to the Privacy Act, which may require disclosure of automated decision-making processes. While the specifics of these reforms are still developing, they are anticipated to create significant regulatory and litigation challenges.
Although it is early days, the evolving regulatory regime is likely to heighten litigation risk. In particular, if individuals are given a right of action under the Privacy Act reforms, we may see a consequent uptick in class action risk. This remains very much a “watch this space” area.
Insurance considerations for organisations using AI
Given the expected heightened risks, the question arises – how do organisations insure against those risks?
- Emerging AI insurance policies: specific affirmative AI insurance policies are still emerging. For instance, one insurer has introduced a product to cover losses when an AI model fails. Additionally, new startups are developing product liability policies for AI developers, indicating a growing opportunity in this area.
- Evaluating existing insurance programs: organisations should assess how their existing insurance coverage addresses AI-related risks. Directors and officers insurance can provide cover for claims relating to management’s failure to exercise appropriate oversight and, if purchased, shareholder class actions related to AI products. Professional indemnity insurance is particularly relevant, as it may cover professional services delivered with the help of AI, though the policy terms require careful consideration. Property damage policies may respond if AI systems cause physical harm, such as damage caused by a malfunctioning AI-controlled thermostat.
- Encouraging better AI practices: there is hope that insurance will promote better AI practices. As insurers assess and price the risks associated with AI, organisations may be incentivised to implement safer practices to lower their own insurance premiums. Overall, while the field is still developing, organisations are urged to prepare for the evolving landscape of AI and insurance.
For further information, please contact:
Tania Gray, Partner, Herbert Smith Freehills
Tania.Gray@hsf.com