Legal Implications of AI Deployment by Boards
The use of artificial intelligence by companies is both inevitable and fraught with challenges and risks, particularly for their boards of directors. Investors and competitive pressures call for it, as it promises to enhance efficiency and performance, whilst regulators and stakeholders (employees and communities, but also investors, who need to evaluate performance) want to know and understand how AI informs companies' decision-making processes. For boards, the use of AI is therefore an issue of both transparency and liability.
The potential for the use of AI in companies is unlimited: recruitment, customer profiling and credit decisions are well-known examples, but AI can more generally underlie every single corporate decision-making process. Every decision is, in the end, the responsibility of the directors, who face the dilemma of opportunities versus risks (and their mitigation).
The risks are not intrinsically new for directors: employees and suppliers make mistakes, accidents (whether industrial or technological) happen, and the standards by which directors are judged are well established. The difference with AI lies principally in its power and its relative independence from the subject matter to which it is applied: a black hole for directors. What data has been used, and what bias could have affected the outcome? In turn, what erroneous decision has been taken, and what loss (financial or reputational) has been wrongly caused to the company or to third parties? Finally, who is accountable, and under what standard?
In the United Kingdom and France, existing corporate and fiduciary duties form the basis for evaluating board responsibilities. The Companies Act 2006[1] sets out the statutory duties of directors, including the duty to promote the success of the company, acting in good faith,[2] and the duty to exercise reasonable care, skill and diligence. Similarly, the French Code de commerce[3] and the case law refer to the duty to act in the best interests of the company, a standard that incorporates prudence, diligence and loyalty.
As with all other matters which directors cannot be expected to know personally or control directly, directors must be able to understand where and how AI tools are applied to the company's operations, and what their inherent limitations and associated risks are. Directors must also be able to explain where and how those tools have been used, in order to demonstrate adequate supervision of both the deployment and the use of AI (in the event of a challenge, but also to regulators, where relevant: listing authorities and Environmental, Social and Governance reporting, for example). Just as 'my dog ate my homework' never did, 'the algorithm told us to do it' will not absolve directors of responsibility.
It is therefore an issue of both liability and transparency. Of the two, transparency may be the easier to comply with. Companies (large companies certainly) have become well versed in reporting on their ESG, CSRD[4] and GDPR-type[5] obligations and can, in the same manner, add an element of AI reporting[6]. Liability as it relates to AI raises, however, an additional layer of difficulty. Practically, boards may mandate AI risk assessments, implement AI governance policies and training at all levels of operations, and ensure proper oversight of third parties (e.g. suppliers and service providers). Yet there is an element of unverifiable outcome intrinsic to AI which may be difficult to protect against; AI-inspired decisions will nevertheless be judged under traditional legal standards of directors' liability, particularly where automated decision-making impacts individuals.
Failure to implement an adequate framework and (where required) to make proper disclosures could result in regulatory sanctions, shareholder litigation (if the breach harms the company's reputation or value), other stakeholder litigation (where they allege a breach harmed them) and ultimately directors' and officers' liability. This in turn is a pressing matter for legal advisors and insurers, as negligence and regulatory breaches could combine in complex disputes. AI-related risk is not a banal IT matter; it goes to the core of legal and fiduciary risk management.
New regulatory initiatives are likely to increase duties and the scope for liability. In the European Union, the AI Act of June 2024 categorises AI systems according to risk and imposes additional compliance obligations on high-risk applications, with a particular focus on providers, importers, distributors and deployers of AI. The UK is developing a principles-based framework for existing regulators to interpret and apply within their sector-specific domains.
In Malaysia, despite the absence (so far) of a dedicated AI statute, the regulatory approach reflects an increasingly proactive stance through initiatives such as the National AI Roadmap[7] and data governance principles issued by the Malaysian Communications and Multimedia Commission.
AI is rapidly transforming corporate life, and the lesson is clear: it is a new source of corporate liability; it changes how decisions are made, but it does not change who is ultimately accountable.
For further information, please contact:
Pierre Brochet, Azmi & Associates
pierre.brochet@azmilaw.com
[1] Companies Act 2006, section 172 (duty to promote the success of the company) and section 174 (duty to exercise reasonable care, skill and diligence).
[2] Having regard also to matters other than merely those of its members.
[3] Articles L223-22 and L225-25 of the Code de commerce, as well as pursuant to the Loi PACTE of 2019 (social interest).
[4] Corporate Sustainability Reporting Directive (EU) 2022/2464.
[5] General Data Protection Regulation (EU) 2016/679.
[6] See SAP Integrated Report 2024, pages 213 et seq. (Responsible AI).
[7] The National Guidelines on AI Governance and Ethics, September 2024.