1 June 2019
Artificial intelligence (AI) is a rapidly emerging field, and regulators are considering how its development and use should be legally and ethically controlled. We report two key developments, one in Europe and one in Australia.
Europe's Ethics Guidelines
On 8 April 2019, the High-Level Expert Group on Artificial Intelligence (AI HLEG) published the 'Ethics Guidelines for Trustworthy Artificial Intelligence' (Ethics Guidelines). The Ethics Guidelines set out three components of Trustworthy AI, grounded in the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union. The AI HLEG maintains that all three components should be met throughout an AI system's life cycle: AI should be:
- lawful, complying with all applicable laws and regulations;
- ethical, adhering to ethical principles and values; and
- robust, implemented in a safe, secure and reliable manner.
The AI HLEG contends that conflicts between the three components are inevitable (for example, the scope of a particular law may exceed ethical boundaries). Ideally, all three components should work together to create a constructive framework for the deployment of AI systems.
Structure of the Ethics Guidelines
According to the AI HLEG, the Ethics Guidelines are intended to assist all stakeholders designing, developing, deploying, implementing, using or affected by AI. Stakeholders can voluntarily opt in to using the Guidelines as a method and framework for the development, use and operation of their AI systems. The Ethics Guidelines are organised into three core chapters:
- Chapter I identifies the relevant ethical principles and values that must be adopted in the development and operation of AI systems. The Chapter outlines the foundations of Trustworthy AI, by reflecting on four key 'ethical imperatives' that should be adhered to: 1) respect for human autonomy; 2) prevention of harm; 3) fairness; and 4) explicability. The Chapter also acknowledges the impact of AI on more vulnerable groups such as children and persons with disabilities.
- Chapter II translates the ethical principles identified in Chapter I into seven key requirements that should be implemented throughout the AI system's life cycle. The non-exhaustive list includes: 1) human agency and oversight; 2) technical robustness and safety; 3) privacy and data governance; 4) transparency; 5) diversity, non-discrimination and fairness; 6) societal and environmental wellbeing; and 7) accountability. Each requirement will be given different weight depending on the context, the type of AI system, and the stage of the system's life cycle.
- Chapter III provides a non-exhaustive Trustworthy AI assessment list for the requirements outlined in Chapter II.
The AI HLEG is currently inviting all stakeholders to pilot and provide feedback on the Trustworthy AI assessment list in Chapter III, which will be revised in early 2020. The piloting phase will commence in June 2019 and continue until the end of the year.
Australia's Proposed Ethics Framework (A Discussion Paper)
On 5 April 2019, the CSIRO's Data61 released a consultation paper proposing eight principles to guide developers, industry and government throughout Australia (Proposed Ethics Framework). The Proposed Ethics Framework outlines foundational steps towards mitigating the risks of AI by placing ethical considerations in the context of current AI technologies. As with Europe's Ethics Guidelines, the Department of Industry, Innovation and Science (DIIS) notes that Australia's Proposed Ethics Framework is not intended to provide legal guidance or to substitute for the law. Instead, the paper proposes a framework that highlights prevalent and anticipated ethical issues arising from the use of AI technologies in Australia. The following eight principles are proposed as an ethical framework:
- The AI system must generate benefits that are greater than its costs;
- Civilian AI systems must not harm or deceive people and should be operated in ways that minimise any negative consequences;
- The AI system must comply with all relevant international and Australian local, state/territory and federal government obligations, regulations and laws;
- All AI systems must ensure that private data is protected and kept confidential, and must prevent data breaches that could cause harm to people;
- The development and use of AI systems should not result in unfair discrimination against individuals, communities or certain groups. Training data should be free from bias or characteristics which may cause the algorithm to behave unfairly;
- People should be informed when an algorithm affects them, what information the algorithm uses, and how it uses that information to make decisions;
- When an algorithm affects a person, there must be a process which allows that person to challenge the use or output of the algorithm; and
- People and/or organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm.
As part of this consultation, the DIIS welcomes written submissions; the submission period closes on Friday, 31 May 2019.
Putting Principles into Practice
Algorithmic impact assessments (AIAs) are designed to assess the potential impact an AI system may have on the public. Adopting auditable impact assessments will help to encourage accountability and to ensure that ethical principles are considered and addressed before an AI system is used. AI systems should be assessed for risks on an ongoing basis, particularly where AI is used in relation to vulnerable populations and minorities.
For further information, please contact:
Patrick Fair, Partner, Baker & McKenzie
patrick.fair@bakermckenzie.com