Introduction
The development and adoption of Artificial Intelligence (“AI”) has seen a global surge in recent years. It is estimated that AI has the potential to add USD 957 billion, or 15 per cent of current gross value added, to India’s economy by 2035. The AI software market is projected to reach USD 126 billion by 2025, up from USD 10.1 billion in 2018. AI is increasingly being applied to a variety of private and public uses, and its usage is expected to become ingrained in and integrated with society.1
In India, large-scale applications of AI are being implemented across sectors such as healthcare, agriculture, and education to unlock the potential of these sectors. In February 2021, the NITI Aayog released an approach document proposing principles for ‘responsible AI’ development (“Approach Document”).
AI is set to be a “defining future technology”. But what exactly is AI, and what are the challenges and considerations in regulating it?
The scope of this article is to examine the challenges and considerations in the regulation of AI in India. We also examine the approach to the regulation of AI in other developed jurisdictions, such as the European Union and the United States. This article relies on the Approach Document to understand the systems considerations and societal considerations that arise from the implementation of AI in technology and society. The AI considered here is ‘Narrow AI’, a broad term for AI systems designed to solve specific challenges that would ordinarily require domain experts. The broader ethical implications of ‘Artificial General Intelligence’ (AGI) and ‘Artificial Super Intelligence’ (ASI) are not considered in this article. Further, the systems considerations discussed here mainly arise from decisions taken by algorithms.2
Definition of Artificial Intelligence
The Approach Document describes “Artificial Intelligence” as “a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sense, comprehend and act. Computer vision and audio processing can actively perceive the world around them by acquiring and processing images, sound, and speech. Natural language processing and inference engines can enable AI systems to analyse and understand the information collected. An AI system can also take decisions through inference engines or undertake actions in the physical world. These capabilities are augmented by the ability to learn from experience and keep adapting over time”.3
The integration of AI into technology and society gives rise to unique challenges. Further, as AI becomes more sophisticated and autonomous, concerns with respect to accountability, bias, and societal well-being may arise.
Considerations while Regulating AI
Two main considerations can be identified while implementing AI: (i) systems considerations and (ii) societal considerations.4 We further analyse the regulatory implications stemming from these considerations.
(i) Systems Considerations: Systems considerations are implications that have a direct impact on the citizens (or primary ‘affected stakeholders’) subject to the decisions of a specific AI system. These typically result from system design choices and development and deployment practices.5
Some of the systems considerations are:
(a) Potential for bias: Though automated solutions are often expected to introduce objectivity into decision-making, recent cases globally have shown that AI solutions can be ‘biased’ and ‘unfair’ to certain groups (across religion, race, caste, gender, and genetic diversity). The emergence of bias in AI solutions is attributed to several factors, arising from decisions taken at different stages of the system’s lifecycle and from the environment in which the system learns. The performance of an AI solution is largely dictated by the rules defined by its developers, and the responses it generates are limited by the data set on which it is trained. Hence, if the data set includes biased information, the responses generated will naturally reflect the same bias. While this is not intentional, it is inevitable, since no data set can be free from all forms of bias. The AI solution cannot critically examine the data set it is trained on, since it lacks comprehension and is hence incapable of eliminating the bias without some form of human intervention.
Bias is a serious threat in modern societies, and we cannot risk developing AI systems with inbuilt biases. The role of regulation in this regard would be to prohibit and penalise the development of AI with such biases. Regulation should also require developers to invest in research and development of bias detection and mitigation techniques, and to incorporate such techniques into AI systems to ensure fair and unbiased outcomes.
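Purely by way of illustration, the sketch below shows one such bias-detection technique: the ‘four-fifths’ disparate impact check commonly used in fairness auditing, expressed in Python. The loan-approval data, group labels, and the 0.8 threshold are hypothetical assumptions for this example, not a standard prescribed by any existing Indian regulation.

```python
# A minimal, illustrative sketch of one bias-detection technique: the
# "four-fifths" disparate impact check used in fairness auditing.
# The decisions, group labels, and 0.8 threshold below are hypothetical
# assumptions for this example, not a legally prescribed standard.

def selection_rate(outcomes):
    """Fraction of favourable (1) decisions among a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions_by_group):
    """Return the ratio of the lowest group selection rate to the highest,
    along with the per-group rates. A ratio below ~0.8 is commonly treated
    as a signal of potential bias warranting human review."""
    rates = {group: selection_rate(o) for group, o in decisions_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan-approval decisions (1 = approved) for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # approval rate 0.750
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],  # approval rate 0.375
}

ratio, rates = disparate_impact(decisions)
print(f"Per-group approval rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias detected: flag system for human review.")
```

In this hypothetical example, the lower approval rate for one group yields a ratio of 0.50 and the system would be flagged for human review; actual legislation would need to specify the metric, the threshold, and the review process.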
(b) Accountability for AI decisions: In the development of AI, different entities may be involved at each step of the development and deployment process. The involvement of multiple entities in complex computer systems makes it difficult to assign responsibility and provide legal recourse.
Since many individuals and entities are involved in the development of an AI system, identifying the one responsible for a particular malfunction, and consequently pursuing legal recourse for the harm caused, is a challenge. Traditional legal systems allocate responsibility for actions and their consequences to a human agent. In the absence of a single identifiable human agent, it is essential for regulation to provide a methodology for identifying or determining the responsible individual or entity. All stakeholders involved in the design, development and deployment of AI systems must be specifically responsible for their respective actions, and such obligations can be imposed through regulation.
(ii) Societal Considerations: Societal considerations are implications caused by the overall deployment of AI solutions in society, with potential repercussions extending beyond the stakeholders directly interacting with the system. Such considerations may require policy initiatives by the Government.6
One of the societal considerations is the impact on jobs. The rapid rise of AI has led to the automation of several routine job functions and has consequently led to large-scale layoffs and job losses. The use of AI in the workplace is expected to eliminate a large number of jobs in the future as well.
Regulation, through appropriate provisions in labour or employment legislation, can ensure that work functions are not arbitrarily replaced by AI. It is well understood that corporations are driven by profit, and AI may be a cost-effective option; nevertheless, the replacement of human jobs by AI can be regulated through legislation in the larger interest of society.
Indian Position
Currently, India does not have codified laws, statutory rules or regulations that specifically regulate the use of AI. Establishing a framework to regulate AI would be crucial for guiding various stakeholders in the responsible management of AI in India.
There are certain sector-specific frameworks that have been identified for the development and use of AI.7 In the finance sector, SEBI issued a circular in January 2019 to stockbrokers, depository participants, recognised stock exchanges and depositories on reporting requirements for Artificial Intelligence (AI) and Machine Learning (ML) applications and systems offered and used by them.
In the health sector, the strategy for the National Digital Health Mission (NDHM) identifies the need to create guidance and standards to ensure the reliability of AI systems in health.
Recently, on June 9, 2023, the Ministry of Electronics and Information Technology (MEITY) suggested that AI may be regulated in India like any other emerging technology, in order to protect digital users from harm. MEITY noted that the purported threat of AI replacing jobs is not imminent, because present-day systems are task-oriented, are not sufficiently sophisticated, and lack human reasoning and logic.8
Regulatory Position on AI in the European Union and the United States
The European Union: In April 2021, the European Commission proposed the first European Union (“EU”) regulatory framework for artificial intelligence9 (“AI Act”).
The AI Act defines an “artificial intelligence system (AI system)” as a “machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments”.10 This definition is broad: it would capture a wide range of automated decision-making technologies, including algorithms, machine learning tools, and logic-based tools.
This is the first comprehensive framework for regulating AI and forms part of the EU’s strategy to set worldwide standards for technology regulation. Recently, on June 14, 2023, the European Parliament adopted its negotiating position on the proposed AI Act, opening negotiations between representatives of the European Parliament, the Council of the European Union and the European Commission on the final shape of the law. The aim is to reach an agreement by the end of this year.11 The second half of 2024 is the earliest the regulation could become applicable to operators, with the standards ready and the first conformity assessments carried out.12
The AI Act aims to ensure that AI systems used in the EU market are safe and respect existing laws on fundamental rights and EU values. It proposes a risk-based approach to the use of AI in both the private and public sectors, distinguishing three categories: applications posing an unacceptable risk, which are prohibited; high-risk applications, which are subject to strict requirements; and applications not explicitly banned or listed as high-risk, which are largely left unregulated. The AI Act prohibits the use of AI in critical services where it could threaten livelihoods or encourage destructive behaviour, but allows the technology to be used in other sensitive sectors, such as health, subject to maximum safety and efficacy checks. The AI Act would apply primarily to providers of AI systems established within the EU, or in a third country, that place AI systems on the EU market or put them into service in the EU, as well as to users of AI systems located in the EU.
The United States: As per press reports, in a meeting with President Biden at the White House, seven leading artificial intelligence companies, including Google, Meta, OpenAI and Microsoft, agreed to a series of voluntary safeguards designed to help manage the societal risks of AI and the emerging technologies built on it. The measures, which include independent security testing and public reporting of capabilities, were prompted by recent warnings from experts about AI. The U.S. is thus only at the beginning of what is expected to be a long and difficult path toward creating rules to govern an industry that is advancing faster than lawmakers typically operate.
Conclusion
AI is growing at a fast pace and is rapidly being integrated into society. There is therefore a clear need to regulate AI to prevent systems and societal risks. There are several challenges in regulating AI, which can make the task seem impossible; historically, too, the law has struggled to keep pace with new technologies. However, if regulators work to understand the technology underlying AI and the systems and societal considerations it raises, comprehensive and effective legislation on AI can be created. India may also draw inspiration from the legislation in the EU in this regard. Legislation thus has a key role to play in ensuring the effective and fair implementation of AI in society and technology.
Footnotes
1. Approach Document Page 6.
2. Approach Document Page 7.
3. Approach Document Page 7.
4. Approach Document Page 8.
5. Approach Document Page 9.
6. Approach Document Page 9.
10. Art. 3 No. 1 of the AI Act.
11. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai