India – SEBI’s Proposed New Amendments on Usage of AI Tools by Regulated Entities
Introduction:
The rapid development and deployment of Artificial Intelligence (“AI”) and Machine Learning (“ML”) tools by market participants over the course of the past year prompted the Securities and Exchange Board of India (“SEBI”) to issue, on November 13, 2024, a consultation paper on “Proposed amendments with respect to assigning responsibility for the use of Artificial Intelligence Tools by Market Infrastructure Institutions, Registered Intermediaries and other persons regulated by SEBI” (“Draft Amendments”), seeking public suggestions on a series of amendments to the extant regulations.
Regulatory Outlook:
In order to create an inventory of the AI/ML landscape to gain an understanding of the adoption of such technologies in the securities market, SEBI had previously specified requirements for the reporting of AI and ML applications and systems offered and used by the following entities:
- Stock Brokers and Depository Participants – through issuance of a Circular dated January 4, 2019,
- MIIs (Stock Exchanges, Depositories and Clearing Corporations) – through issuance of a Circular dated January 31, 2019, and
- Mutual Funds, Asset Management Companies, Trustee Companies, Board of Trustees of Mutual Funds – through issuance of a Circular dated May 9, 2019.
Any set of applications, software or executable systems (computer systems) offered to investors (individuals and institutions) by Regulated Entities (“REs”) to facilitate investing and trading, to disseminate investment strategies and advice, or to carry out compliance operations, where AI/ML is portrayed as part of the public product offering or is used for compliance or management purposes, was included within the ambit of the above-mentioned SEBI circulars.
REs were thus required to intimate SEBI of any such use or deployment of AI/ML technologies on a quarterly basis.
SEBI’s Concerns:
The use of AI and ML technologies, especially AI Neural Networks (“NN”) and Large Language Models (“LLM”), provides inherent benefits to REs, such as increased efficiency in operational and compliance functions, accuracy in decision making, risk management, and scalability, leading to their increased use in Quantitative, Algorithmic and High Frequency Trading.
The analytical efficiency and dynamic learning capabilities of AI and ML systems have allowed REs to increasingly delegate operations to them, including decision-making functions that affect investor outcomes. However, the rapid adoption of emerging AI and ML technologies is of concern to SEBI and other regulators for several reasons, including:
- Data set quality control risk: AI applications derive their outputs from two primary factors: user inputs and the data sets to which they have been exposed. Changing even a single factor in a data set can cause the AI to learn and process data in an entirely different manner, producing outputs that deviate from the intended use case. This risk is heightened by the vast amount of data that AI systems used by REs process, coupled with their innate ability to learn from data and apply it across different scenarios.
- Transparency: Concerns around the transparency of AI algorithms, accountability for AI-assisted decisions, explainability of AI-made decisions and ethical considerations have persisted since the advent of sophisticated AI technologies. Deep learning algorithms and other complex AI models often operate as “black boxes”, producing accurate predictions without clear explanations of their reasoning. This lack of transparency raises questions about accountability for AI-generated outcomes in legal and regulatory contexts.
- Privacy and confidentiality risk: Privacy and confidentiality risks with AI technology continue to evolve with its potential. REs must ensure that AI models are trained using data sets that either do not require consent to obtain and use (such as publicly available data), or for which consent has been adequately obtained, especially in light of the enactment of the Digital Personal Data Protection Act, 2023 (“DPDP Act”).
Proposed Amendments:
In light of the above, SEBI has proposed amendments to the Securities and Exchange Board of India (Intermediaries) Regulations, 2008, the Securities Contracts (Regulation) (Stock Exchanges and Clearing Corporations) Regulations, 2018 and the Securities and Exchange Board of India (Depositories and Participants) Regulations, 2018. The proposed amendments broadly aim to ensure the following:
- Compliance: REs shall be responsible for ensuring that their AI and ML tools are compliant with all applicable laws in force, irrespective of the method, extent or degree of adoption of such technologies or tools.
- Data Privacy: REs shall be solely responsible for ensuring the privacy, security and integrity of investor and stakeholder data. This will especially include the data maintained by REs in a fiduciary capacity.
- Operational Liability: REs shall be held responsible (and are liable to investors) for all consequences arising from the outputs of AI or ML technologies that are relied upon for any purpose.
Implications and Way Forward:
The Draft Amendments emphasize compliance with existing laws, liability for the use of AI and ML technology and its consequences, and responsibility for the privacy, security and integrity of investor and stakeholder data.
SEBI’s proposed approach reflects a uniform regulatory attitude towards the use of AI and ML technologies, prioritising investor and stakeholder welfare and transparency in the development and use of such emerging technologies.
Furthermore, as REs are likely to be held responsible for their reliance on and use of AI/ML technologies, they are advised to exercise caution in developing or procuring such technologies from third parties, and to maintain human oversight of AI/ML outputs in respect of decision-making functions.