Financial institutions globally, and in India, have invested heavily in artificial intelligence (“AI”) and machine learning (“ML”) techniques over the past decade. By some estimates, AI technologies could contribute as much as US$ 1 trillion in additional value for the global banking sector, and a World Economic Forum survey found that seventy-seven per cent of respondents (151 fintechs and financial institutions from thirty-three countries) expected AI to be of high or very high overall importance to their businesses in the near future. Tangible use-cases have consequently emerged in the financial sector, benefitting customers and investors through robo advisors, portfolio optimisation, and algorithmic trading bots. Financial institutions, for their part, have benefitted greatly from chatbots that handle consumer interactions and grievances, identity verification (including video KYC), predictive analytics to mitigate and minimise fraud, and similar tools.
All this was before ChatGPT, which reached one million user sign-ups within five days of its release and has arguably brought AI capabilities to the masses through a direct interface. Against this backdrop, familiar questions arise around the legal and regulatory considerations for financial service providers adopting AI and ML. We focus on five legal and regulatory risks in particular:
1. Bots undertake functions similar to those of regulated intermediaries, creating supervisory and regulatory challenges for regulators and deployers of bots alike
The emergence of bots that perform functions similar to those of intermediaries regulated by the Securities and Exchange Board of India (“SEBI”) and the Reserve Bank of India (“RBI”) potentially presents regulatory and supervisory challenges. Extending the extant regulatory framework to AI/ML technologies may hinder the deployment of bots and other AI/ML techniques because of compliance requirements that such systems find difficult to meet.
As an example, SEBI has in the past not excluded robo advisors from the application of the Securities and Exchange Board of India (Investment Advisers) Regulations, 2013 (“IA Regulations”). Yet, the digital-only nature of robo advisors has led to compliance challenges, such as the reported requirement that investment advisers and investors enter into a written agreement. Such requirements are difficult for AI/ML-driven solutions to satisfy, and may effectively restrict the deployment of bots that can perform such roles within India while adversely affecting the developers of such applications.
Other applications, such as algorithmic trading bots that resemble stock brokers in certain respects, may face similar challenges. Financial sector regulators in India may need to revisit regulations governing intermediaries that are not sufficiently technologically neutral or fit for purpose, with a view to ensuring that regulations are outcome-focused.
2. Compliance costs arising out of forthcoming AI regulations
The nascency of AI/ML adoption in the financial sector necessitates that any AI-focused regulation be risk-based and proportionate. Depending on the nature of the regulations that become applicable to financial service providers, AI deployers may face significant compliance costs. If the regulations set out prescriptive processes and compliance requirements, AI deployers may be required to modify their processes substantially to achieve compliance. Such regulations may also have the inadvertent effect of hindering innovation by focusing on processes rather than outcomes. Regulators may consider phased implementation of any future AI regulations, for example, by first requiring financial service providers to comply with a ‘do no harm’ standard with respect to AI and ML.
3. Substantially aligning AI with the ‘spirit’ of the law
RBI-mandated grievance redress mechanisms for its regulated entities, including under the Reserve Bank – Integrated Ombudsman Scheme, 2021, and specific frameworks applicable to banks, do not prescribe or limit the use of technology (or of AI/ML-driven tools in particular). However, financial service providers deploying such tools may need to assess whether their use compromises the providers’ obligations or adherence to the ‘spirit’ of the law, and the extent of human participation necessary to complement such tools. As the Basel Committee on Banking Supervision notes, banks are in the process of developing best practices to minimise the risks associated with their deployment of AI and ML.
The International Organization of Securities Commissions (IOSCO), the global standard-setting body for securities markets, meanwhile released guidance on the use of AI and ML by market intermediaries and asset managers in September 2021. The guidance covered, inter alia, the designation of senior management personnel responsible for overseeing the development, testing, deployment, monitoring, and controls of AI and ML, and recommended that regulators require firms to adequately test and monitor AI and ML techniques on a continuous basis. An illustrative monitoring check is sketched below.
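By way of illustration only, continuous monitoring of a deployed model could include a simple statistical drift check, such as the population stability index (“PSI”), which compares live inputs or model scores against the distribution seen at training time. The Python sketch below is a minimal example under assumed data, variable names, and rule-of-thumb thresholds; it is not a method prescribed by IOSCO or by any regulator.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population stability index between a reference sample (e.g. model
    scores at training time) and a live sample observed in production.

    Rule-of-thumb thresholds (an assumption here, not a regulatory
    standard): < 0.1 little drift, 0.1-0.25 moderate, > 0.25 significant.
    """
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)

    # Decile edges come from the reference distribution; only the inner
    # edges are kept, so out-of-range live values fall into the end bins.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))[1:-1]
    expected_pct = np.bincount(np.digitize(expected, edges), minlength=bins) / len(expected)
    actual_pct = np.bincount(np.digitize(actual, edges), minlength=bins) / len(actual)

    # Floor the proportions so empty bins do not produce log(0).
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical usage: compare last month's live credit scores against
# the scores the model was trained on (both synthetic here).
rng = np.random.default_rng(0)
train_scores = rng.normal(600, 50, 10_000)
live_scores = rng.normal(615, 55, 2_000)
print(f"PSI = {population_stability_index(train_scores, live_scores):.3f}")
```

A check of this kind, run periodically and logged, is one concrete way a firm could evidence the “continuous monitoring” that the IOSCO guidance contemplates.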
4. Mitigating potential discriminatory practices and biases
While fintechs continue to leverage several new data points to underwrite loans accurately and to assess creditworthiness beyond traditional parameters, AI tools may also lead to discriminatory outcomes or reinforce biases. Financial service providers may need to undertake periodic audits and reviews of AI-driven processes to minimise such instances; one simple audit statistic is sketched below. As the Financial Stability Board noted in a 2017 paper, AI and ML models may produce discriminatory outcomes even where discriminatory characteristics are not input directly, for instance where seemingly neutral variables act as proxies for protected attributes, exposing such models to legal challenge in several jurisdictions.
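By way of illustration, one simple audit statistic is the adverse impact ratio: the rate of favourable outcomes (e.g. loan approvals) for one group of applicants divided by the rate for a reference group. The Python sketch below uses made-up data and column names, and the US ‘four-fifths’ heuristic as an assumed review threshold; neither is an Indian legal standard.

```python
import pandas as pd

def adverse_impact_ratio(df, group_col, outcome_col, protected, reference):
    """Rate of favourable outcomes for the protected group divided by the
    rate for the reference group. A ratio well below 1.0 (e.g. under 0.8,
    the US 'four-fifths' heuristic - an assumption, not an Indian legal
    threshold) may warrant closer review of the model's outcomes.
    """
    rate_protected = df.loc[df[group_col] == protected, outcome_col].mean()
    rate_reference = df.loc[df[group_col] == reference, outcome_col].mean()
    return rate_protected / rate_reference

# Hypothetical loan decisions; the column names are illustrative only.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   0,   0,   1,   1,   1,   0,   1],
})
ratio = adverse_impact_ratio(decisions, "group", "approved", "A", "B")
print(f"Adverse impact ratio (A vs B): {ratio:.2f}")  # 0.67 in this toy data
```

A periodic audit would typically run such checks across products, customer segments, and time windows, and escalate material disparities for human review.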
5. Protecting the interests of vulnerable consumers
As an October 2022 discussion paper by the Bank of England on AI and ML notes, there may be specific obligations in relation to ‘vulnerable’ consumers; where AI systems do not sufficiently consider the needs of such classes of customers, those customers may be more susceptible to risk and harm. During marketing and customer acquisition, AI systems may also need to be trained not to target specific categories of customers for the sale of financial products and services that carry an unsuitable risk profile. This is particularly important in the Indian context, where financial literacy rates remain low.
As India, like its global counterparts, continues to identify a framework that regulates certain aspects of AI and ML while fostering innovation, various stakeholders – financial consumer groups, financial service providers, AI and ML developers, and regulators – must find common solutions to reduce the emergent risks arising out of such technologies. Financial service providers may also need to assess independently, and from time to time, the use of AI and ML in their own processes, and identify whether such use is aligned with legal and policy objectives, particularly in the Indian context. While financial service providers using AI and ML will be affected by forthcoming regulations, the adverse impact can be minimised through company-level policies that guide responsible AI development and deployment.
For further information, please contact:
Lily Vadera, Cyril Amarchand Mangaldas
lily.vadera@cyrilshroff.com