Since OpenAI’s unveiling of ChatGPT in November 2022, and the subsequent announcements regarding Microsoft’s updated “Bing” and Google’s “Bard”, AI-enabled natural language processing tools have been the talk of the town. They provide businesses with exciting possibilities to increase efficiencies; among other things, they can be used as a search function, to create first drafts of work-related documents, to organize large volumes of unstructured data, and to generate new code and check existing code. However, they also bring with them a swathe of new risks, uncertainty, and an evolving regulatory landscape that can be difficult to navigate.
Given the volume and significance of complex issues associated with AI tools, Linklaters is planning to release a series of client alerts on this subject in the weeks and months ahead. This initial note is intended both (a) to provide a high-level overview of some of the key things your organization should be thinking about in connection with these tools and (b) to flag one key takeaway – the importance of designing and implementing an AI policy and safeguards for your organization.
AI policies and safeguards
Although it may disagree and tell you that it’s a machine learning model rather than an “app”, ChatGPT has been widely reported to be the fastest-growing app in history, having reached 100 million monthly users in just two months following its launch. If you don’t already have employees who are using ChatGPT or a similar AI tool in connection with their day-to-day activities, it’s very likely that you will soon.
In order to effectively mitigate the risks posed by AI tools, it’s crucial to (a) implement proper structures around the use of those tools, (b) monitor that use, and (c) regularly assess and evaluate the effectiveness of those structures. We’re seeing businesses implement AI policies which divide use cases into three categories: (i) prohibited use cases, (ii) use cases which are permissible but require additional authorization and safeguards, and (iii) generally permitted use cases.
We’ve been helping clients create custom AI policies and safeguards appropriate for their businesses and organizations. Please reach out to our team to discuss how these considerations apply to your business and priorities.
In addition, we’ve set out below a number of key issues for you to consider with respect to AI tools, which may serve as a starting point for discussions regarding the design of AI policies and safeguards.
Output accuracy and human review
Although impressive, as Linklaters found in our review of the legal advice produced by ChatGPT, the tool’s answers are not always correct. The tool also tends to offer a single, seemingly definitive answer, a format that has repeatedly led to mistakes in the past (for example, in Google’s “snippets” feature), and the authoritative tone of AI-generated responses only exacerbates the problem. Additionally, because ChatGPT was trained on data with a 2021 cutoff, it has gaps in its knowledge of events that have taken place since then. The model itself warns that it may generate incorrect responses, and it’s important also to note that OpenAI’s Terms of Use (a) provide that ChatGPT is provided “As Is”, (b) disclaim all warranties, and (c) limit OpenAI’s liability to the greater of the amount paid for the service or US$100.
If AI outputs are relied upon without any human oversight or intervention, their reliability can become an issue. Depending on the use case, this could be dangerous and/or costly to your organization.
You should ensure that: (a) any and all data inputted into AI tools is accurate and complete; (b) greater protections are in place if a tool is being used for any automated decision-making; and (c) outputs are reviewed manually wherever possible. It’s important to note that human review of the output of AI tools is also crucial to mitigate several other risks described below in this note, such as risks related to bias/discrimination and intellectual property.
Data protection, confidentiality and legal privilege
It remains important to maintain the confidentiality, legal privilege, and integrity of the information your organization holds. AI tools are known to copy, retain, and further process the data entered into them in order to produce better and more tailored results the next time you use them. This leaves your organization’s information at risk of being accessed by third parties, and risks contravening data protection laws, violating contractual obligations, and/or inadvertently waiving legal privilege.
To guard against these risks, businesses are seeking to:
- Ensure that any data given to the AI tool does not violate contractual provisions in place with customers or third parties;
- Prevent and prohibit any proprietary, privileged, or confidential information from being inputted into the tool; and
- Prevent personal data (and especially sensitive data) from being inputted into AI tools. To the extent that personal data is being provided to ChatGPT or a similar AI tool, it’s important to:
  - Note that, as the providers of ChatGPT and other AI tools use data for their own independent purposes: (a) such providers will almost certainly be deemed “controllers” or “third parties” (rather than “processors” or “service providers”) under the GDPR and U.S. state privacy laws; and (b) the provision of personal information to such tools may be deemed a “sale” of personal information under certain of those U.S. state privacy laws, such as the California Privacy Rights Act (CPRA);
  - Ensure that: (a) your privacy policy is updated for this use; (b) consumers have the opportunity to provide opt-in consent and/or to opt out (or withdraw consent); and (c) with respect to withdrawn consents and opt-out and/or limitation elections in certain regulated jurisdictions (e.g., California), such personal information is not inputted into the tool; and
  - Consider, on a practical level, how you might address erasure/deletion requests and other consumer rights requests in the context of AI tools.
It’s a good idea to remind employees that, in all circumstances, your privacy policy and data protection and confidentiality obligations must be adhered to.
Beyond the data-related risks associated with your own use of AI tools, such tools also pose data protection and confidentiality risks even if you don’t use them. Among other things, we anticipate that AI tools will be used to create more sophisticated malware and phishing scams. Members of your teams responsible for the functions likely to be targeted by those attacks (e.g., Engineering and Human Resources, respectively) should be made aware of, and trained to identify and prevent, such risks.
Intellectual property
There are several core IP-related risks to be wary of, including the following related to content generated by AI:
- Content created exclusively by AI tools is not protectable under U.S. intellectual property laws. The U.S. and many other jurisdictions around the globe require some element of human contribution to create a copyright-protected work.
- Use of AI-generated content also creates a risk of inadvertently infringing third-party IP rights. Because AI tools draw on existing data and content to generate their results, those results may incorporate third-party IP to varying degrees.
You should ensure that your employees are aware of the importance of (a) manually reviewing the AI output to ensure that third-party IP rights aren’t being infringed and (b) to the extent possible, introducing human authorship elements into the content to attempt to benefit from available IP protections.
Use of AI tools also raises significant considerations specific to patentability. In the U.S., a patent application must be filed within one year following the initial disclosure of the underlying invention. The submission of information relating to an invention to an AI tool may be deemed to constitute a “disclosure” that starts the running of that one-year clock. If such submissions are not appropriately tracked, they could result in your being prevented from receiving patent protection on your invention.
Bias and discrimination
Bias and discrimination are well-known risks for any artificial intelligence system. ChatGPT and similar tools are trained on vast quantities of data scraped from the web, which they draw on to summarize their findings and generate quick, seemingly accurate responses to users’ questions. But if the data scraped from the web is itself biased, those biases will inevitably be inherited by the responses generated.
Removing this type of embedded bias is difficult, which is one reason some major markets are looking to regulate AI in a thorough and swift way. Data protection regulators have focused on artificial intelligence for many years because of its potential for discrimination, and those concerns have started to make their way into draft legislation on both sides of the Atlantic. Notably, the EU’s proposed AI Act contains express obligations to examine training, validation, and test data for possible biases and would allow the processing of special category personal data for the purpose of removing bias. Similar obligations with respect to algorithms were included in the draft federal American Data Privacy and Protection Act, which made surprising progress during the last session of Congress and is likely to reappear in the 118th Congress.
Prohibited or “high risk” use cases
Consider also whether your use of AI tools may represent a “high risk” use case. The EU’s proposed AI Act designates a list of use cases as “high risk” and subjects them to strict conformity assessments and other mandatory requirements. These include, for example, using AI to make employment-related assessments. Legislators in other major markets are following suit. For example, New York City Local Law 144 of 2021 requires employers (a) to conduct bias audits of automated employment decision tools, including those that utilize artificial intelligence and similar technologies, and (b) to provide certain notices about such tools to employees or job candidates who reside in the city.
Consumer protection and wiretapping
State and federal consumer protection laws prohibit the deceptive use of AI content for marketing purposes. Any external-facing AI output, or AI content used in interactions with consumers, carries with it the risk of claims regarding unfair or deceptive practices. To the extent any AI outputs may be used in consumer-facing content, the use of an AI tool or bot should be made transparent to consumers and should comply with applicable consumer protection laws, including California’s B.O.T. Act. Taking a chatbot as an example, many organizations design the tool to introduce itself up front (e.g., “Hi, I’m not a human, I’m a chatbot”) so that consumers are aware they aren’t dealing with a person.
In addition, there have been dozens of class action cases brought under state wiretapping laws (in particular, the California Invasion of Privacy Act) in connection with website chat functionality. Use of ChatGPT and similar AI-powered chatbot tools seems likely to attract further attention from the class action plaintiffs’ bar. Measures such as introducing the feature as a chatbot, identifying the technology provider (e.g., “Powered by OpenAI’s ChatGPT”), and including a link to the site’s privacy policy (describing the data processing associated with the chatbot) may mitigate that risk, although in certain instances the risk may not be eliminated completely, depending on the functionality of the chat technology and the practices of the chat technology provider.
Additional measures
Training
Consider what periodic training can be implemented across your organization to ensure that your employees are aware of the risks inherent in using ChatGPT and similar AI tools. Training should highlight, among other things:
- Prohibited use cases;
- “High risk” use cases which require additional approvals, risk assessments, human oversight and/or detailed documentation;
- Best practices for ensuring that the data entered into any AI tool is accurate and complete, and doesn’t include any prohibited data or data sets; and
- The relationship between (a) your organization’s policies and codes of conduct and (b) the use of AI tools.
Good housekeeping
Tactical organizational and administrative measures can go a long way toward mitigating many of the risks presented above. As part of such measures: (a) clearly and conspicuously identifying content created by AI tools is crucial, both internally (to ensure appropriate review and the necessary human authorship) and externally (to mitigate consumer protection issues); and (b) maintaining detailed records of your end-to-end processes, and assigning internal owners for those processes, can provide consistency and prepare you for potential regulatory scrutiny.
Vendor management
Please note that, in addition to ensuring that your employees are aware of (and adhere to appropriate measures with respect to) the risks, obligations, and considerations associated with AI tools, it’s important to ensure that your vendors are appropriately bound by, and comply with, obligations that align with your AI policy and safeguards. This may entail (a) including additional provisions in your contracts with vendors and (b) adding AI-focused elements to your diligence of current and potential vendors. For example, in the case of a vendor that provides recruiting and/or HR-related products or services, you may need to (i) determine whether that vendor utilizes the types of automated employment decision tools described under “Prohibited or ‘high risk’ use cases” above and (ii) if so, ensure that such tools are appropriately audited and that you provide all required notices.
In addition to helping clients with AI policies and safeguards, we’ve been discussing these and many more issues relating to AI tools with clients from a range of sectors. Please reach out to a member of our team to discuss further.