Editor’s Note: This is the first article in a series we’re writing about the recent Executive Order on artificial intelligence. Today’s piece covers some high-level takeaways from the EO, but please stay tuned for deeper dives, too!
On October 30, President Joe Biden signed an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” marking a significant first step toward regulating the use of AI within the United States. The Executive Order seeks to balance fostering innovation with protecting national security, consumer rights, and personal liberties. “AI is all around us,” Biden said before he signed the order. “To realize the promise of AI and avoid the risk, we need to govern this technology.”
The Executive Order is intended to directly impact the use of AI in a number of industries, including legal. Indeed, “Forensic Analysis,” which we interpret to mean all activities falling under the Electronic Discovery Reference Model (EDRM), is explicitly called out in the Executive Order. The EO stipulates that the federal government will “Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis” (emphasis added).
In a multi-part blog series, we’d like to dive into the Executive Order and its implications for e-discovery, data breach notifications, and investigations. To kick us off, we are sharing some of our key takeaways from the Executive Order.
AI Safety
The Executive Order prioritizes AI safety principles to protect Americans from the potential risks associated with advanced AI systems. Most notably, the order requires developers of powerful AI systems to share safety test results and other critical information with the US government when their models pose risks to national security, the economy, or public health and safety, helping ensure those systems are thoroughly tested and secure.
The order also emphasizes the development of robust standards and testing procedures for AI systems, particularly in critical infrastructure, to ensure their safety and reliability. This has a number of broad-reaching implications for the e-discovery industry.
The requirement for developers to share safety test results and critical information with the US government highlights the importance of government oversight in AI safety. In the context of e-discovery, this means that government agencies may become involved in legal proceedings where AI systems are a subject of investigation. ESI related to AI safety, shared with the government as mandated by the Executive Order, may need to be produced as evidence during legal discovery processes.
The emphasis on AI safety and reliability in the EO underscores the potential legal liability of organizations that develop and deploy AI systems, particularly if these systems are involved in incidents or accidents. As a result, e-discovery processes may need to consider data related to AI incident reports, safety tests, and any information relevant to legal actions, such as lawsuits or regulatory investigations stemming from AI-related issues.
The Executive Order impacts the left side of the EDRM as well. Good e-discovery starts with good preservation processes and policies, and organizations should start thinking about how to incorporate AI into their existing governance frameworks. The development of robust standards and testing procedures for AI systems means that organizations will need to preserve and collect data related to AI systems’ design, testing, and safety measures. This includes data on AI models, testing methodologies, and safety documentation, which may become essential in potential legal cases.
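To make that a little more concrete, here is a minimal, hypothetical sketch (in Python) of what a preservation record for an AI system’s artifacts might capture. The field names and categories are our own illustration under stated assumptions, not a prescribed standard or anyone’s actual governance schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIPreservationRecord:
    """Hypothetical checklist of AI-related artifacts an organization
    might preserve under its information governance framework."""
    system_name: str
    model_version: str
    preserved_on: date
    design_documents: list[str] = field(default_factory=list)      # architecture specs, model cards
    testing_artifacts: list[str] = field(default_factory=list)     # test plans, evaluation results
    safety_documentation: list[str] = field(default_factory=list)  # red-team reports, incident logs

# Illustrative usage with made-up file names:
record = AIPreservationRecord(
    system_name="contract-review-classifier",
    model_version="2024.1",
    preserved_on=date.today(),
    design_documents=["model_card.pdf", "architecture_overview.docx"],
    testing_artifacts=["bias_eval_results.csv"],
    safety_documentation=["safety_review_q3.pdf"],
)
```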
Data Privacy
There are a number of key provisions regarding data privacy in the Executive Order. Two stand out.
The Biden Administration called on Congress to pass comprehensive federal data privacy legislation. In the United States today, more than 55 state and federal laws regulate the notification process following a data breach, and each has different thresholds for notification and defines personal data differently.
Though many of these laws overlap, the patchwork of differing regulations makes notification in a globalized world onerous and expensive. For many years, privacy advocates have been pushing for unified, comprehensive regulation. According to Nicol Turner Lee, a senior fellow in governance studies at the Brookings Institution and director of its Center for Technology Innovation, while the likelihood of getting a bipartisan bill in place right now is low, “…it sends a clear message that in order for us to have effective, resilient AI networks, we must have a data privacy standard in place that offers guidance … on what people can collect about us, whether that’s personally attributable information or biometric information. It’s really important. These systems work off of our personal data … I can’t tell people that enough. And without privacy legislation, it’s still a Wild West.”
If comprehensive, nationwide legislation is passed, it should drastically simplify the data breach notification process, and thus reduce workloads for cybersecurity and e-discovery professionals tackling this workflow today.
Another key provision focuses on ensuring that personal data is protected so that it cannot be used to train AI models or be exposed in their output. Depending on what enforcement looks like, this may pose an existential threat to the rapid development of AI products and, ultimately, to defensibility within e-discovery.
A key to defensibility is “repeatability,” which comes into question if data found to violate data privacy laws has to be removed from a model (ultimately changing its output) or if a model has to be taken off the market entirely. The industry will need to pay close attention to how the DOJ and other agencies address these concerns to ensure the defensibility of the products it adopts. Consumers should look for technology providers that have already established responsible AI principles and are proactively addressing these issues in their AI development, to de-risk their AI usage.
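One simple way to support that kind of repeatability, sketched below in Python under our own assumptions (the file paths and field names are illustrative), is to record a cryptographic fingerprint of the exact model artifact used for each review run, so that if a model is later retrained or withdrawn, there is at least a record of which version produced which results.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint_model(model_path: str) -> str:
    """Return a SHA-256 hash of the model file, read in chunks."""
    digest = hashlib.sha256()
    with open(model_path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_review_run(model_path: str, matter_id: str, log_path: str = "model_runs.jsonl") -> None:
    """Append a record tying a matter's review run to a specific model version."""
    record = {
        "matter_id": matter_id,
        "model_sha256": fingerprint_model(model_path),
        "run_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage:
# log_review_run("models/responsive_classifier.bin", matter_id="2024-INV-017")
```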
Evidence Handling and Investigations
For the e-discovery industry, arguably one of the most impactful components of the order is the commitment by the Administration that “The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”
AI-generated content can be abused in many ways. From misinformation campaigns that could sway elections, to fabricated news reports designed to incite violence, to more sophisticated phishing and cyber threat campaigns, to fabricated evidence in an investigation or litigation, AI-generated “deep fake” content poses a significant threat. In response, the Executive Order emphasizes the need to protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The use of digital watermarking is proposed to clearly label AI-generated content, helping distinguish it from genuine content.
Watermarking is expected to significantly impact the e-discovery field; legal technology providers will likely need to develop tools to extract, process, and analyze watermark information, ensuring the integrity of investigations in the presence of AI-generated data. Forensic professionals will need to develop processes for authenticating and verifying that a watermark is valid. Additionally, there is a growing need to identify AI-generated content within evidence, driving the legal tech industry to enhance AI detection tools to maintain the precision and accuracy of e-discovery practices.
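As a very rough illustration of what a first-pass triage step might look like, the Python sketch below scans a file’s raw bytes for common provenance-metadata markers (for example, the “c2pa” and “jumb” labels used by Content Credentials manifests). This is only a naive heuristic we have invented for illustration; real authentication would require parsing and cryptographically validating the embedded manifest or watermark with the appropriate tooling.

```python
# Naive triage sketch: flag files that appear to carry embedded provenance
# metadata. The marker list and approach are illustrative only; absence of a
# marker does not mean content is human-generated, and presence does not
# prove the manifest is valid.
PROVENANCE_MARKERS = [b"c2pa", b"jumb"]

def may_contain_provenance_metadata(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data.lower() for marker in PROVENANCE_MARKERS)

# Hypothetical usage:
# for doc in collected_files:
#     if may_contain_provenance_metadata(doc):
#         print(f"{doc}: route for content-credential validation")
```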
Soon, we can expect law firms to take on more and more cases related to AI. Though firms are already fielding these types of cases, AI investigations are still in their infancy. The Executive Order makes a commitment to “Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI,” signaling that we can expect significant growth in this sector.
Issues at play may include data provenance in AI (determining what data was used in a model’s development, as in copyright infringement matters), whether an asset was generated by AI, how a model was used, how it was trained, and what harms may have resulted from it.
The Federal Landscape is Evolving
The order addresses the responsible deployment of AI within the federal government, which will influence how agencies use AI in their operations. It emphasizes the importance of guidelines, standards, and procurement processes for AI, which will affect how e-discovery practitioners interact with government data and documents, as well as impact certifications and procurement.
Indeed, the Federal Risk and Authorization Management Program (FedRAMP), “a government-wide program that promotes the adoption of secure cloud services across the federal government by providing a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services,” has already released a statement clarifying its role in the Executive Order. FedRAMP also administers FedRAMP certification, a critical certification for technology companies looking to work within the government’s cloud infrastructure.
The program shared: “In collaboration with stakeholders from the commercial and federal space, FedRAMP is conducting an analysis to determine the impact to security controls with the introduction of AI systems into a FedRAMP authorized system boundary. We will also spearhead collaboration with the FedRAMP Board, the Office of Management & Budget, the National Institute of Standards and Technology, and the Federal Secure Cloud Advisory Committee to create and gain consensus on the authorization strategy for emerging technologies.”
In the next installments of this series, we’ll take deeper dives into each of these issues and what they mean for our community. We’ll also give special treatment to a topic particularly near and dear to our hearts: fairness in AI, how the EO addresses it, and how it’s integrated into our own development approach.
Stay tuned!
Brittany Roush is a senior product manager at Relativity, working on features like search and visualizations. She has been at Relativity since 2021. Prior to Relativity, she spent over a decade conducting investigations, managing e-discovery projects, collecting data, and leading a data breach notification practice. She has a passion for building better tools for investigators and PMs to make their lives and cases easier (at least partly because her friends in the industry would hunt her down if she didn’t).