Editor’s Note: This is the second article in a series we’re writing about the recent Executive Order on artificial intelligence. Today’s piece takes a closer look at the issue of fairness in AI—a topic that’s near and dear to our hearts.
In the pursuit of a more just and equitable future, a significant step has been taken on the global stage of technology and governance. The recent “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” underscores the critical juncture we face as a globalized community: as we strive to harness the wonder and power of AI, it is imperative to safeguard historically disadvantaged groups and other vulnerable populations.
In this regard, the Executive Order serves as a testament to the dedicated efforts of civil rights activists and AI researchers alike, and their shared vision for fairness, transparency, and ethical responsibility within the continuously evolving landscape of AI.
The e-discovery industry has been tasked with a critical role in the fight for fairness. Indeed, the Executive Order stipulates that the US government will “ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis” (emphasis added).
All of this being said, there are still many unknowns about what “fairness” even means, so let’s start there for today.
What Does Fairness Mean?
In the context of AI, fairness refers to the ethical and unbiased treatment of individuals or groups across various demographic categories, such as race, gender, age, socioeconomic status, and more. It involves ensuring that AI systems do not exhibit discriminatory behaviors, unintentionally or otherwise, and that they provide equitable opportunities, benefits, and outcomes for all users.
Achieving fairness in AI is a complex goal that involves addressing several key considerations:
- Bias Mitigation: Striving to eliminate biases present in the training data or algorithms to prevent discriminatory outcomes. This involves identifying and rectifying any unfair treatment or disparities in how the AI system responds to different groups (a brief sketch of one way such disparities can be measured follows this list).
- Transparency: Creating transparency in AI systems so that their decision-making processes are understandable and interpretable. Users should have insight into how decisions are made, allowing for accountability and trust.
- Representation: Ensuring diverse representation in both the development teams creating AI systems and the data used to train these systems. A lack of diversity can lead to the perpetuation of biases and unfair outcomes.
- Equitable Access: Making AI systems accessible and beneficial to all individuals, irrespective of their background or characteristics. This includes addressing issues related to digital divides and trying to prevent AI from exacerbating existing social inequalities.
- Continuous Monitoring and Improvement: Regularly assessing and improving AI systems to identify and rectify any emerging fairness issues. This ongoing process is crucial to adapt to evolving challenges and changing societal norms.
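To make the bias mitigation point a little more concrete, here is a minimal sketch of one way a team might check for one kind of disparity: comparing a model's positive-prediction rates across groups, sometimes called a demographic parity gap. The data and the interpretation threshold here are entirely hypothetical, and real bias assessments use several metrics and far more context.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates), where gap is the largest difference
    in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs (e.g., "flag for review")
    groups: iterable of group labels (e.g., a demographic attribute)
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical example: two groups, same number of documents, different outcomes.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)                 # {'A': 0.6, 'B': 0.2}
print(f"gap = {gap:.2f}")    # 0.40 -- a large gap is a signal to investigate, not proof of bias
```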
Striving for fairness in AI is not just a technical challenge but also a socio-ethical one, requiring collaboration between technologists, ethicists, policymakers, and various stakeholders to create systems that prioritize equity and avoid reinforcing existing societal biases.
Responsible AI Will No Longer Be “Optional”
Over the last few years, the discussion around responsible AI has been largely driven by “shoulds.” We (the global AI community) should be concerned about bias. We (the global AI community) should strive for fairness. We (the global AI community) should be held accountable for model outcomes. With numerous provisions for fair and responsible AI, particularly in the areas of civil justice, law enforcement, labor, healthcare, innovation, competition, and transportation, the Executive Order—especially in the context of the EU Artificial Intelligence Act and anticipated UK, Japanese, and Australian regulation—sends a clear message that we’ve moved on from “should” and are now living in a world of “must.”
In a world of “must,” it’s important to stress that in e-discovery, responsible AI principles are not just for e-discovery technology companies. It’s just as important that law firms, corporations, and service providers arm themselves against the misuse of AI. Given that there are no official standards for AI today, organizations that take a principled approach to AI are more likely to future-proof their businesses so they can pass muster if scrutinized by a government agency or a judge, particularly if they’re using AI right now. (Note: As government agencies are within the purview of the EO, we are intentionally omitting them from this discussion.)
There are two complementary pathways that organizations should consider pursuing in parallel: establishing responsible AI principles and introducing an AI risk framework, such as the AI Risk Management Framework published by the National Institute of Standards and Technology (NIST).
Responsible AI Principles for All Organizations
Responsible AI principles are a set of guidelines and ethical considerations designed to ensure that AI systems are developed, deployed, and used in a manner that aligns with human values; promotes fairness, transparency, and accountability; and minimizes potential harm. The specific principles may vary slightly among different organizations and frameworks, depending on the industry and type of work they do.
In short, responsible AI principles act as a guidepost to build internal policies and procedures. There are many advantages to adopting these principles, but we’ve outlined a non-exhaustive list of benefits for e-discovery organizations.
- Ethical Compliance: Developing responsible AI principles aligns with ethical considerations in the legal profession. Legal practitioners have a duty to uphold the principles of justice, fairness, and equity. Establishing guidelines for responsible AI ensures that the use of technology in legal services aligns with these ethical standards. For service providers and corporations, these principles can guide their acquisition processes to reduce the potential for law firm customers and internal legal teams to use AI that doesn’t align with responsible AI principles.
- Client Trust and Confidence: Clients place a significant amount of trust in their legal representatives and their vendors. Demonstrating a commitment to responsible AI practices helps build and maintain that trust. Clients are likely to be more comfortable with AI knowing that the technologies used in their legal matters adhere to ethical standards and prioritize fairness. This is likely to become a competitive advantage, leading into our next point.
- Competitive Advantage: As responsible AI practices become more critical in various industries, including e-discovery, organizations that embrace and communicate their commitment to responsible AI are likely to gain a competitive advantage. Given the presumed care and consideration that goes into building models within a responsible AI framework, clients will have greater confidence that productions resulting from the use of AI in review are defensible, reducing the risk of sanctions. For organizations that work in the pro bono sphere, a commitment to avoiding bias and discrimination is a selling point.
- Risk Management: We’ve discussed the risks that AI poses at length, and establishing responsible AI principles is a first step to identify, assess, and manage those risks effectively.
- Long-Term Viability: The responsible use of AI is likely to become an increasingly important aspect of corporate responsibility and sustainability. Establishing principles now positions law firms and other e-discovery teams for long-term viability by ensuring they are prepared to meet evolving ethical and regulatory standards.
AI Risk Management Frameworks
Responsible AI principles and AI risk management frameworks are related concepts, but they serve distinct purposes and address different aspects of the development, deployment, and use of AI systems. Where responsible AI principles guide the overall approach to AI development with a focus on ensuring that AI benefits society and minimizes harm, AI risk management frameworks are a specific and detailed set of procedures and practices designed to identify, assess, mitigate, and manage risks associated with AI systems. The focus is on addressing potential negative impacts, vulnerabilities, and uncertainties in the development and deployment of AI technologies.
Implementing a framework and building corresponding procedures and policies are essential to ensuring an organization stays accountable to its responsible AI principles. After all, actions speak louder than words.
There are several different frameworks that exist to help guide the responsible development of AI, but the EO specifically references the NIST AI Risk Management Framework published at the beginning of 2023. This comprehensive framework has been adopted globally since its publication, in both the public and private sectors. NIST is set to play a major role in the future of AI as the designee for identifying and establishing standards to achieve the directives outlined in this Executive Order.
Over 2,300 AI professionals from the private and public sectors joined a working group to develop the NIST AI RMF, producing a comprehensive but surprisingly short (only 48 pages!) set of guidelines. The Framework is divided into two parts. Part 1 frames AI risks and the intended audience, emphasizing the characteristics of trustworthy AI. Part 2, the “Core,” introduces four functions for addressing AI risks: “govern,” which applies across organizational processes, and “map,” “measure,” and “manage,” which apply to specific AI contexts and lifecycle stages.
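As a rough illustration (and not anything prescribed by NIST), here is a hypothetical sketch of how the Core’s four functions might shape a lightweight internal risk register entry for an AI-driven review feature. Every field name and value below is invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified risk-register entry loosely organized around the
# NIST AI RMF Core functions. Field names are illustrative, not from the RMF.
@dataclass
class AIRiskEntry:
    system: str                              # the AI feature or model under review
    use_context: str                         # MAP: where and how the system is used
    affected_parties: list = field(default_factory=list)   # MAP: who could be impacted
    metrics: dict = field(default_factory=dict)             # MEASURE: what is tracked
    mitigations: list = field(default_factory=list)         # MANAGE: how risk is reduced
    owner: str = ""                          # GOVERN: accountability lives with a named role

entry = AIRiskEntry(
    system="document relevance classifier",
    use_context="first-pass review in litigation matters",
    affected_parties=["custodians", "opposing parties", "review teams"],
    metrics={"recall_on_validation_sample": 0.85, "demographic_parity_gap": None},
    mitigations=["human review of low-confidence documents", "quarterly re-validation"],
    owner="Head of Review Operations",
)
print(entry.system, "->", entry.owner)
```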
It is not unreasonable to assume that the NIST AI Risk Management Framework will become the de facto framework for entities within the United States, particularly if its next iteration takes the EO into consideration, with specific guidelines for adhering to different agency requirements. If they haven’t already, organizations that sell AI-based solutions are likely to begin fielding AI risk assessments based on the Framework from prospective buyers seeking to understand how AI is used in their products.
Many of the benefits to e-discovery providers for adopting an AI risk management framework are the same as adopting responsible AI principles, so I’ll spare my beleaguered editor (Hi, Sam!) from an even higher word count. But there are some advantages that are specific to adopting a framework:
- Consistent and Defensible Practices: Following a structured risk management framework provides organizations with documented, defensible processes. This is crucial in e-discovery, where legal processes and findings may be subject to high-stakes scrutiny or challenge.
- Adaptability to Evolving Technologies: The e-discovery landscape is dynamic, with new technologies continually emerging. An AI risk management framework helps organizations stay adaptable to technological advancements, ensuring that AI tools are used responsibly and effectively in the ever-changing legal tech environment.
- Improved Data Quality and Accuracy: AI systems play a crucial role in automating document review and analysis in e-discovery—and they have for some time, long before the release of Relativity aiR and other generative AI solutions. A risk management framework helps ensure the accuracy and reliability of AI-driven processes, enhancing the quality of information used in legal proceedings (a minimal example of what such measurement might look like follows this list).
- Cost Savings: By identifying and mitigating risks early in the e-discovery process, organizations can avoid costly legal disputes, errors, or rework. This contributes to overall cost savings in legal operations.
- Efficient and Streamlined Processes: An AI risk management framework helps organizations optimize e-discovery workflows by providing a systematic approach to identify, assess, and manage risks. This leads to more efficient and streamlined processes in handling electronic evidence.
- Increased Trust and Confidence: Clients and legal stakeholders place a high value on the reliability and integrity of e-discovery processes. Implementing a risk management framework demonstrates a commitment to responsible and ethical AI use, fostering trust and confidence among clients and partners.
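To give a sense of what documented, defensible measurement can look like in practice, below is a minimal sketch of estimating recall from a hand-reviewed validation sample, with a simple normal-approximation confidence interval. The numbers are invented, and real validation protocols (elusion testing, stratified sampling, richness estimation) are considerably more involved.

```python
import math

def recall_estimate(found_relevant: int, missed_relevant: int, z: float = 1.96):
    """Point estimate and approximate 95% CI for recall from a validation sample.

    found_relevant:  relevant documents in the sample that the model retrieved
    missed_relevant: relevant documents in the sample that the model missed
    """
    n = found_relevant + missed_relevant
    p = found_relevant / n
    half_width = z * math.sqrt(p * (1 - p) / n)  # normal approximation
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical validation sample: 170 of 200 sampled relevant documents were retrieved.
point, low, high = recall_estimate(found_relevant=170, missed_relevant=30)
print(f"recall = {point:.2f} (95% CI roughly {low:.2f} to {high:.2f})")
```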
Enforcement Concerns
For all the Executive Order’s focus on fairness and responsibility, enforcement of responsible AI remains a significant open question. As Dr. Nicol Turner Lee points out, the EO doesn’t define what constitutes a “violation,” who is responsible for determining whether enforcement is needed, or even what enforcement means. The EO also fails to define what recourse is available for individuals harmed by AI.
Dr. Turner Lee further notes: “The U.S. Department of Justice and other federal agencies have an obligation to extend their authority into this space as per the EO. Yet, it is still unclear how and when this will happen—all while critical systems wreak havoc on the participation of vulnerable populations and their communities.”
Experts are also concerned that standards called out as voluntary or only to be enforced within the government—such as the use of digital watermarking—lack “teeth.” While this concern is valid and action by Congress to formalize regulations around the use of AI is sorely needed, the e-discovery industry will play a unique role in driving adoption of standards such as digital watermarking. All it takes is for the DOJ, SEC, FTC, or any other investigative agency to require these standards in productions, and the private sector will be forced to adopt them. That’s especially true if companies such as Microsoft, Google, and other LLM creators want their products to be used.
On the flip side, there is one notable enforcement measure inherent in the EO: the President invoked the Defense Production Act—the same act used to develop COVID-19 vaccines during the height of the pandemic—to make companies prove that their most powerful AI systems are safe before releasing them onto the market. Per President Biden in his remarks about the Executive Order, “That means companies must tell the government about the large-scale AI systems they’re developing and share rigorous independent test results to prove they pose no national security or safety risk to the American people.”
While ultimately dependent on the implementation details, the proposed safety testing could prove to be a significant step forward in ensuring that key tenets of responsible AI principles, such as bias mitigation, security, and trust, are upheld across organizations.
Do the Right Thing (Not Just the 1989 Classic Film by Spike Lee)
Later in his remarks, President Biden said, “Look, privacy is not the only right at risk. Without the right safeguards in place, AI can lead to discrimination, bias, and other abuse.”
For all the business reasons why organizations should adopt responsible AI principles and risk management frameworks, I would be remiss if I didn’t encourage adoption for one simple reason: it’s the right thing to do.
When ChatGPT was released, as a global society, we found ourselves on the edge of a precipice: the choices we would (and will) make in the next two to three years could have major lasting consequences on virtually every aspect of our lives. We already had proof that the criminal justice system could be negatively impacted by AI (some examples: COMPAS, issues with facial detection, the Dutch childcare benefits scandal, and issues with predictive policing and AI), and with the rush to adopt LLMs, the potential for harm was exponentially expanded.
The e-discovery industry sits at the intersection of civil and criminal law, technology, and the concept of fairness. Our work products are often what decide case outcomes. We have a great responsibility to safeguard fairness for the plaintiffs and defendants we ultimately serve.
Building safeguards for fairness within the criminal justice system is no small feat. At Relativity, building these safeguards takes the cooperation and collaboration of all our employees, our partners, and our industry at large. We all come together under a common goal. It requires not just technical knowledge, but an immense amount of openness and even vulnerability. For many involved, building responsible AI means coming face-to-face with some horrible truths about how marginalized communities have been treated historically—particularly within the criminal justice system—which can challenge worldviews and force tough business decisions about how to proceed with a technology or a solution.
It is not easy, or simple, but doing the right thing has never been easy or simple. Ultimately though, it’s imperative that we embrace this challenge because, as President Biden pointed out, fundamental human rights are the table stakes.
“Always do right. This will gratify some people and astonish the rest.” – Mark Twain
Brittany Roush is a senior product manager at Relativity, working on features like search and visualizations. She has been at Relativity since 2021. Prior to Relativity, she spent over a decade conducting investigations, managing e-discovery projects, collecting data, and leading a data breach notification practice. She has a passion for building better tools for investigators and PMs to make their lives and cases easier (at least partly because her friends in the industry would hunt her down if she didn’t).