When we first started writing about the “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” and its impact on the e-discovery industry, we left the impact from a federal government perspective for last. Things were moving rapidly, and we knew the new year would bring new developments.
This article covers those developments in the months since the EO was signed. Let’s dig in.
Reporting Requirements Come Due
In the EO, the Commerce Department was given a deadline of January 28, 2024, to devise a plan requiring companies to report specific details about advanced AI models under development to U.S. officials. The order specified that these details should cover the computing power employed, ownership of the data provided to the model, and safety testing information.
Without getting too technical, the Executive Order set a threshold for reporting based on the computing power that goes into training a large language model. While Google and OpenAI have not yet disclosed the computing power used to train their large language models, it is widely thought to be just under the current threshold. To many, these thresholds seem arbitrary and meaningless, especially as they aren’t benchmarked against real models—but Aron Ahmadia, head of applied science at Relativity, takes a slightly different view: “Another interpretation of the EO is not that the White House has set meaningless thresholds, but that they specifically intend to set the threshold at Google’s current system and their next model.”
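To make that a bit more concrete, here is a rough back-of-the-envelope sketch, not an official calculation: the EO’s reporting provisions are keyed to total training compute on the order of 10^26 operations, and a commonly used rule of thumb estimates training compute as roughly 6 × parameters × training tokens. The model size and token count below are hypothetical illustrations, not disclosed figures for any real system.

# Illustrative sketch only: the 1e26 figure reflects the operations threshold
# cited in the EO's reporting provisions; the model below is hypothetical.
REPORTING_THRESHOLD_OPS = 1e26  # total training operations that would trigger reporting

def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    # Common ~6 * N * D rule of thumb for dense transformer training compute.
    return 6 * parameters * training_tokens

def must_report(parameters: float, training_tokens: float) -> bool:
    return estimated_training_ops(parameters, training_tokens) >= REPORTING_THRESHOLD_OPS

# A hypothetical 1-trillion-parameter model trained on 10 trillion tokens:
# 6 * 1e12 * 1e13 = 6e25 operations, which lands just under the 1e26 threshold.
print(must_report(parameters=1e12, training_tokens=1e13))  # False

Under that rough arithmetic, a frontier-scale model can plausibly sit just below the reporting line, which is consistent with Ahmadia’s reading that the threshold was set with the next generation of models in mind.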
Additionally, U.S. companies will have to inform the Commerce Department when a foreign company uses their resources to train a large language model that crosses the same reporting threshold, to aid in national security efforts. These measures also give the government and the safety institutes that support policy making visibility into what experiments are being conducted within the AI community. As we’ve covered in our own AI Principles, clarity and control are key components of ensuring fairness.
These reporting requirements will likely apply to companies that use LLMs and sell products and services to the federal government, and it is not a stretch to imagine that they will have to share their reporting history during the acquisition process. E-discovery practitioners may not only have to consider their own use of LLMs, but also ensure that their third-party services comply with the reporting requirements.
It is not yet clear whether this will have downstream impacts on defensibility, especially when the opposing party is the DOJ or SEC, but it may be another tool in a practitioner’s toolbox to argue for or against the use of certain technologies in their matters.
Safety Standards
The National Institute of Standards and Technology (NIST), which resides within the Department of Commerce, is also tasked with defining testing and safety standards for AI, which it must deliver by July 26, 2024. Establishing standards that work across the spectrum of industries that leverage AI is no small feat, as models can serve many purposes, and it’s ultimately the use case that determines the likelihood of harm. As we pointed out in our white paper on sentiment analysis, a sentiment analysis model used for marketing purposes is unlikely to cause a great deal of harm, but when used in an investigative setting it becomes high risk.
Whether the various agencies involved can meet the deadlines set forth in the Executive Order with meaningful, substantial guidelines remains to be seen. The logistical challenges of delivering on everything the Executive Order demands of various agencies, within the aggressive timelines established, can’t be overstated. Not only must already resource-strapped agencies define new standards and implement sometimes net-new programs, but the scope and breadth of the Executive Order necessarily requires cross-agency collaboration to be consistent and effective across the federal government—something the federal government struggles with under the best of circumstances. Agencies such as NIST have raised budget and resource concerns publicly and will need to rely on participation from private industry to establish standards by the deadline set forth in the Executive Order. But that creates a scenario where the fox may be guarding the henhouse, as some of the initial participants are among the biggest AI companies in the country.
NIST recently established the Artificial Intelligence Safety Institute Consortium (AISIC), a group intended to “unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI).” Six hundred organizations volunteered to be part of the AISIC, with 200 chosen for the first consortium. According to the Commerce Department’s press release, “the consortium represents the largest collection of test and evaluation teams established to date and will focus on establishing the foundations for a new measurement science in AI safety. The consortium also includes state and local governments, as well as non-profits, and will work with organizations from like-minded nations that have a key role to play in developing interoperable and effective tools for safety around the world.”
For e-discovery practitioners, AI safety standards are key to defensibility—and it benefits no one for safety standards to vary across different companies, with differing levels of transparency. Not only does this do a disservice to our customers (many of whom are already grappling with the explosion of AI in the last year) and what it means for their careers, professional licensing, and so on, but it does a disservice to the named parties in a matter, or the people being investigated. Standards mean that everyone is playing by the same set of rules, which is ultimately a baseline requirement for equitable and responsible AI.
Uncertainty Looming
The Biden administration worked thoughtfully to address the regulatory and safety concerns that took the nation by storm following the release of ChatGPT and the public’s new awareness of large language models. With the upcoming election, and the United States Supreme Court signaling in 2023 that it would take up challenges to the Chevron deference, the Executive Order is cleverly designed to accomplish as much as possible under both time pressure and potential legislative upheaval. Nearly all Executive Order actions have to be completed by the end of October 2024—only days ahead of the election—and the work outlined would be minimally impacted if the Chevron deference were overturned.
Election-Proofing an EO (Kind Of)
In 2019, the Trump administration issued an AI Executive Order on “Maintaining American Leadership in Artificial Intelligence.” The primary focus of the order was to expand access to computing resources, prioritize investment in R&D, reduce barriers to using AI, set the foundation for regulation, upskill employees in AI, and protect American advantages in AI. At the time, the Executive Order was considered an incomplete but promising step toward regulating AI. But the EO fell short on specifics, with no details on how the provisions would be implemented or how progress would be tracked. Then the pandemic happened, Biden was elected, and all momentum was lost.
Trump’s previous EO is a good indication that even if Trump is elected again in 2024, the Biden administration’s EO wouldn’t be thrown out (at least not in whole). While some items would likely be overturned—such as the reporting mechanisms discussed earlier—much of what is prescribed in the EO is in line with the spirit of the Trump administration’s EO. And while a future administration could certainly question or overrule policies that come out of the work, or cut funding to specific programs, the point of the EO is to deeply embed responsible AI practices and safety standards within all aspects of the federal government and drive the public and private sectors to do the same within their businesses and organizations. Already, state and local governments are picking up the AI safety and regulation mantle, with AI legislation introduced in jurisdictions such as California, New York, and the District of Columbia.
Critics of the Biden administration’s EO have largely pointed to its aggressive timelines, its ambition, and its “trying to please everyone” approach, but in some ways the EO acts as a political masterstroke that will ensure that, even in the worst-case scenario, AI safety has a fighting chance. Even the timing of the directives means the EO can’t simply be voided, as only a few actions are due after January 2025 (none of which have a meaningful impact on the other requirements).
Future-Proofing Against the Supreme Court (Kind Of)
There is also the concern that the Supreme Court will overturn the Chevron deference and, therefore, affect the efficacy of this latest EO. For those of you who aren’t nerds (or at least haven’t sufficiently googled), let’s take a step back and talk about what the Chevron deference is and why it matters.
In the landmark 1984 case Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., 468 U.S. 837, the Supreme Court defined a deference doctrine that outlines the approach courts should take when interpreting statutes administered by federal agencies.
According to Morgan Lewis, the “Chevron deference adopts a two-prong test under which the courts must first decide whether there is ambiguity in the statute. If the statute is found to be ambiguous, the court proceeds to the second prong of the test: whether the agency’s interpretation of the statute is reasonable.” Per the Chevron deference, if a statute is ambiguous or silent on a particular issue, and the agency’s interpretation is reasonable, courts should defer to the agency’s interpretation rather than imposing their own. As an example, under the Chevron deference, courts may defer to the FDA’s interpretation of statutes governing the approval process for new drugs and the content of drug labels. If the FDA’s interpretation is reasonable, it may be given deference in cases where drug manufacturers challenge regulatory decisions.
The Chevron deference has been a subject of legal and scholarly debate, with critics arguing that it grants too much power to administrative agencies, while proponents emphasize the importance of agency expertise in implementing complex statutes. In January 2024, the United States Supreme Court heard two separate cases arising from the statutory interpretation of the Magnuson-Stevens Act by the National Marine Fisheries Service.
“The central question in both cases is whether the Supreme Court should overturn its prior 1984 ruling in Chevron v. Natural Resources Defense Council (Chevron) or, at minimum, clarify whether congressional silence in statutory language constitutes an ambiguity triggering deference to an administrative agency charged with implementing the statute in question,” Morgan Lewis clarifies. A decision will come later this year, but early signs indicate that the Supreme Court is leaning towards overruling the Chevron deference.
While overturning the Chevron deference wouldn’t have immediate effects on the work being done under the Executive Order (likely an intentional move by the Biden administration, as the Supreme Court agreed to take on the cases in May 2023), AI regulation is on track to be the next major piece of legislation undertaken by Congress.
Justice Elena Kagan asked about the potential impact on AI regulation while hearing arguments for Relentless, Inc. v. Department of Commerce. She expressed concern that overruling Chevron would force Congress to be extremely in-the-weeds on any forthcoming AI legislation, because Congress wouldn’t be able to rely on the expertise of the various agencies, which would ultimately lead to significant gaps in AI regulation.
“Congress knows that there are going to be gaps because Congress can hardly see a week in the future with respect to this subject, let alone a year or a decade in the future,” she emphasized.
“Artificial intelligence,” James Paretti of Littler Mendelson PC said, “is a perfect example of the conundrum” that is facing the courts in regard to whether Chevron should be overruled. It is “an incredibly complicated issue not conducive to broad, one-size-fits-all fiats enacted by people who are admittedly not experts in the nuances of the field … So it’s either ‘dive into the details and likely get them wrong’ or ‘leave it to the agencies whose rules may not be given much credit by the courts,’” he added.
Because the EO is focused on immediate steps that can be taken at an administrative level, rather than on interpreting gaps in legislation, it is unlikely that agencies will need to invoke the Chevron deference at any point. That will change as soon as Congress enacts AI regulations, but those are future problems for now.
Invoking the DPA
When President Biden signed the “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” he invoked the Defense Production Act (DPA), a Korean War-era law that gives the president sweeping authority to expedite and expand the supply of materials and services from U.S. industry in the interest of national security. The Executive Order authorizes the Commerce Department to set guidelines for, and collect reports from, tech companies that train and test highly capable AI models.
Historically, the DPA has been used to expedite the production of military equipment, but Biden has used it for a number of non-military purposes, from producing more infant formula to combating the pandemic. Biden’s justification for invoking the DPA is the ongoing threat to U.S. critical infrastructure from cybercriminal gangs and nation-state hackers using technology and techniques enhanced by AI. The administration has otherwise declined to comment on why it decided to use the DPA, and tech companies are already mounting a defense against the new requirements.
Not only are congressional members who oppose AI regulation now pushing to allow the DPA to lapse when it comes up for renewal in 2025, but tech lobbyists are actively working to dismantle the Executive Order’s provisions backed by the DPA. According to Politico, “Americans for Prosperity Foundation, a nonprofit founded by the Koch brothers, has filed two Freedom of Information Act requests and a lawsuit against the Commerce Department and its National Institute of Standards and Technology in a federal district court, demanding agency records on the DPA and artificial intelligence.”
NetChoice, the U.S. Chamber of Commerce, and the Software & Information Industry Association—which represents some of the largest AI companies such as Meta, Amazon, and Google—expressed several concerns about the Executive Order, including that it will stifle innovation and prevent new companies from entering the market.
The idea that regulation will stifle innovation is not a new argument, and it’s one used routinely in the AI industry in particular.
“The fact that we’re starting to see pushback on the EO is not surprising from trade groups saying it’s too broad and could impede innovation,” Tom Romanoff, director of the Technology Policy Project at the Bipartisan Policy Center, told FedScoop. “But everyone sees there’s a need for regulation to happen and both parties on [Capitol] Hill have been supportive, received it well.”
If these challenges succeed, removing the DPA’s authority would impact the speed at which some of the order’s activities can be accomplished and, in the case of the reporting requirements, may impede progress entirely. However, as with the other issues discussed, a determination that Biden can’t invoke the DPA for this purpose wouldn’t dismantle every provision within the EO.
Tying It All Together
If the United States is able to deliver on the Executive Order, AI safety will be front and center in future AI development, mitigating some of the existential threats that AI poses to humanity. For e-discovery practitioners, keeping an eye on the changing guidelines, policies, and regulations will give them a leg up in managing impactful changes to their workflows and service lines, ultimately setting them up for success.
The U.S. is not the only nation trying to reconcile governmental policies and regulations with the reality of AI. Keep an eye on The Relativity Blog for coverage of the upcoming EU AI Act, and the global struggle to align AI regulation cross-nationally.
Brittany Roush is a senior product manager at Relativity, working on features like search and visualizations. She has been at Relativity since 2021. Prior to Relativity, she spent over a decade conducting investigations, managing e-discovery projects, collecting data, and leading a data breach notification practice. She has a passion for building better tools for investigators and PMs to make their lives and cases easier (at least partly because her friends in the industry would hunt her down if she didn’t).