Government legal and compliance teams are being asked to do more than ever—faster, more transparently, and with complete defensibility.
Litigation, FOIA, investigations, and oversight functions are all expanding in scope. But the systems supporting them have typically evolved in isolation: procured at different times, for different purposes, and under different constraints.
Recently, at the DGI EDRM Conference, I had the opportunity to speak with practitioners across government, including FOIA officers, investigators, litigators, and records managers. A consistent theme emerged: the current challenge is not rooted in the expertise inside government but in the operational infrastructure supporting it. These conversations, along with themes explored in my session, reinforced a shared reality.
When matters span multiple functions (and most do), teams are forced to duplicate effort, recreate decisions, and operate without shared visibility, largely because the systems supporting that work don’t operate together.
For years, leaders across government, including CDOs, CIOs, and CAIOs, have been working toward a different model, one also reflected in the discussions at DGI: a more connected, coordinated operating environment where missions remain distinct but the underlying infrastructure is shared.
That possibility is no longer theoretical.
As agencies explore modernization and, increasingly, the role of AI, a familiar set of assumptions continues to shape the conversation. Many are grounded in real constraints. But some are now limiting how agencies think about what is actually achievable.
Below are a few of the most common, and where they begin to shift.
Myth: Litigation, FOIA, and investigations are too different to live on one platform
At a surface level, this assumption makes sense. Each function operates under different rules, timelines, and oversight structures. In fact, in many cases, they should remain operationally distinct.
But complete separation at the system level has created its own challenges: duplicated data, inconsistent standards, and limited visibility across related matters.
The trick is to shift your paradigm away from assuming that a shared platform will force all functions into a single workflow. Instead, a single platform presents an opportunity to create a shared operating environment, one where data, governance, and prior work can be reused across missions.
While these missions are different and require strategic, focused work, the larger challenge for public sector legal teams is that these functions have been supported in isolation for too long.
Myth: Modernization increases risk
In government environments, this belief exists for a reason. Change introduces disruption, requires retraining, and often depends on complex acquisition pathways.
Maintaining the status quo can feel safer because it is known and already approved. But that assumes current systems are low risk.
Unfortunately, that just isn’t the case for every team. In practice, fragmented tools, manual processes, and inconsistent workflows can make it harder to maintain oversight or apply standards uniformly.
It’s important to note that modernization does not automatically reduce risk; poorly executed efforts to update tools and workflows can increase risk exposure. But when approached incrementally, aligned to existing compliance frameworks and focused on improving consistency and visibility, thoughtful updates can reduce risk in meaningful ways.
The question modern legal teams should be asking is whether their current systems create outsized risk relative to the functions they perform. If so, where are the most obvious opportunities to minimize those exposures?
Myth: Investigations cannot be both fast and defensible
Speed and defensibility have traditionally been treated as a tradeoff. But in many cases, that tradeoff is driven less by the nature of investigative work and more by the systems used to support it.
When workflows are fragmented, teams are forced to reconstruct decisions after the fact.
On the other hand, when workflows are structured to capture decisions, context, and changes as they occur, speed and defensibility become less of a tradeoff—and more of a design choice.
Myth: FOIA is just too manual to modernize
FOIA processes are complex for plenty of legitimate reasons: legal requirements, variability in requests, and the need for careful review.
But that complexity is often compounded by disconnected systems across intake, tracking, review, and production.
So, while modernization cannot remove FOIA’s complexity, it certainly can help address the inefficiencies created by fragmentation. When agencies improve how information flows across these steps while preserving compliance, they reduce redundant work and improve consistency.
Myth: AI is not secure enough for government work
Security concerns around AI are valid, particularly given the sensitivity of government data. However, much of the skepticism may be based on exposure to public, consumer-grade tools not designed for regulated environments.
AI in government is not a single capability. It can be deployed within accredited environments, aligned to FedRAMP and agency-specific controls, and configured with strict access boundaries.
AI, of course, does not eliminate risk—but it does shift it.
Agencies must weigh the risks of adopting AI alongside the risks of continuing to rely on manual, fragmented processes where data is inconsistently governed. Choosing the right tools is, naturally, the first layer of protection. Guarding against shadow AI is another important strategy.
Security best practices can govern how and where AI is implemented, but they don’t necessarily preclude the technology altogether.
Myth: You need to be a prompting expert to use AI effectively
The rise of generative AI and LLMs put sophisticated tools into the hands of everyday users by introducing the ability to engage with powerful models through natural language conversation rather than technical queries. But while these conversational workflows are simpler in principle, getting the most out of them still requires sound prompting practices and some hands-on knowledge of how each tool responds to different types of inputs.
Today, many AI tools still depend on user skill to produce reliable outputs. Realistically, this can create variability—and limit scalability for teams with diverse skill sets.
In government environments, that’s not sustainable. For AI to work at scale, it must be embedded within structured workflows—with guardrails, predefined context, and repeatable processes.
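As a rough, purely illustrative sketch of what that embedding can look like, consider a workflow where the prompt template and review guardrail are fixed in code rather than left to each user. Everything here (the template, function, and field names) is a hypothetical assumption, not a description of any particular product:

```python
from dataclasses import dataclass

@dataclass
class ReviewTask:
    """One AI-assisted review request."""
    document_id: str
    document_text: str

# Predefined context: the template is set by the workflow,
# not written ad hoc by each user.
PROMPT_TEMPLATE = (
    "You are assisting with a responsiveness review. "
    "Summarize the document below and flag content that may "
    "require exemption review, citing the passages you relied on.\n\n"
    "Document:\n{document_text}"
)

def run_structured_review(task: ReviewTask, model_call) -> dict:
    """Run one review through the fixed template and guardrails.

    `model_call` stands in for whatever accredited model endpoint
    an agency has approved: it takes a prompt string and returns
    a response string.
    """
    prompt = PROMPT_TEMPLATE.format(document_text=task.document_text)
    response = model_call(prompt)

    # Guardrail: outputs are recorded alongside their inputs and
    # held for human validation before they affect the matter.
    return {
        "document_id": task.document_id,
        "prompt": prompt,
        "model_output": response,
        "status": "pending_human_review",
    }
```

Because the template, context, and review step live in the workflow rather than in each user’s head, two practitioners with very different prompting skills get the same repeatable process.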
Fortunately, the technology is evolving quickly, and prompting is becoming more straightforward (and even automated) over time. But even so, good documentation, shared best practices, and intentional collaboration make a big difference in making AI workflows more accessible across teams.
You don’t have to turn every legal practitioner into an AI specialist; the key is education focused on consistent, reviewable outcomes regardless of who is using the system.
Myth: AI use will not hold up in court
Courts require decisions to be explainable, traceable, and defensible. AI may not inherently meet that bar—but it certainly can.
What matters is how it is used.
When AI is purpose-built for legal and public sector work, it can preserve audit trails, maintain clear links between inputs and outputs, and require human validation—all of which support defensible outcomes.
When it is used as a black box, it cannot.
Again, this one comes down to choosing the right AI tools for your mission. The question is not whether AI belongs in legal workflows at all, but whether its design and implementation support transparency and verification.
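To make the audit-trail point concrete, here is a minimal, hypothetical sketch of a record that links an AI output to the exact input that produced it and to the human who validated it. The helper and field names are illustrative assumptions, not any vendor’s actual schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_entry(log_path: str, actor: str, action: str,
                       model_input: str, model_output: str) -> None:
    """Append one audit record tying an AI output to its input
    and to the human decision made about it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,    # e.g., the reviewing attorney
        "action": action,  # e.g., "approved" or "rejected"
        "input_sha256": hashlib.sha256(model_input.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(model_output.encode()).hexdigest(),
    }
    # Append-only, one JSON object per line, so the trail is easy
    # to produce and hard to alter silently.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Even a record this simple preserves the chain courts care about: which input produced which output, and which human signed off.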
What This Looks Like in Practice
The AI strategy emerging among many public sector teams is not a single system replacing all others, nor a single workflow imposed across functions. It is a shared platform architecture that allows FOIA, litigation, investigations, and privacy workflows to remain distinct while operating on a common data foundation. This approach gives teams the ability to reuse prior work across matters and missions.
This is where the model begins to shift. A FOIA response is no longer an isolated effort; instead, it creates structured knowledge that other teams can reuse. That same information can inform a related investigation, a regulatory inquiry, a litigation case, or even a subsequent FOIA request.
At DGI, and in conversations across agencies, that shift was clearly underway. AI will play a role, but it is not the starting point. Infrastructure is.
We will continue unpacking these ideas at the Relativity Public Sector Forum, where public sector leaders will explore how these challenges are being addressed in practice across litigation, FOIA, investigations, and beyond.
If you are starting to question some of these assumptions in your own environment, you are not alone—and the path forward is becoming clearer.
Brian Thompson is director of practice empowerment for the public sector at Relativity.