Hype aside, there are risks with AI—and they aren’t all limited to individual cases. Issues around data privacy, evolving regulations, and deepfakes are real and require careful attention. How can firms navigate these waters mindfully, adhere to grounding principles in their pursuit of innovation, and reassure clients that they’re on the proper path?
It certainly helps to find software providers with AI philosophies grounded in a commitment to building fit-for-purpose, secure, and intentional tech for legal teams. But in the hands of those with access to extraordinarily sensitive data, the sound implementation of artificial intelligence is as important as its development.
Our AI Visionaries have many insights to share on the responsible use of AI in the legal industry. Today, we’re sharing thoughts from three members of our 2024 cohort:
- Gail Gottehrer, Vice President, Global Litigation, Labor & Employment, and Government Relations, Del Monte Fresh Produce
- Jonathan Prideaux, Head of Applied Legal Technology, King & Wood Mallesons
- Lenora Gray, Data Scientist, Redgrave Data
Keep reading to learn from these standout leaders in the legal space.
Approaching the AI Bandwagon with Temperance
The need to get on board with AI in one form or another is becoming more acute, as firms and organizations across sectors begin to leverage this technology—and outpace any competitors who lag behind the times.
That said, the practice of law and handling of clients’ most sensitive data warrants a more thoughtful approach than simply jumping on a runaway bandwagon.
“As a big fan of technology, I’m concerned about the speed at which generative AI has been adopted and the potential backlash. We’re in the very early days of the evolution of generative AI and don’t yet know exactly how it works, why it hallucinates, and what data privacy and cybersecurity risks are created by different generative AI offerings,” Gail noted. “Before using generative AI, companies, especially those in regulated industries, need to take the time to understand how it works and whether the benefits to the business outweigh the risks of using a particular generative AI product. Generative AI has tremendous potential, but guardrails need to be put in place before companies can feel comfortable using it.”
Lenora agreed: “The increased accessibility of generative AI could indeed herald an era of more imaginative solution-making. However, I am concerned about potential misuse, especially in areas like deepfakes, misinformation, and intellectual property violations. There’s a significant need for rigorous data curation and bias mitigation to ensure that the outputs of generative AI are fair, ethical, and reliable.”
This requires stakeholders to carefully evaluate their many technology options, kicking the proverbial tires to better understand each tool’s reliability, how its models were trained, whether its developers engineered it to avoid bias and mitigate hallucinations, what purpose it was originally designed to serve, and what best practices its creators recommend for the everyday workflows users will ultimately leverage in the field.
This is essential for corporate entities that aim to bring the technology in-house for their own use. But it’s equally important for law firms and service providers looking to onboard AI on behalf of their clients.
“It is incumbent on legal technology professionals to explain exactly what AI is being used for, what the checks and balances in place are, and what the limitations are,” Jonathan added. These teams need to ensure the security of their clients’ ESI, in addition to leveraging technology in the pursuit of better understanding that data: “We have a duty to ensure that a client’s data is only used for the purpose for which it has been provided.”
Particularly with generative AI, which has received so much interest and iteration so quickly after its emergence in the field, Jonathan explained that there must be “a period of determining the most effective and appropriate applications that can be implemented whilst managing all of the potential risks and privacy concerns. Generative AI is different from previous iterations of AI in its ability to do creative tasks, so care needs to be taken when relying on that created content.”
Lenora advised even legal professionals who don’t consider themselves “technologists” to embrace this process.
“Even when they may not have the technical expertise to build AI systems, AI users can hold software and service providers accountable through informed oversight and critical questioning,” she said. Lenora advised legal practitioners to ask software providers and hosting partners questions like:
- What data is the AI using?
- How is this data sourced and processed?
- How does the AI system comply with relevant laws and regulations?
- Are there any limitations or uncertainties in the AI’s performance that we should be aware of?
- What training and support do you provide for users who are not technically savvy?
Keeping these suggestions in mind, Jonathan explained that caution is useful, but fear is not.
“AI exists and cannot be un-invented. Look at it in the same way as any other new tech over the last 30 years,” he said. “Imagine if you had not adopted email or mobile devices. It is okay to be cautious, but you can’t ignore it.”
The Importance of Diverse Perspectives when Evaluating AI
“There are a lot of different forms of AI, and it’s critical to determine which ones work for you and your business,” Gail advised our readers. “Educate yourself about AI by attending CLE programs, meeting with vendors, and asking questions. Create a task force or working group comprised of AI enthusiasts, AI skeptics, and employees who don’t have an opinion about AI to learn about the technology, monitor developments, and advise your organization.”
She’s not alone in that suggestion. Many of our AI Visionaries emphasize the importance of not putting all of one’s eggs in a single basket. That means gathering input from a wide range of stakeholders when choosing among AI tools, adapting workflows as the needs of each project and client demand, and frequently reevaluating your goals and how well your tech strategy is meeting them.
“AI in e-discovery is primarily used to augment human capabilities, not replace them. Human judgment, interpretation, and decision-making, especially in nuanced legal contexts, remain crucial,” Lenora added. “Humans provide the contextual understanding, ethical considerations, and critical thinking that AI currently cannot replicate.”
Gail said she’s seen, over the course of her career, how technology can be a great asset in the realm of litigation.
“As a class action defense lawyer, representing companies with large volumes of data who were sued by plaintiffs who often had no data, I saw how discovery costs could make it prohibitively expensive for companies to litigate cases,” she told us. “The development of technology-assisted review (TAR) and other innovations has given attorneys the tools to control discovery costs and manage litigation efficiently.”
Nevertheless, the impacts of AI can be widespread in good ways and bad—so establishing a firm understanding of its strengths and weaknesses is key not just to protecting specific outcomes, but to laying a foundation of openness to innovation moving forward.
“Technology is an area filled with new ideas and endless possibilities. That’s part of what makes it so interesting,” Gail said. Her advice to attorneys who want to embrace it?
“Don’t be deterred by the fact that there’s no clear answer about how regulators may view an emerging technology or that there’s no case law directly on point. See that as an opportunity to lead the way,” she said. Leaning on your team, and on every opportunity to educate yourself about AI, can help make that aspiration a reality.
Protect the Human Element, Even While Using AI
Jonathan encouraged lawyers—new and experienced—to lean on the skills they’re already building when evaluating AI and how to embrace it now and into the future.
“Softer skills such as good communication, influencing, people management, running meetings, and client relationship management are equally as important as technical skills. Things don’t always go as expected, so learn to adapt and deal with problem situations in a positive way,” he encouraged. “The greatest lessons come from those things that go wrong. You are remembered more for how you manage a crisis than for all the projects that run smoothly.”
As with every area of legal services, clients look to their lawyers and support teams for advice on the best ways to tackle complicated, high-stakes, and sometimes confusing challenges. The best ways to leverage AI in the course of bringing each matter to a resolution should be just another element of those substantive, strategic conversations that help you stand out as a forward-thinking, effective practitioner.
“When a client engages us, they are looking for technical experts to supply a technology-forward solution. Our clients typically understand the value and efficiency that AI-powered tools can bring to their legal matters,” Lenora said. That doesn’t mean it’s always an easy process, though, to adapt to new tools and workflows. “For those who may be initially hesitant or unfamiliar with AI applications, we offer education, demonstration, and a collaborative approach to our offerings,” she continued.
All of these AI Visionaries emphasized how the human touch—curiosity, critical thinking, strategy, the prioritization of what’s right—remains the most important part of pursuing justice and discovering the truth in each matter.
“The combination of responsible AI and lawyers who have the technological competence to maximize it will enhance the practice of law,” Gail said. It’s a bright future for those practitioners who are ready and willing to embrace AI as a collaborator, not an easy button or a replacement.
Sam Bock is a member of the marketing team at Relativity, and serves as editor of The Relativity Blog.