The buzz around AI can make it difficult to discern the reality of the technology (and its implementation) from the hype. Fortunately, there are some very bright minds in the legal world who are way ahead of these questions—and ready to help us all answer them.
Our chat with AI Visionary Aurélie Jacquet fills you in on everything you need to know, from what people are getting wrong about AI to why it’s so important to get it right. She’s squashing the AI myths left and right. Let’s dive into some of the juicy ones.
Myth #1: AI is Bad
Aurélie believes that AI isn’t good or bad—it just is. “We still have this conversation where ‘AI is good’ or ‘AI is bad.’ But AI is what we make of it.”
No doubt there’s risk involved in any advanced technology, which is exactly why it’s so important to understand it and stay in control. Aurélie has made a flourishing career out of AI, one that allows her to wear several hats.
In 2018, when AI was still fresh in the world, she pushed for best practices to be put in place to help organizations understand how to use the technology correctly. She wrote the proposal for Australia to develop standards on AI. That’s how she became the chair of the Australian Standards Committee, which is now participating in the development of international standards on AI. She’s also an independent consultant and a principal research consultant for CSIRO’s Data61, an expert for the OECD and the Responsible AI Institute, part of the AI National Expert Group, and on the advisory board of the UNSW AI Institute.
All these roles focus on helping organizations, locally and internationally, implement AI responsibly and understand what good governance means in the age of AI.
“AI is a great tool as long as you know how to use it,” Aurélie told us. And she’s made it her mission to help others learn how to use it well.
Myth #2: We Should Be Scared
There’s an AI “hype cycle” underway for many of us right now—one that takes us from “extreme fear of the technology to extreme interest” in it. Aurélie seeks to mitigate this kind of thinking: “There’s a lot of interest in technology, but we’re still stuck in that hype cycle. And it’s not just in one region; it’s a broader attitude.”
Aurélie advocates shifting our perspectives toward a “more nuanced vision of AI.” She suggests that AI is an ever-evolving tool we are responsible for, and for which we need to consider and update our existing risk management practices.
“We’re not throwing everything into the bin—existing practices we have remain important and relevant. We need to understand how our existing practices need to be adapted so that we can manage and scale AI. So it is not about re-inventing the wheel; it is about making sure we have upgraded the wheel appropriately,” Aurélie explained.
“Let’s take privacy as an example. For AI systems, privacy can be a challenge because data is embedded in the algorithm, and that increases privacy risks. So existing privacy assessments are still relevant, but when it comes to defining controls, there are new considerations that come in. How do you manage deletion and access requests? That’s why in compliance we need to upskill and come to better understand the technology, so we have the right controls in place.”
Myth #3: Everything Has Changed
While AI swooped in all shiny and new, some things remain the same (though Taylor Swift and Ed Sheeran wrote a song that begs to differ). As Aurélie noted above, the arrival of AI is no reason to throw everything else away—existing practices are still important and relevant.
In addition to her example concerning privacy issues, Aurélie painted us another picture of the power of traditional qualities in the practice of law: “The idea that ethics for AI is new—something completely new—is a bit of a myth. Equity, fairness; lawyers are very familiar with these concepts and are well positioned to learn and help organizations understand their responsibilities.”
She continues the compliment, “Lawyers are particularly skilled at asking the right questions, and knowing that, if an organization cannot explain how they use and stay in control of a technology that can significantly impact people adversely, they should not be using it, as they remain accountable. That concept has not changed.”
Aurélie’s advice here is to—as with any new technologies—proceed with caution, but also with a healthy dose of agency. She praises lawyers’ ability to responsibly question what’s put in front of them, paying careful attention to how existing processes need to change so that we can understand how to manage AI and scale it.
Myth #4: There Are People for That
AI is cool, but that’s not really a part of my job description. There are people for that.
Don’t fall into that thinking. You are “people.” According to Aurélie, managing AI needs to be an interdisciplinary journey. Every part of every team should not only care about AI, but actively consider how to uplift existing processes and optimize controls.
She puts it like this: “You need to understand what the technology’s good at/not so good at; what data you have; what processes the algorithms have been optimized for; and what your risk appetite is/risk management processes look like. Based on all that, and knowing the business problem you want to solve, you can evaluate—like any other technology—where it makes sense to use AI.”
What does this look like for you? Consider how AI is used as part of your work—and if you think you’re on the bench for this one, you’re doing your organization (and yourself) a great disservice.
In addition to the professional development necessities of keeping pace with AI and learning to apply it well, Aurélie focuses strongly on the obligations of meeting clients’ and communities’ needs and expectations when it comes to the use of new technologies.
“If you think about privacy, you have to consider the privacy rights of individuals as set out in the law. But compliance professionals also need to respond to the community’s expectations,” she told us. “There’s a lot of learning to be done here. The law is always a good place to start—and should be the very first place to start when we talk about responsible and ethical AI systems—but then there’s the question of how we respond to communities’ expectations.”
When Aurélie put it this way, it became clear to us that thoughtfully investigating the best uses for AI—not just in terms of efficiency and tech savviness, but in the context of going above and beyond the stated needs of clients and case teams—is what separates truly great practitioners from the rest.
Celia O’Brien is a member of the marketing team at Relativity where she serves as a copywriter.