If you’ve ever found yourself in a cold sweat about potential breach incidents, we hate to confirm that it might be for good reason. Data breaches have become common and costly, and the professionals who analyze them, who stare them in the face as a career, are urging companies to have strong cyber risk response plans.
Just to set the mood here: there have been 28 million data breach victims in the past year, and the cost of a single breach event can run upwards of $400 million, depending on the scope. With the average cost of a compromise in 2022 at $9.5 million, these numbers don’t just prove the damage; they prove no one is in the clear. Attackers are aiming for higher-value data, and more global organizations have been impacted than ever, which means data from all over the world in many different languages. It’s a mess no one is exempt from getting tangled up in.
At Relativity Fest 2022, Scott Milner, partner at Morgan, Lewis & Bockius LLP, warned, “It’s not if you’re going to be breached, it’s when you’re going to be breached. Metrics show that now.” He asked the question we’re all thinking: “How do you prepare for it?”
While a data breach may be unavoidable, having a thorough, calculated response plan is entirely in our hands. Scott, along with his fellow panelists, shared tips for how to prepare.
First Things First: Determine the Scope
“There’s no such thing as perfection,” Scott said. Instead, he recommends aiming for a “reasonable approach” focused on understanding the impact of the breach, whether any personally identifiable information (PII) was included in the compromised data, and who it impacts. Much of that information can be gathered from your client early on, so when you’re facing a breach, don’t just dive in … even if your clients are pushing you to jump in head first.
“[Getting early info] completely changes the approach to the data mining exercise. Our job as outside counsel is to take time to explain how [early understanding] has a big downstream effect,” Scott said.
“I always tell my clients, ‘If we have to, we can figure out [where to look], but it’s going to be slow and it’s going to be expensive,’” added Kelly Atherton, senior manager of cyber incident response at Norton Rose Fulbright.
The secret sauce is asking for the simple things. A list of names, document titles, department organization, expense reports—nothing is too small. It’s all an investigation, a puzzle, and you’re taking slices of information to determine which documents or individuals are likely to be impacted.
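As a purely hypothetical illustration of how those “slices of information” narrow the field, here is a minimal Python sketch that flags documents mentioning client-supplied names, departments, or report titles; every identifier in it is made up for demonstration and none of it reflects a specific product feature.

```python
# Illustrative only: cross-reference the "simple things" the client provides
# (names, department labels, report titles) against extracted document text
# to flag likely-impacted documents for deeper data mining.
def flag_likely_impacted(docs: dict[str, str], known_terms: list[str]) -> dict[str, list[str]]:
    """Return {doc_id: [matched terms]} for documents mentioning any known term."""
    flagged: dict[str, list[str]] = {}
    for doc_id, text in docs.items():
        lowered = text.lower()
        matches = [term for term in known_terms if term.lower() in lowered]
        if matches:
            flagged[doc_id] = matches
    return flagged

# Example seed terms gathered from the client early in the matter (hypothetical)
seed_terms = ["Jane Doe", "payroll", "Q3 expense report"]
```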
Next, Leverage Your Tech Toolbox in a New Way
When search time does come around and you’re ready to get your hands dirty, our panelists advised that a mixture of tools often serves best—but you may want to rethink how you’re using them.
“We want to find the smallest set of data to analyze, so we’re using traditional e-discovery tools, like structured analytics, but we don’t care about the family relationships,” Kelly explained. “We’re being strategic in how we can start to save time and money.”
She went on to offer a few practical examples:
Textual near-duplicate identification
Nudge the minimum similarity percentage lower, especially on large spreadsheets or reports. But, a word of warning from Kelly: be sure to back up that decision with sampling, whether judgmental or random.
“Sampling will help you determine if you’re at risk of leaving behind data subjects or if you’re looking at duplicative data subjects,” Kelly explained.
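Structured analytics handles this at scale inside the review platform; purely to make the mechanics concrete, here is a minimal Python sketch of the idea, a lowered minimum-similarity threshold for grouping followed by a random validation sample. The 80 percent threshold, function names, and data shapes are assumptions for illustration, not product settings.

```python
# Sketch: group near-duplicate documents at a deliberately lowered similarity
# threshold, then pull a random sample of the non-pivot members so the
# "review one per group" decision can be validated.
import difflib
import random

def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two documents (0.0 to 1.0)."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def group_near_duplicates(docs: dict[str, str], min_similarity: float = 0.80) -> list[dict]:
    """Greedily assign each document to the first group whose pivot it resembles."""
    groups: list[dict] = []
    for doc_id, text in docs.items():
        for group in groups:
            if similarity(text, group["pivot_text"]) >= min_similarity:
                group["members"].append(doc_id)
                break
        else:
            groups.append({"pivot_text": text, "members": [doc_id]})
    return groups

def validation_sample(groups: list[dict], k: int = 25, seed: int = 42) -> list[str]:
    """Random sample of non-pivot members to check for missed data subjects."""
    rng = random.Random(seed)
    candidates = [m for g in groups for m in g["members"][1:]]
    return rng.sample(candidates, min(k, len(candidates)))
```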
Language identification
Run language identification right after processing for early intel into where data subjects may be located and, therefore, what the legal requirements might be. You can also use language ID to help you determine your review needs, like whether you’ll need to source foreign-language reviewers or use machine translation.
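Again as a hedged illustration rather than a product workflow, the sketch below approximates the step with the open-source langdetect package; the document structures and example counts are hypothetical.

```python
# Hypothetical stand-in for a platform's language identification step, using
# langdetect (pip install langdetect) to build an early language profile.
from collections import Counter

from langdetect import detect  # port of Google's language-detection library

def language_profile(docs: dict[str, str]) -> Counter:
    """Count detected languages across extracted text to scope review needs early."""
    counts: Counter = Counter()
    for text in docs.values():
        try:
            counts[detect(text)] += 1  # returns codes such as 'en', 'de', 'ja'
        except Exception:  # very short or empty text can fail detection
            counts["unknown"] += 1
    return counts

# A profile like Counter({'en': 1200, 'de': 240, 'ja': 35}) suggests sourcing
# German and Japanese reviewers or budgeting for machine translation.
```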
Test, Sample, Validate, Repeat
Once you’ve used your various tools to cull down to a unique data set, it’s time to find out which data is most likely to contain the personal information. Finding the answers comes down to the sampling, validating, and documenting done at every decision point.
Why sample and validate? Kelly offered a few reasons.
First, sampling allows you to better understand the prevalence of PII in your data set. With that, you can make projections about what you can expect to find in the data set and use validation to gut check that you’re falling in the expected range. Kelly explained that her team also uses sampling to understand the density and content of documents.
“Are there large documents that could slow us down that we should prioritize? Can we find documents with individuals, names, and associated personal information? These are the considerations we calibrate search terms across.”
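To make the sampling math concrete, here is a minimal sketch, under an assumed sample size and a standard normal-approximation interval, of how a random sample yields a prevalence estimate and an expected range to validate against; nothing here reflects the panelists’ actual tooling.

```python
# Sketch: estimate PII prevalence from a random sample and compute a simple
# 95% confidence interval, so later validation can be checked against the
# expected range. Sample size and z-value are assumptions for illustration.
import math
import random

def sample_prevalence(doc_ids: list[str], has_pii, sample_size: int = 385, seed: int = 7):
    """Estimate the share of documents containing PII, with a 95% confidence interval."""
    rng = random.Random(seed)
    sample = rng.sample(doc_ids, min(sample_size, len(doc_ids)))
    hits = sum(1 for doc_id in sample if has_pii(doc_id))  # has_pii: reviewer or tool judgment
    p = hits / len(sample)
    z = 1.96  # 95% confidence
    margin = z * math.sqrt(p * (1 - p) / len(sample))
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Projection across the full population:
# expected_pii_docs = prevalence * len(doc_ids)
```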
Anya Korolyov, vice president of cyber incident response at HaystackID, explained that the same tools used to identify these pockets of PII become the very tools used to structure the review.
“Eventually you’re going to have to lay your eyes on documents. To get there, we structure the review the best way possible using all this technology. [For example,] we group textual near duplicates together. We push the most dense documents to the review team first, so that at the end of the project we don’t hit a huge pocket of data,” she explained.
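As a rough sketch of that ordering logic, and not of any specific platform feature, the snippet below keeps near-duplicate groups contiguous in the queue and releases the densest groups to reviewers first; the field names and hit counts are invented for illustration.

```python
# Sketch: order review batches so dense, high-hit documents come first, while
# keeping each near-duplicate group together. Expected (hypothetical) shape:
# group = {"members": ["doc-1", ...], "pii_hits": {"doc-1": 12, ...}}
def build_review_queue(groups: list[dict]) -> list[str]:
    """Return document IDs in review order: densest groups first, grouped together."""
    def group_density(group: dict) -> float:
        hits = group["pii_hits"]
        return sum(hits.values()) / max(len(group["members"]), 1)

    queue: list[str] = []
    for group in sorted(groups, key=group_density, reverse=True):
        members = sorted(group["members"],
                         key=lambda d: group["pii_hits"].get(d, 0),
                         reverse=True)
        queue.extend(members)  # whole group stays contiguous in the queue
    return queue
```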
Finally, Set Your Client Up for Future Success
As a parting thought, the panelists were asked: “How do you actually measure success in a case like this?”
For Scott, success has two parts: 1) meeting the deadline; and 2) being able to give the client actionable intelligence about what happened.
“After the chaos subsides, we build them heat maps of PII. It surprises me every time how little people know about their data,” Scott said. “[The intel we give them] often helps clients get a budget for a pesky information governance project that was sitting on the desk that no one wanted to pay for.”
For Susana Medeiros (information governance, privacy, and cybersecurity associate at Norton Rose Fulbright), success is the thesis of what the panelists have been sharing all along: having a documented, defensible process.
“In the cyber incident response world, we’re getting more and more pressure to do the job better and more cost effectively. Being able to say, ‘I didn’t review everything, but that doesn’t mean that I didn’t reasonably identify impacted individuals’ [is a big success].”
Maybe these insights won’t ease the data breach nightmares we’re all enduring, but they are expert-endorsed methods for handling the aftermath when it inevitably comes. And when breach incidents erupt, having a game plan is everything; what’s yours?
Celia O’Brien is a member of the marketing team at Relativity where she serves as a copywriter.