Data Governance Specialist Strongly Advises Legal and Compliance Teams to Lead the Charge Towards Responsible AI Readiness
Jon Kessler, Vice President and General Manager of Information Governance at Epiq, advises legal and compliance teams to lead the charge towards responsible AI readiness. Organizations have seen a surge of interest in AI tools that boost productivity across the organization, improve accuracy, and unlock new ways of working. But as organizations race to adopt these technologies, one truth has become clear to Jon: “without a clearly defined data security strategy, the risk of overexposing sensitive data is high.”
Prior to the widespread adoption of AI, data storage and access methods differed across companies and even functions. Through his role, Jon often saw users, and even leadership, with limited understanding of what sensitive information was accessible and by whom, relying on the broader employee population’s lack of awareness to keep sensitive documents safe. Now, AI tools can access and surface any data a user has permission to view. This can inadvertently reveal sensitive content (e.g., business strategy documents, privileged materials, or personally identifiable information (PII)) that was never meant to be widely shared.
Historically, access controls were designed for manual workflows. A user might have access to thousands of files, but without a targeted search or direct navigation, they’d never encounter most of them. AI changes that. It synthesizes and surfaces information from across the data estate, producing new content that puts sensitive details at risk. This shift demands an intentional and cautious approach to data governance, yet Jon sees many companies struggle to manage it: loose approaches to data security lead to unintended exposure of sensitive data, heightened cybersecurity risk, and an expanded attack surface.
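To make that shift concrete, here is a minimal sketch, assuming a toy document store with group-based access control lists. The data model and all names are hypothetical, not any vendor’s tooling. The point: an AI assistant acting on a user’s behalf effectively reads everything the user is entitled to, not just the files the user happens to open.

```python
# Hypothetical document store with group-based ACLs. An AI tool's effective
# scope is the user's *entire* entitlement, not their browsing habits.
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set  # groups permitted to read this document


@dataclass
class User:
    name: str
    groups: set


def permitted_docs(user, corpus):
    """Everything the user is entitled to read, i.e., the AI's reach."""
    return [d for d in corpus if user.groups & d.allowed_groups]


corpus = [
    Document("strategy-2025", "Confidential acquisition plans...", {"executives", "all-staff"}),
    Document("hr-pii", "Employee IDs and salaries...", {"hr"}),
    Document("handbook", "Office hours and holidays...", {"all-staff"}),
]

analyst = User("analyst", {"all-staff"})

# Pre-AI, the analyst would likely never stumble onto "strategy-2025";
# an AI tool running with the same permissions will happily synthesize from it.
for doc in permitted_docs(analyst, corpus):
    print(doc.doc_id)  # -> strategy-2025, handbook
```

In this toy example, the strategy document surfaces because its ACL was left open to “all-staff”; obscurity, not access control, was doing the protecting.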
Jon Kessler advises:
“Legal teams must take a hard look across their entire data landscape. This includes knowing where data is stored, who holds access permissions, and what types of information your organization classifies as ‘sensitive.’ Identifying criteria for sensitive information is a critical first step in limiting access permissions without leaving data vulnerable.
“This discovery phase is the foundation for AI implementation, not just for compliance with privacy laws but also for understanding what data must be retained for regulatory reasons, what should be defensibly deleted, and how strictly user permissions should be scoped.
“Any degree of content sharing can introduce risk, from unintended exposure of sensitive data to AI prompts and responses behaving in unexpected ways. Sharing too much information can hinder implementation efforts and increase the likelihood of errors stemming from broad data access or poor prompt management. On the other hand, over-restrictive sharing schemas will grind collaboration to a halt.
“To allow for collaboration while mitigating the associated risks, organizations should implement systematic procedures to address data overexposure, continuously monitor AI interactions, audit outputs, and improve access controls. Utilizing a ‘least permissive’ access model, where users only have access to information relevant to their responsibilities, is highly encouraged.”
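As a rough illustration of that last recommendation, the sketch below (all role and resource names are hypothetical) pairs a deny-by-default, least-permissive authorization check with an audit trail of every AI interaction, so outputs can be reviewed after the fact:

```python
# Hypothetical role-to-resource mapping: a deny-by-default check runs before
# an AI tool touches a resource, and every request is logged for later audit.
from datetime import datetime, timezone

# Each role maps only to the resources relevant to its responsibilities.
ROLE_SCOPES = {
    "paralegal": {"matter-files", "handbook"},
    "hr-manager": {"hr-records", "handbook"},
}

audit_log = []


def authorize_and_log(user, role, resource, prompt):
    """Deny by default; record every AI request so outputs can be reviewed."""
    allowed = resource in ROLE_SCOPES.get(role, set())
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "resource": resource,
        "prompt": prompt,
        "allowed": allowed,
    })
    return allowed


# The paralegal can ask the AI about matter files, but not HR records.
print(authorize_and_log("jdoe", "paralegal", "matter-files", "Summarize depositions"))  # True
print(authorize_and_log("jdoe", "paralegal", "hr-records", "List salaries"))            # False
```

The denied request still lands in the audit log, which is what lets governance teams spot users, or AI prompts, probing beyond their remit.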