Executive Summary
- What’s new: The UK ICO’s initial analysis of the regulatory challenges of agentic AI emphasizes that these systems must comply with the existing UK GDPR framework.
- Why it matters: The report is relevant to developers and deployers of agentic AI systems, highlighting the need to address UK GDPR requirements, including automated decision-making, data minimization, accuracy, transparency, special category data and security.
- What to do next: Companies developing or deploying agentic AI systems should consider documenting their UK GDPR compliance efforts and benchmarking against the ICO’s report.
__________
Recent technological advances have enabled the emergence of agentic AI — systems that can automate a range of tasks, potentially saving time and money for organizations. In its 8 January 2026 Tech Futures report on agentic AI, the UK Information Commissioner’s Office (ICO) set out its initial analysis of the regulatory challenges presented by agentic AI and provided insights for developers and deployers of these systems looking to comply with the UK General Data Protection Regulation (UK GDPR).
The report emphasizes that these innovations must fit into the existing UK GDPR framework.
Below, we summarize the report’s main points.
1. Automated Decision-Making
Agentic AI systems are intended to take actions on behalf of users. When those actions have a significant effect on individuals, the UK GDPR’s rules on automated decision-making apply. For example, individuals will need to be informed about the existence of automated decision-making, consent may be required, and safeguards (including human intervention) must be in place.
Developers and deployers of agentic AI systems will need to carefully assess whether the actions taken by their systems constitute automated decision-making under the UK GDPR and, if so, how safeguards such as human oversight can be implemented.
2. Purpose Limitation and Data Minimization
The UK GDPR requires that companies collect and use data only for specific purposes and minimize the amount of data they collect. Agentic AI developers and deployers need to carefully define the purposes of their systems and ensure that they only process the information necessary to achieve those purposes. In practice, this means that they should:
- Map the data sources (e.g., databases, websites) that the agentic AI system can access.
- Identify which data is accessed from each source.
- Document why the data collected is necessary for the system’s purposes.
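For organizations that want to keep this mapping in a machine-readable form, the steps above could be captured in a simple data inventory. The sketch below is a minimal illustration only — the record fields and example entries are hypothetical and are not drawn from the ICO’s report:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DataSourceRecord:
    """One entry in a hypothetical data map for an agentic AI system."""
    source: str           # a database or website the agent can access
    data_accessed: list   # categories of personal data read from that source
    purpose: str          # the specific purpose the processing serves
    necessity: str        # why this data is necessary for that purpose

# Hypothetical example entry for an appointment-scheduling agent
data_map = [
    DataSourceRecord(
        source="internal CRM database",
        data_accessed=["name", "email address"],
        purpose="schedule meetings on the user's behalf",
        necessity="contact details are required to send calendar invitations",
    ),
]

# Serialize the inventory so it can be retained as compliance documentation
print(json.dumps([asdict(r) for r in data_map], indent=2))
```

Keeping the inventory in a structured format like this makes it easier to review each entry against the purpose limitation and data minimization principles as the system’s data sources change.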
3. Accuracy
The UK GDPR requires data to be accurate. Like other generative AI systems, agentic AI may produce inaccurate outputs (known as hallucinations). The ICO’s report says that this is a “key issue for organizations using agentic systems” but indicates that “how [accuracy] will be achieved is unclear.”
Given this lack of clarity and the inevitability of hallucinations, organizations should consider:
- Documenting the processes they have in place to improve the accuracy of AI systems.
- Recording the steps taken to identify and fix inaccuracies.
4. Transparency
The UK GDPR requires that individuals are informed about how their data is processed. Agentic AI systems will process large amounts of data from a wide range of sources, making it challenging for companies to explain how those systems process data. Additionally, agentic AI may be used in ways that are not immediately obvious to the individuals whose data is involved.
For example, a human resources department might use an agentic AI system to extract and screen curricula vitae from a recruitment database, but applicants may not be aware that their data is being processed in this way.
The ICO makes clear that despite these challenges, the UK GDPR’s transparency obligations remain unchanged. Companies implementing agentic AI should consider providing individuals whose data is processed with clear explanations covering, among other information:
- What personal data is being processed and why.
- From where the agentic system collects that personal data.
- What rights and controls (e.g., in relation to automated decision-making) individuals have.
5. Special Category Data
The UK GDPR imposes restrictions on the use of “special category” data such as health data, including requirements to (in most cases) obtain consent before processing. This presents a challenge for agentic AI systems, which may inadvertently encounter special category data while performing their tasks.
For example, an AI system that helps schedule appointments could unintentionally access information about medical appointments. The ICO says that this is “an established issue” for AI systems but does not give any clear answers on how agentic AI systems should be designed to address the UK GDPR’s consent requirements in this scenario.
6. Security
The UK GDPR requires that companies have appropriate data security measures in place but, as with other AI systems, agentic AI systems may introduce new cybersecurity vulnerabilities. The ICO suggests that organizations assess security risks posed by agentic AI, including by building on international standards and frameworks such as OWASP’s GenAI Security Project.
What to Do Now
The ICO’s report acknowledges that the state of the art in agentic AI is evolving rapidly. As such, the application of UK GDPR principles to agentic AI will be unclear for some time.
Given this uncertainty, companies developing or deploying agentic AI systems should consider documenting their UK GDPR compliance efforts and benchmarking that documentation against the ICO’s report, while acknowledging that there is no “right answer” to UK GDPR compliance issues.
This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.

For further information, please contact:
Nicola Kerr-Shaw, Partner, Skadden
nicola.kerr-shaw@skadden.com




