As artificial intelligence (‘AI’) continues to evolve at an unprecedented pace, a new class of systems is beginning to reshape how organisations think about automation, decision‑making, and digital transformation.
On 22 January 2026, Singapore introduced the Model AI Governance Framework for Agentic AI (‘MGF’) at the World Economic Forum in Davos, Switzerland, marking one of the first comprehensive policy responses worldwide to this emerging technology.
What is Agentic AI?
Agentic AI systems are self-managing systems that can take actions, adapt to new information, and interact with other AI agents and systems to complete tasks on behalf of humans. Unlike Generative AI and Traditional AI, Agentic AI systems typically possess some degree of independent planning and action-taking.[1]
These AI agents are already rapidly transforming the workplace through coding assistants, customer service agents, and the automation of enterprise productivity workflows. This allows organisations to automate repetitive tasks and drive sectoral transformation by freeing up employees’ time for higher-value activities.[2]
Risks of Agentic AI systems
While the emergence of Agentic AI systems reflects ongoing developments in AI that bring new capabilities and opportunities for stakeholders, organisations, and users, it also presents unique risks and challenges.
For instance, Agentic AI systems may have access to sensitive data and the ability to make changes to their environment, such as updating a customer database or making a payment.[3] Their use also introduces new risks, such as unauthorised or erroneous actions.
What does the MGF address?
As the first authoritative resource addressing the specific risks of Agentic AI in Singapore, the MGF fills a critical gap in policy guidance. It offers a structured overview of the key risks in deploying Agentic AI systems, as well as some best practices. The MGF applies to all organisations looking to deploy Agentic AI, regardless of whether they are developing AI agents in-house or using third-party agentic AI solutions.[4]
The MGF outlines the technical and non-technical measures that organisations should implement to ensure responsible AI agent deployment across four dimensions:
1. Assess and bound the risks upfront
2. Make humans meaningfully accountable
3. Implement technical controls and processes
4. Enable end-user responsibility
Key principles of the MGF
- Assess and bound the risks upfront
When planning for the use of Agentic AI, organisations should:[5]
- Know the risks by carrying out a risk assessment to determine suitable use cases for AI agent deployment. Organisations should also consider agentic-specific factors that affect the likelihood and impact of risks, such as an AI agent’s access to sensitive data and its level of autonomy.
- Further bound the risks by defining appropriate limits and permission policies for each AI agent. This includes setting limits on an AI agent’s autonomy, tools, and systems. For example, organisations should define standard operating procedures (‘SOPs’) for agentic AI workflows that an AI agent is constrained to follow, rather than giving the AI agent the freedom to define every step of the workflow. A sketch of what such a permission policy might look like follows this list.
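To make the idea concrete, the sketch below shows one possible way of expressing such limits in code. It is purely illustrative: the MGF describes the principle, not an implementation, and every name here (AgentPolicy, allowed_tools, the refund example and its SGD cap) is a hypothetical assumption.

```python
# Illustrative only: the MGF does not prescribe any implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    """Declarative limits on one AI agent's autonomy, tools, and systems."""
    allowed_tools: frozenset          # tools the agent may invoke
    max_payment_sgd: float            # hard cap on any single transaction
    needs_human_approval: frozenset   # actions that must be escalated

CUSTOMER_SERVICE_POLICY = AgentPolicy(
    allowed_tools=frozenset({"lookup_order", "issue_refund"}),
    max_payment_sgd=100.0,
    needs_human_approval=frozenset({"issue_refund"}),
)

def is_action_permitted(policy: AgentPolicy, tool: str, amount: float = 0.0) -> bool:
    """Deny by default: run an action only if the policy explicitly allows it."""
    return tool in policy.allowed_tools and amount <= policy.max_payment_sgd

# The agent may look up orders freely, but a SGD 500 refund is out of bounds:
assert is_action_permitted(CUSTOMER_SERVICE_POLICY, "lookup_order")
assert not is_action_permitted(CUSTOMER_SERVICE_POLICY, "issue_refund", amount=500.0)
```

A deny-by-default check of this kind keeps the agent inside boundaries the organisation set upfront, rather than relying on the agent to police itself.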
- Make humans meaningfully accountable
As deployers, organisations and humans should remain accountable for the decisions and actions of AI agents. To remain accountable, organisations should consider:[6]
- Having a clear allocation of responsibilities within and outside the organisation, and vis-à-vis other organisations along the AI agent value chain
- Designing effective human oversight, such as requiring human approval at significant checkpoints or before sensitive actions are executed
- Considering what form approvals should take, such as keeping approval requests contextual and digestible. A minimal approval-gate sketch follows this list.
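By way of illustration only, a human-approval checkpoint could be wired in as below. The MGF states the principle without prescribing an API; ProposedAction, execute_with_checkpoint, and the console approver are all assumptions made for this sketch.

```python
# A minimal sketch of "human approval before sensitive actions".
# Every name below is a hypothetical assumption, not the MGF's design.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ProposedAction:
    name: str                    # e.g. "issue_refund"
    rationale: str               # the agent's short, contextual explanation
    run: Callable[[], str]       # the side-effecting step itself

def execute_with_checkpoint(action: ProposedAction, sensitive: bool,
                            approve: Callable[[str], bool]) -> Optional[str]:
    """Execute an action, pausing for human sign-off on sensitive steps."""
    if sensitive:
        # Keep the approval request digestible: a one-line summary,
        # not a raw agent transcript.
        if not approve(f"{action.name}: {action.rationale}"):
            return None          # the human declined; the agent stops here
    return action.run()

# Example: a console prompt standing in for a proper review interface.
refund = ProposedAction("issue_refund", "duplicate charge on order #1042",
                        run=lambda: "refund issued")
outcome = execute_with_checkpoint(
    refund, sensitive=True,
    approve=lambda msg: input(f"Approve '{msg}'? [y/n] ").strip().lower() == "y")
```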
- Implement technical controls and processes
Additional controls are necessary at key stages of the implementation lifecycle of an Agentic AI system. These stages and the considerations to note are:[7]
- During design and development, implement technical controls to mitigate the identified risks. The AI agent’s impact on the external environment should also be limited by enforcing least-privilege access to tools and data; a sketch of this appears after this list.
- Pre-deployment, test AI agents for safety and security, including agent-specific dimensions such as overall task execution, policy adherence, and tool-use accuracy.
- When deploying, gradually roll out agents and monitor them continuously in production. As AI agents are adaptive and autonomous, organisations should have mechanisms to respond to unexpected or emergent risks, supported by real-time monitoring post-deployment to ensure that AI agents function safely.
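As a purely hypothetical sketch of the least-privilege idea referenced above, one approach is to route every tool call through a gateway that checks the scopes the agent was actually granted. The scope names, the ToolGateway class, and the CRM example are assumptions, not anything the MGF specifies.

```python
# Hypothetical illustration of least-privilege tool access.
READ_ONLY = {"crm.read"}

class ToolGateway:
    """Mediates every tool call an agent makes against its granted scopes."""
    def __init__(self, granted_scopes):
        self.granted = set(granted_scopes)

    def call(self, tool: str, required_scope: str, **kwargs):
        if required_scope not in self.granted:
            # Deny by default: the agent was never granted this scope.
            raise PermissionError(f"{tool} requires scope '{required_scope}'")
        print(f"calling {tool} with {kwargs}")  # stand-in for the real tool call

# A triage agent that only ever needs to read customer records:
gateway = ToolGateway(granted_scopes=READ_ONLY)
gateway.call("lookup_customer", required_scope="crm.read", customer_id=42)
try:
    gateway.call("update_customer", required_scope="crm.write", customer_id=42)
except PermissionError as blocked:
    print(f"blocked: {blocked}")  # write access was never granted
```

Granting each agent only the scopes its use case requires limits the blast radius of any unauthorised or erroneous action.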
- Enable end-user responsibility
As end-users are ultimately the ones who use and rely on AI agents, human accountability also extends to them. Organisations should cater to users with differing information needs so that they can use AI responsibly, and should consider the following principles:
- Transparency. Organisations should inform users of the AI agents’ capabilities and the contact points to whom users can escalate if an AI agent malfunctions. Use cases where users interact with AI agents include customer service and HR agents.[8]
- User education. Organisations should educate users on the proper use and oversight of AI agents. Sufficient training should be provided to ensure that users retain foundational skills. Use cases where users integrate AI agents into their work processes include coding assistants and enterprise workflows.[9]
What’s next?
Agentic AI systems can bring about positive changes to Singapore’s digital economy. Organisations deploying or considering Agentic AI should review the MGF thoroughly to understand the risks and implement the measures outlined above.
The Infocomm Media Development Authority (‘IMDA’) currently welcomes feedback from interested parties to refine the framework, as well as submissions of case studies that demonstrate how Agentic AI can be responsibly deployed.[10]
Should you have any questions on the points discussed above, or if you would like advice on Agentic AI systems in Singapore, please do not hesitate to get in touch with any of our legal experts listed below.
[1] MGF at p 1.
[2] https://www.sgpc.gov.sg/detail?url=/media_releases/imda/press_release/P-20260122-2&page=/detail&HomePage=home
[3] https://www.sgpc.gov.sg/detail?url=/media_releases/imda/press_release/P-20260122-2&page=/detail&HomePage=home
[4] MGF at p 1.
[5] MGF at p 9.
[6] MGF at pp 13-16.
[7] MGF at p 18.
[8] MGF at p 22.
[9] MGF at p 22.
[10] https://www.sgpc.gov.sg/detail?url=/media_releases/imda/press_release/P-20260122-2&page=/detail&HomePage=home



