There is a new frontier AI model called Claude Mythos. You probably have not heard of it. Anthropic has not released it publicly yet. Instead, they launched it on 7 April 2026 to a restricted consortium of organisations under an initiative called Project Glasswing, explicitly acknowledging that the model’s capabilities are too significant for general availability.
Mythos is Anthropic’s most capable model to date. They describe it as a step change over Claude Opus 4.6, not an incremental upgrade. It is a general‑purpose model, but its cybersecurity capabilities in particular are dramatically more advanced than anything previously released.
Claude Mythos: What Changed
One of the most striking disclosures came from the publicly available Claude Mythos Preview paper on what it was able to achieve. The model identified thousands of previously unknown vulnerabilities across every major operating system and every major web browser, including:
- discovery of a 27-year-old vulnerability in an operating system widely used in firewalls and critical infrastructure;
- identification of a 16-year-old flaw in widely used video processing software that had passed millions of automated test runs; and
- a 17-year-old vulnerability that allows an unauthenticated attacker anywhere on the internet to gain root access to a server.
Traditional AI governance has focused on:
- policies and ethical principles;
- privacy, fairness and bias controls;
- documentation and model risk assessments; and
- model selection and vendor due diligence.
These elements remain necessary. However, they are no longer sufficient for this new era. Governance must now also cover:
- decision rights: who authorises autonomy;
- action authority: what the system is allowed to do;
- boundaries and enforcement: what is prevented in practice; and
- evidence and auditability: what can be reconstructed and defended.
Most importantly, this shift requires a change in leadership mindset. AI governance must move from policy driven oversight to operational control of autonomous decision making.
Impact on Business Functions
A) Cybersecurity Teams
Cyber governance must assume that AI becomes both:
- an accelerator for defensive capabilities; and
- if misused or poorly governed, a threat actor in its own right.
Key areas to focus on:
- Containment and trust boundaries: Implement hard technical boundaries such as segmentation, controlled tool access and restricted external connectivity.
- Autonomy controls: Clearly define and enforce what AI systems can do, for example read only, recommend or execute with approval. Any allowance for autonomous execution must align with business risk appetite and AI policy.
- Threat model redesign: Update threat models to account for automation that operates faster than traditional human review cycles.
- Auditability at speed: Ensure robust logging and periodic review to support investigations, outcomes analysis and impact assessment.
- Third party AI exposure: Vendor assurance must include autonomy levels, tool access, change control, incident disclosure and audit rights.
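The autonomy controls above (read only, recommend, execute with approval) can be made concrete as a default-deny authorisation gate in front of every tool call. The sketch below is illustrative only; the tool names and the `authorise` helper are hypothetical, not part of any real product.

```python
from enum import Enum

class Autonomy(Enum):
    READ_ONLY = 1
    RECOMMEND = 2
    EXECUTE_WITH_APPROVAL = 3

# Hypothetical registry mapping each AI-accessible tool to its approved autonomy level.
TOOL_POLICY = {
    "search_logs": Autonomy.READ_ONLY,
    "draft_firewall_rule": Autonomy.RECOMMEND,
    "apply_firewall_rule": Autonomy.EXECUTE_WITH_APPROVAL,
}

def authorise(tool: str, action: str, human_approved: bool = False) -> bool:
    """Return True only if the requested action is within the tool's autonomy level."""
    level = TOOL_POLICY.get(tool)
    if level is None:
        return False  # default-deny: unregistered tools may not act at all
    if action == "read":
        return True
    if action == "recommend":
        return level in (Autonomy.RECOMMEND, Autonomy.EXECUTE_WITH_APPROVAL)
    if action == "execute":
        # execution always requires both the right level and a human approval
        return level is Autonomy.EXECUTE_WITH_APPROVAL and human_approved
    return False
```

The key design choice is that the gate defaults to deny: an unregistered tool or an unapproved execution is blocked, which aligns the technical boundary with the written policy rather than relying on the policy alone.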
B) Legal Teams
Key areas to focus on:
- Attribution and accountability: Clarify who is responsible when AI systems act unexpectedly and who authorised autonomy on what lawful basis.
- Defensible governance position: Be able to demonstrate why authority was delegated, what controls were in place and why oversight was proportionate.
- Contracting and procurement: Update contracts to address autonomy thresholds, audit rights, model or tool changes, notification requirements, incident reporting and liability allocation.
- Regulatory readiness: Maintain a proactive dialogue with regulators, supported by evidence that decisions, controls and escalation mechanisms can be reconstructed when required.
- Legal supply chain: Many legal and compliance tools are built on, or integrated with, Claude models. When Anthropic upgrades its underlying systems, those tools change as well. A platform that behaved predictably on a previous model may behave differently after an upgrade. This raises the bar on how those outputs are reviewed, validated and relied upon by your teams. The questions to put to your vendors are: which models are you using, how often do they change, what testing is done before upgrades and what notice do you receive?
C) Data, Privacy and AI Governance Teams
Data governance expands from lifecycle management to decision lineage and governance of AI generated artefacts.
Key areas to focus on:
- Data traceability: Ensure clarity on how training data and inputs relate to outputs and decision outcomes.
- Govern AI generated artefacts: Treat reasoning chains, inferences, rankings and prioritisation logic as governed records with defined ownership, retention, use of synthetic data and auditability.
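Treating AI generated artefacts as governed records can be as simple as a record schema that carries ownership, input lineage and retention alongside the artefact itself. The sketch below is a hypothetical illustration; the field names and `GovernedArtefact` class are assumptions, not a standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class GovernedArtefact:
    """Hypothetical schema for an AI generated artefact (a reasoning chain,
    ranking or inference) treated as a governed record."""
    artefact_id: str
    kind: str              # e.g. "reasoning_chain", "ranking", "inference"
    owner: str             # accountable business owner
    source_inputs: list    # identifiers of the data that produced it
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    retention_days: int = 365

    def expires_at(self) -> datetime:
        # retention is defined at creation, not decided ad hoc at deletion time
        return self.created_at + timedelta(days=self.retention_days)

    def lineage(self) -> dict:
        """Minimal decision-lineage view: what produced this artefact, who owns it."""
        return {"artefact": self.artefact_id,
                "inputs": self.source_inputs,
                "owner": self.owner}
```

The point of the sketch is that lineage and retention travel with the record, so a later audit can answer "what produced this output and who was accountable for it" without reconstruction.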
D) Enterprise Risk and Operational Risk Teams
Risk governance must shift from periodic assessments to real time oversight, with autonomy treated as a first class risk dimension.
Key areas to focus on:
- Autonomy as a risk dimension: Classify AI systems based on autonomy levels aligned with organisational risk appetite.
- Cross functional accountability: Require joint risk ownership and sign off across senior management and relevant business functions rather than siloed assessments.
- Board visibility: Maintain a clear view of where AI is making decisions, which actions are prohibited, how controls are enforced and how risk aligns with board strategy.
The Board Level Questions That Matter
Boards need greater clarity on control and accountability as AI evolves and its impact on wider business stakeholders grows. Key questions include:
- Where is AI making decisions today, explicitly or implicitly?
- What actions can AI take without human approval and how is this enforced?
- What are the hard boundaries around data, tools and systems, and how are breaches monitored?
- If something goes wrong, can we reconstruct what happened and why?
- Who is accountable for AI driven actions: the business owner, IT or the vendor?
- How do third party AI systems change our risk exposure and obligations?
- What is the stop or escalation model and can it operate at machine speed?
A Practical Way Forward
- AI Policy and Risk Management Framework: Ensure that existing AI policies and risk management frameworks are reviewed and updated to reflect advances in AI capabilities. Conduct targeted risk assessments to clearly identify how these developments impact the business, and update documentation accordingly. Effective change management and training are essential to operationalising updated policies and frameworks, and to setting the right tone for AI governance across the organisation.
- Autonomy Inventory: Maintain an up to date inventory of AI use cases, showing where autonomy exists, what systems and data are accessible and how independent each system is.
- Decision Rights and Ownership: Clearly define who approves, manages, changes and responds to each AI system to avoid ambiguity during incidents.
- Enforced Boundaries: Establish real, technical limits on what AI can do rather than relying on written rules alone. Require human approval for sensitive actions.
- Audit governance for agentic workflows: Identify what actions AI systems can take, who authorises them, and how those actions are monitored and reviewed.
- Challenge your tech vendors: Which models are you using? How often do they change? What testing is done before upgrades? What notice do you receive?
- Evidence and Auditability: Keep clear records of actions, decisions and system changes to enable defensible explanations to regulators, auditors and boards.
- Brief your leadership: The Mythos story is exactly the kind of development that GCs, CCOs and board‑level risk committees should hear about from you, with context, not from the media without it.
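The evidence and auditability point above implies logs that can be defended, not just kept. One common pattern is an append-only log in which each entry hashes its predecessor, so tampering or gaps are detectable on replay. The sketch below is a minimal illustration of that pattern; the `AuditLog` class and its fields are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log of AI actions; each entry includes a hash of the
    previous entry so tampering or deletion is detectable on later review."""

    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str, detail: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev": prev_hash,
        }
        # hash the entry body (everything except the hash itself)
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Replay the chain: any altered or missing entry breaks verification."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

A chained log of this kind is what makes the "can we reconstruct what happened and why" board question answerable with evidence rather than assertion.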
The most important lesson from developments such as Claude Mythos is that AI governance is becoming an operational discipline. An effective AI governance programme must evolve in step with technological advancement, enabling innovation while preventing harm, disruption or unintended consequences.
When systems can act autonomously, governance must evolve beyond policies and committees toward institutional controls that operate at speed, with clear decision rights, enforceable boundaries, auditable evidence and accountable ownership.

For further information, please contact:
Richard Chudzynski, Partner, Konexo
richardchudzynski@konexoglobal.com
