In AI-driven investment services, and across financial and insurance services more broadly, the question of who governs the “machine” is the central governance problem that existing regulatory frameworks have not yet fully resolved.
This article addresses that problem through the regulatory case of investment advice and portfolio management, not because that case is unique, but because it exemplifies two structurally connected mismatches. The first is the Governance Gap. This arises because DORA sets out a broad technology governance framework, while the AI Act imposes binding AI governance requirements only on a defined category of high-risk use cases, leaving a significant middle ground ungoverned. The second is the Accountability Gap. EU financial services law places ultimate, non-delegable accountability on the management body. In practice, however, the people who actually control how AI systems behave sit in IT and operations functions, not at board level. This tension arises wherever an AI system is the primary mechanism through which a regulated service is delivered.[1]
Both mismatches emerge at the intersection of three simultaneously applicable instruments: MiFID II and Commission Delegated Regulation (EU) 2017/565, establishing the accountability and conduct framework; DORA, governing ICT risk without supplying AI-specific governance tools; and the AI Act, restricting its binding governance requirements to a high-risk perimeter that leaves the most operationally significant AI deployments in investment services unaddressed. It is in the space between these three instruments, where DORA’s governance obligations reach but the AI Act’s binding standards do not, that both gaps open.
This article proposes a governance architecture that bridges both gaps, providing organisational and operational solutions that are grounded in both structural design and institutional culture and of general application across the financial sector. Section I identifies the Governance Gap and establishes an internal governance standard on the legal basis of DORA. Section II examines the Accountability Gap and identifies its structural irreducibility. Section III proposes the three-function control architecture and the oversight mechanisms through which it is implemented.
Section I, The Governance Gap and the Internal Governance Standard
1.1 The Structural Misalignment
AI systems deployed by financial institutions are ICT assets within the meaning of DORA and are therefore subject to its full governance and risk management requirements irrespective of their AI Act classification. DORA’s ICT risk management framework is deliberately technology-neutral: it does not impose AI-specific governance requirements, and the management body’s ultimate and non-delegable responsibility under Article 5(2)(a) accordingly operates without AI-specific operational content.
The AI Act supplies that content (autonomy constraints, human oversight, controllability, foreseeable risk), but restricts binding obligations to a high-risk perimeter that excludes the most operationally significant AI deployments in investment services. Robo-advisory, portfolio management, and corporate credit scoring are each likely to be critical to a financial institution’s business model. As ICT assets, they fall squarely within DORA’s governance framework. Yet none of them falls within the AI Act’s designated high-risk financial use cases. The result is a governance vacuum: while DORA’s obligations extend to this area, the AI Act’s binding standards do not.
1.2 Closing the Gap: The Internal Governance Standard
The legal basis for closing the gap is already present in DORA. Article 5(2) requires the management body to define, approve, oversee, and be responsible for all arrangements related to the ICT risk management framework, which directly grounds the formal adoption of an internal AI governance standard as a governance act of the management body. Article 6(5) requires continuous improvement of that framework having regard to relevant standards and good practices.
The AI Act provider obligations (Articles 9–24 and 72–73) offer the most detailed and comprehensive lifecycle framework available, making them a natural backbone for any internal AI governance standard. Instruments such as the NIST AI RMF, ISO/IEC 42001, and the OECD AI Principles can then be used to fill in the detail when implementing specific measures.
Article 9(10) permits the AI Act risk management system to form part of the DORA ICT risk management procedures, enabling integration rather than duplication. The formal adoption of the AI Act provider requirements as the institution’s internal AI governance standard is a governance decision falling directly within DORA Articles 5(2) and 6(5).
Once that decision is taken, it translates directly into practical obligations through the quality management system under Article 17, organised across three dimensions: upstream control (risk management, data governance, design, testing, and validation); operational monitoring (human oversight, post-market monitoring, incident reporting, and corrective action); and documentary record (technical documentation and record-keeping). Governance intensity is calibrated proportionately to each system’s assessed criticality, linking business impact assessment, DORA ICT asset criticality, and incident classification thresholds within a single integrated framework.
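By way of illustration only, the following Python sketch shows one way such proportionality-based calibration might be expressed inside a firm’s own tooling. The criticality tiers, review frequencies, and control labels are hypothetical assumptions made for the sketch; actual tiers would follow the firm’s own DORA business impact assessment and ICT asset classification.

```python
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    """Hypothetical criticality tiers for illustration only."""
    LOW = 1
    MATERIAL = 2
    CRITICAL = 3

@dataclass(frozen=True)
class ControlProfile:
    """Governance intensity attached to a criticality tier."""
    monitoring_frequency: str   # how often post-market metrics are reviewed
    human_oversight: str        # output-level control regime
    documentation_depth: str    # technical documentation / record-keeping scope

# Illustrative calibration table: governance intensity scales with criticality.
CONTROL_PROFILES = {
    Criticality.LOW:      ControlProfile("quarterly",  "sample-based review", "summary"),
    Criticality.MATERIAL: ControlProfile("weekly",     "pre-release review",  "full"),
    Criticality.CRITICAL: ControlProfile("continuous", "mandatory sign-off",  "full + audit trail"),
}

def profile_for(asset_criticality: Criticality) -> ControlProfile:
    """Look up the control intensity owed to a system of a given criticality."""
    return CONTROL_PROFILES[asset_criticality]
```

The design point is that governance intensity becomes a looked-up property of the asset’s assessed criticality rather than an ad hoc decision taken system by system.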
Section II, The Accountability Gap: Formal Responsibility and Operative Control
2.1 Why AI Risk Makes the Gap Structurally Irreducible
The governance consequences of deploying AI systems in investment services cannot be assessed through the lens of traditional ICT risk management alone. Opacity, the inability to fully understand the internal logic of machine-learning systems, even for their developers, turns what might appear to be a disclosure issue into a systemic governance constraint, compelling firms to design and monitor AI tools in a manner that preserves meaningful institutional oversight. Opacity also compounds algorithmic bias: discriminatory outcomes originating in training data or model design remain invisible to those responsible for the service until they manifest in complaints, supervisory findings, or litigation.

Automation bias, the tendency to place increasing reliance on machine-generated outputs, gradually erodes the human oversight that the regulatory framework formally requires. The AI Act recognises this not as a failure of individual judgement, but as an inherent feature of how humans interact with automated systems over time. Model drift, the silent divergence of a model’s operative behaviour from its validated state due to data distribution shifts or market changes, allows a gradual degradation in output quality, with no visible failure signals, to harden into legal and reputational exposure before any corrective intervention is triggered.

Together, these features define an environment in which the traditional assumption of ICT governance, that technology performs a supporting and non-decisional function, no longer holds. An AI system that shapes the information on which an adviser bases a recommendation is not merely a support tool. It is, in a meaningful sense, driving the judgement itself. That is why governing the AI system’s behaviour cannot be separated from governing the service it delivers.
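Of these features, model drift is the one most amenable to quantitative detection. As a minimal sketch, assuming a numeric input feature and an alert threshold chosen purely for the example, the following Python code applies a two-sample Kolmogorov-Smirnov test comparing the live distribution of that feature against the distribution recorded at validation time; a real deployment would monitor many features and calibrate its thresholds against the model’s validated performance envelope.

```python
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

# Hypothetical alert threshold, chosen for the example only.
DRIFT_P_VALUE_THRESHOLD = 0.01

def detect_feature_drift(validated_baseline: np.ndarray,
                         live_window: np.ndarray) -> bool:
    """Return True when the live distribution of a model input feature
    no longer resembles the distribution the model was validated on.
    Drift of this kind produces no visible failure signal in individual
    outputs, which is why it must be detected statistically."""
    _statistic, p_value = ks_2samp(validated_baseline, live_window)
    return p_value < DRIFT_P_VALUE_THRESHOLD
```

In the architecture proposed in Section III, a positive result from a check of this kind is precisely the sort of signal that the Post-Market Monitoring function would escalate.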
2.2 The Structural Disruption of Accountability
MiFID II, DORA, and the AI Act establish an accountability structure that is both layered and non-delegable. The management body bears ultimate accountability for the firm’s governance, digital operational resilience, and the investment services it delivers. The duty to act in clients’ best interests under MiFID II Article 24(1) rests with the investment firm irrespective of whether services are performed by a person or an AI system.
That accountability hierarchy assumes that those who are formally responsible also have effective control over what happens in practice. In AI-driven investment services, that correspondence is structurally disrupted: the management body bears non-delegable accountability for the investment service delivered through the AI systems the firm deploys, yet the features that determine what those systems do are substantially shaped by IT and operations functions whose professional domain is the operative architecture of those systems, not the regulatory framework governing the services they support. Accountability is formally concentrated at the top of the institutional hierarchy, but effective control over the processes that determine service outcomes sits largely elsewhere.
This structural tension cannot be resolved by reallocating regulatory responsibility to technical actors. ESMA confirmed in its Public Statement of 30 May 2024 that investment firms’ decisions remain the responsibility of management bodies irrespective of whether those decisions are taken by people or AI-based tools. Article 54(1) of Delegated Regulation (EU) 2017/565 operates as a non-derogation clause: responsibility for the suitability assessment is not reduced by the use of an electronic system. Neither outsourcing nor the contractual structure of an AI service relationship transfers regulatory responsibility away from the firm. Responsibility does not follow the service contract.
The Accountability Gap is, at its institutional core, a governance design problem. MiFID II, DORA, and the AI Act all assign non-delegable accountability to the management body. What they do not do is explain how that accountability is to be discharged in practice: specifically, how a firm should build an organisational control structure capable of exercising effective, proportionate, and traceable oversight of AI systems across all of its functions. Closing that gap requires establishing, from within the institution itself, an integrated governance architecture that accompanies AI systems considered critical to the provision of investment services from design to output delivery, making the management body’s legal accountability real rather than nominal.
Section III, The Three-Function Control Architecture
3.1 The Three Functions
Closing both gaps requires an architecture that embeds governance authority over AI system behaviour in the institution’s organisational design. The architecture comprises three functions, independent in their control remit and complementary in their operational arrangement, each occupying a defined position within the three-lines-of-defence structure.
Post-Market Monitoring (First-Level Technical Control): Operated by the Operations function, hierarchically separated from both the business lines and the Technology function where AI models are designed, this function is responsible for continuous monitoring of model performance, detection of drift and anomalous behaviour, and escalation where performance diverges materially from validated parameters. Its legal grounding lies in DORA’s anomaly detection requirements and the AI Act’s post-market monitoring standard under Article 72.
Human Oversight (First-Level Operational Control): Operated by the business function control team, this function assesses AI-generated outputs against the applicable suitability framework before those outputs are acted upon or communicated, and holds unconditional authority to disregard, modify, reject, or suspend those outputs within the policy approved by the management body. Its legal grounding lies in AI Act Article 14 and the MiFID II suitability framework.
ICT Risk Management (Second-Level Control): Operating with appropriate independence under DORA Article 6(4), this function independently evaluates and challenges the adequacy of both first-line functions, defines risk assessment methodologies, and recommends corrective actions to the competent management authority.
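To show that the hierarchical separation described above is a checkable property of organisational design rather than a mere policy statement, the following sketch encodes the three functions and flags reporting-line conflicts. The data structures and organisational identifiers are hypothetical assumptions, not terms drawn from DORA or the AI Act.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Function(Enum):
    POST_MARKET_MONITORING = auto()  # first-level technical control (Operations)
    HUMAN_OVERSIGHT = auto()         # first-level operational control (business control team)
    ICT_RISK_MANAGEMENT = auto()     # second-level independent challenge

@dataclass(frozen=True)
class Assignment:
    function: Function
    reporting_line: str  # hypothetical organisational unit identifier

def check_separation(assignments: list[Assignment], model_design_line: str) -> list[str]:
    """Flag breaches of the separation the architecture requires:
    Post-Market Monitoring may not report into the line where models
    are designed, and the second-level function may not share a
    reporting line with either first-level function."""
    lines = {a.function: a.reporting_line for a in assignments}
    issues = []
    if lines.get(Function.POST_MARKET_MONITORING) == model_design_line:
        issues.append("Post-Market Monitoring reports into the model design line")
    second_line = lines.get(Function.ICT_RISK_MANAGEMENT)
    for first in (Function.POST_MARKET_MONITORING, Function.HUMAN_OVERSIGHT):
        if second_line is not None and lines.get(first) == second_line:
            issues.append(f"{first.name} shares a reporting line with ICT Risk Management")
    return issues
```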
3.2 The Architecture as an Integrated System
The three functions form a circular arrangement: Post-Market Monitoring monitors the model; ICT Risk Management independently challenges the adequacy of that monitoring; Human Oversight governs the output and assumes final client-facing accountability. Across the AI lifecycle, any significant change to model architecture, training data, or deployment scope requires the prior involvement of all three functions; materially conflicting conclusions are escalated to the competent management authority. Across output monitoring, the three functions form a continuous feedback circuit for sharing performance signals, override logs, and independent challenge recommendations. Structural design alone is not enough. The circular arrangement only operates as a genuine system of control where the institution actively fosters a culture of communication and coordination across the three functions, supported by documented protocols, clear escalation procedures, and regular reviews of how well those functions are working together.
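The lifecycle gate described above reduces to a simple decision rule, sketched below. The function keys and verdict categories are assumptions made for the example; the sketch captures only the logical skeleton: a significant change is blocked until all three functions have assessed it, proceeds only on unanimity, and is escalated where their conclusions conflict.

```python
from enum import Enum

class Verdict(Enum):
    APPROVE = "approve"
    OBJECT = "object"

REQUIRED_FUNCTIONS = {"post_market_monitoring", "human_oversight", "ict_risk_management"}

def change_gate(verdicts: dict[str, Verdict]) -> str:
    """Gate a significant model change (architecture, training data,
    deployment scope) on the prior involvement of all three functions."""
    missing = REQUIRED_FUNCTIONS - verdicts.keys()
    if missing:
        return f"blocked: awaiting assessment from {sorted(missing)}"
    outcomes = {verdicts[f] for f in REQUIRED_FUNCTIONS}
    if outcomes == {Verdict.APPROVE}:
        return "approved"
    if outcomes == {Verdict.OBJECT}:
        return "rejected"
    # Materially conflicting conclusions are not resolved at this level.
    return "escalated to the competent management authority"
```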
3.3 Three Oversight Mechanism Sets
The three functions exercise control through three sets of mechanisms, graduated by the level at which they intervene in the AI output cycle.
Set 1, Ex-Ante Guardrails: Before any output is generated, the business function defines the constraints (on risk appetite, product complexity, target market, and suitability parameters) within which the AI system operates (MiFID II Article 25; AI Act Article 9). These guardrails are the structural precondition for any oversight exercise that follows.
Set 2, Output-Level Oversight: In decision-support contexts, the business function control team’s authority to disregard, modify, or reject AI-generated outputs is unconditional and determined solely by the internal policy approved by the management body. The control team instructs client-facing advisers to adopt, override, or modify AI-generated recommendations through documented governance acts and communicates AI output logic to clients in accessible terms where required. The override log feeds back into the continuous monitoring circuit.
Set 3, Circuit-Breaker Mechanism: In automated contexts, the circuit-breaker, activatable by the business function control team on signals from Post-Market Monitoring, enables real-time suspension of the AI system’s executable capacity whilst maintaining the commercial channel in advisory or read-only mode (AI Act Article 14(4)(e); DORA Articles 24–27). Activation engages ICT Risk Management’s independent review function, which assesses whether the interruption meets the criteria for DORA major incident classification and escalates its assessment to the competent management authority for determination of applicable reporting obligations.
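As a final illustration, the sketch below models the circuit-breaker’s state transitions under the assumptions just described: executable capacity is suspended on a signal from Post-Market Monitoring while the advisory channel remains open, and restoration is conditioned on ICT Risk Management’s independent review. Class, method, and state names are illustrative, not terms prescribed by the AI Act or DORA.

```python
from enum import Enum

class Channel(Enum):
    EXECUTABLE = "executable"    # AI outputs may trigger transactions
    READ_ONLY = "advisory_only"  # commercial channel open, execution suspended

class CircuitBreaker:
    """Minimal sketch of the Set 3 mechanism."""

    def __init__(self) -> None:
        self.state = Channel.EXECUTABLE

    def trip(self, signal: str) -> None:
        """Activated by the business function control team on a
        monitoring signal (e.g. confirmed drift): the system's
        executable capacity is suspended in real time."""
        self.state = Channel.READ_ONLY
        self._engage_independent_review(signal)

    def reset(self, independent_review_passed: bool) -> None:
        """Restoration of executable capacity is conditional on ICT Risk
        Management's independent review; assessment of the interruption
        against major-incident criteria proceeds separately."""
        if independent_review_passed:
            self.state = Channel.EXECUTABLE

    def _engage_independent_review(self, signal: str) -> None:
        # Placeholder for routing the signal to ICT Risk Management.
        print(f"ICT Risk Management review engaged: {signal}")
```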
Conclusions
The purpose of this article was to demonstrate that the Governance Gap and the Accountability Gap are structural legal and organisational challenges that any financial institution deploying AI systems in the delivery of regulated investment services must address by reconsidering its governance infrastructure. The three-function control architecture is the institutional response proposed: designed to align formal governance authority with operative control over AI system behaviour, it provides the organisational and operational framework through which those gaps can be closed.
Effective governance of AI systems is not, at its core, purely a matter of technical compliance architecture. It is equally a question of institutional culture: specifically, whether the firm is organised in a way that enables meaningful collaboration between the departments that hold formal authority, operational control, and independent oversight. What is at stake is institutional control: the capacity to determine, from within the firm’s governance structure, what objectives the AI system pursues, whose interests it serves, and what risks it generates. Only through the integration of the three functions does the architecture operate as a real system of control rather than a formal allocation of roles.
Where institutional culture does not support that integration, the gap between formal responsibility and real power widens: the management body is held accountable for outcomes it lacks the organisational capacity to control. The governance framework set out in this article is designed to prevent that outcome.

For further information, please contact:
Giuseppe D’Agostino, Bird & Bird
giuseppe.dagostino@twobirds.com
[1] An AI system constitutes the cognitive infrastructure of a regulated service where it structurally shapes the decisional landscape within which the firm operates — by directly generating a recommendation or by configuring the parameters, weightings and constraints within which that recommendation is produced — as distinct from systems that support operators on demand or process data for reporting purposes.




