Regulating with AI While Regulating AI: The Dual Role of Capital Markets Authorities

02/02/2026 Finance and investment | Ahmed Saeed Alnaqbi


Artificial intelligence is increasingly shaping capital markets, from algorithmic trading and robo-advisory services to automated compliance and risk management systems. As this transformation accelerates, capital markets authorities now occupy a distinct dual role: they are beginning to use AI to enhance regulatory supervision, while simultaneously regulating the use of AI by market participants.

This dual responsibility is not merely an opportunity; it is becoming unavoidable. Markets now move with a speed and complexity that periodic, rules-based supervision struggles to match. At the same time, the rapid deployment of AI by regulated firms places new demands on supervisors to understand how automated decisions are generated, governed, and controlled.

In a regulatory context, agentic AI refers to AI systems capable of independently executing predefined supervisory tasks, such as monitoring, prioritisation, and alert generation, within strict regulatory rules and under continuous human oversight. These systems do not replace supervisory judgment; rather, they reshape how that judgment is exercised.

For example, AI-enabled supervisory agents could continuously analyse trading activity and corporate disclosures, flagging anomalous patterns or emerging risks for human review. In practice, this shifts supervisory attention away from broad, manual screening toward more focused assessment of higher-risk cases. Importantly, final supervisory or enforcement decisions must remain entirely human-led.
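
To make this concrete, the sketch below shows one way such a triage step might look: a simple statistical screen that flags outsized daily price moves and queues them for human review. It is purely illustrative and not any authority's actual system; the names, thresholds, and sample data are assumptions, and a production screen would draw on far richer models and data.

```python
"""Illustrative sketch only: a statistical screen that flags outsized
daily price moves for human review. Names, thresholds, and data are
assumptions, not any authority's actual supervisory system."""

from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class Alert:
    instrument: str
    day: int
    move_pct: float
    z_score: float  # distance of the move from the instrument's norm


def flag_for_review(daily_moves: dict[str, list[float]],
                    threshold: float = 2.0) -> list[Alert]:
    """Return a review queue of moves more than `threshold` standard
    deviations from an instrument's mean. The queue is an input to
    human judgment, not a finding or an enforcement decision."""
    alerts: list[Alert] = []
    for instrument, series in daily_moves.items():
        if len(series) < 2:
            continue  # too little history to estimate a norm
        mu, sigma = mean(series), stdev(series)
        if sigma == 0:
            continue  # flat series: nothing to measure deviation against
        for day, move in enumerate(series):
            z = (move - mu) / sigma
            if abs(z) >= threshold:
                alerts.append(Alert(instrument, day, move, z))
    # Prioritise by severity so supervisors see the riskiest cases first.
    return sorted(alerts, key=lambda a: abs(a.z_score), reverse=True)


if __name__ == "__main__":
    sample = {  # hypothetical daily % price moves
        "ABC": [0.2, -0.1, 0.3, 8.5, 0.0, -0.2],
        "XYZ": [0.1, 0.1, 0.2, 0.1, 0.0, 0.1],
    }
    for a in flag_for_review(sample):
        print(f"Review: {a.instrument} day {a.day}, "
              f"move {a.move_pct:+.1f}% (z={a.z_score:.1f})")
```

The design point is the return value: a prioritised review queue, not a determination, so the final judgment on every flagged case remains with human staff.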

From a supervisory perspective, the most difficult questions are not technical ones, but governance questions: when to trust AI output, when to override it, and how to evidence that judgment after the fact. These questions become more pressing as AI systems move from passive analytics toward more autonomous behaviour.

Regulating with AI therefore requires more than technology adoption. It requires robust internal governance, clarity of accountability, and institutional understanding of AI limitations. Automated outputs should be treated as inputs to regulatory judgment, not determinations in themselves. In practice, the greater risk is not that regulators adopt AI too slowly, but that they adopt it without sufficient internal challenge, testing, and supervisory scepticism.
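
One way to operationalise that principle, sketched hypothetically below, is to require a recorded human decision before any AI-generated alert can progress; all names and fields here are assumptions, offered only to illustrate how judgment could be evidenced after the fact.

```python
"""Hypothetical sketch: an AI alert becomes a supervisory action only
through a recorded human decision. All names and fields are assumptions."""

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class HumanDecision:
    """Audit record evidencing that a person, not the model, decided."""
    alert_id: str
    reviewer: str
    outcome: str    # e.g. "escalate", "dismiss", "request information"
    rationale: str  # why the reviewer agreed with or overrode the AI output
    decided_at: datetime


def record_decision(alert_id: str, reviewer: str,
                    outcome: str, rationale: str) -> HumanDecision:
    # No rationale, no action: the written record is what makes the
    # human judgment reviewable after the fact.
    if not rationale.strip():
        raise ValueError("Every decision requires a stated human rationale.")
    return HumanDecision(alert_id, reviewer, outcome, rationale,
                         decided_at=datetime.now(timezone.utc))
```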

Supervisory authorities also face practical constraints, such as legacy systems, fragmented data, and skills gaps, that shape how quickly and safely advanced AI can be deployed. These realities matter, and they should temper expectations about what AI can realistically deliver in the short term.

At the same time, capital markets authorities must continue to regulate the growing use of AI within the market. As AI systems used by firms become more autonomous and adaptive, supervisory focus must extend beyond outcomes alone to include model governance, data integrity, accountability, and operational resilience. Where AI influences decisions, responsibility must remain clearly assigned to identifiable individuals or governing bodies.

The interaction between using AI internally and regulating it externally provides regulators with a practical advantage. Internal use of AI changes how supervisory teams ask questions, challenge assumptions, and interpret signals during real supervisory work. This experience supports more proportionate and credible regulation, grounded in operational reality rather than abstraction.

Regardless of the level of AI autonomy involved, accountability for supervisory outcomes must always rest with the regulatory authority and its designated decision-makers. Human oversight remains essential, not as a formality, but as a safeguard for due process, proportionality, and public trust.

As international regulatory forums increasingly examine the role of AI in financial markets, shared principles around transparency, governance, and human-in-the-loop oversight will be critical to maintaining cross-border confidence and supervisory cooperation.

Ultimately, regulating with AI while regulating AI is not a contradiction. It reflects the evolving responsibilities of modern regulators operating in increasingly digital markets, where credibility depends not on adopting new tools quickly, but on governing them carefully.

 

Ahmed Saeed Alnaqbi

Senior Financial Analyst
