
Human-centric and outcome-focused approaches to AI development and governance in financial services

Project Team: Yulu Pi, Prof Cagatay Turkay, University of Warwick

Daniel Bogiatzis-Gibbons, Financial Conduct Authority

Funded by ESRC IAA Policy Support Fund

A photo showing post-its on a piece of paper used during a workshop

The project addresses the “pacing problem” in AI governance: the growing gap between the rapid evolution of AI technologies and the slower development of regulatory frameworks. This mismatch creates governance gaps, weakens oversight, and erodes public trust. The Financial Conduct Authority (FCA), our project partner, faces this challenge as it transitions toward principle- and outcome-based regulation, moving away from rigid, detailed rules toward regulation that prioritizes consumer outcomes. While the FCA has traditionally used online experiments and surveys to inform policy for conventional financial products, emerging AI-driven services (such as those powered by large language models) require new methods to evaluate complex human–AI interactions effectively.

The project has made progress toward its objectives through two sequential workshops examining the governance challenges of Interactive AI (IAI) systems and the role of empirical research in understanding their societal effects.

The first workshop convened participants from academia, regulatory bodies, the private sector, and civil society, with expertise spanning behavioral science, computer science, economics, law, public policy, community engagement, and AI ethics. Its three core aims were to:

  • Identify key governance challenges arising from the relational and adaptive nature of IAI systems;
  • Develop new methods for behavioral insights, addressing the limitations of traditional approaches for studying human–AI interactions; and
  • Translate behavioral insights into policy, exploring how evidence can effectively inform regulatory decision-making.

This workshop successfully established a shared dialogue between researchers, policymakers, and civil society, resulting in a paper presented at the Artificial Intelligence, Ethics, and Society (AIES) conference (see reference below).

Building on the first workshop, the second workshop focused on collaboratively developing a methodological toolkit to guide future empirical research on human–AI interactions. Following the workshop, preliminary versions of the toolkit and a summary of discussions were circulated for participant feedback and validation. This work remains ongoing.

References:

Pi, Y., Turkay, C. and Bogiatzis-Gibbons, D., 2025, October. Interactive AI and Human Behavior: Challenges and Pathways for AI Governance. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (Vol. 8, No. 3, pp. 2016-2029).

Link to the paper
