Driving AI with the first quantifiable safety framework

Tuesday 7th October 2025

  • WMG and Wayve create first system-agnostic framework to improve AI safety

  • Closing the AI safety gap is critical to the real-world deployment of autonomous vehicles globally

  • WMG Professor presents framework to the United Nations Economic Commission for Europe (UNECE)

Experts at WMG, University of Warwick and Wayve – a leading AI technology developer – have created the first system-agnostic framework designed to bring a standardised, scientific approach to the testing of datasets for self-driving vehicles.


The framework, entitled Operational Design Domain (ODD)-based AI Safety In autonomouS Systems (OASISS), was presented to international policy makers and regulators at the UNECE Working Party on Automated/Autonomous and Connected Vehicles (GRVA)’s 23rd session by WMG’s Head of Safe Autonomy, Professor Siddartha Khastgir.

Autonomous systems, such as self-driving technology, rely on scenario data to train their AI to navigate and handle real-world situations. The OASISS framework, supported by high-level regulations and standards, can evaluate AI datasets to ensure self-driving systems can effectively handle potential situations encountered during real-world deployment.

The framework will use scientific evidence to determine whether a self-driving system or product is safe enough to operate in the real world. It will also help technology developers uncover the areas that their AI system overlooks and improve their training and testing.

Professor Siddartha Khastgir presented to international policy makers and regulators at the UNECE Working Party on Automated/Autonomous and Connected Vehicles (GRVA)’s 23rd session.

The OASISS framework follows a three-step process.

 1. Completeness 

  • Evaluate if the testing and training scenario datasets have considered both the operational design domain and the operational conditions that the system is likely to encounter.
  • For example, suppose a self-driving system is not designed to operate in snowy weather but is intended to operate in London, where it occasionally snows. The OASISS framework will check whether snowy scenarios are included in its testing and training datasets.
  • Ensure the system considers real-world possibilities, check whether it can work beyond its designed operational capability, and confirm it can handle such additional situations safely.
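The completeness step can be illustrated with a minimal sketch. This is a hypothetical example, not part of the published framework: the condition names and dataset structure are illustrative assumptions, showing only the idea of comparing the conditions a system may encounter in its deployment area against those covered by its scenario datasets.

```python
# Illustrative sketch of a completeness check in the spirit of OASISS step 1.
# Condition names and the set-based representation are assumptions for
# demonstration only.

# Operational conditions the system is likely to encounter in its
# deployment area (including some outside its designed ODD, e.g. snow
# in London).
deployment_conditions = {"clear", "rain", "fog", "snow"}

# Conditions actually covered by the testing and training scenario datasets.
dataset_conditions = {"clear", "rain", "fog"}

# Completeness gap: conditions the system may encounter but has never
# been trained or tested against.
missing = deployment_conditions - dataset_conditions
print(sorted(missing))  # ['snow']
```

A gap like `snow` here would flag that snowy scenarios must either be added to the datasets or addressed in a later acceptability argument.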

 2. Representativeness 

  • Check whether the testing and training datasets contain relevant scenarios, weighted by the frequency with which those situations occur.
  • For example, if the system will be operating in a rainy area, the OASISS framework will evaluate if the dataset adequately represents the rainfall distribution over time in the area of deployment.
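The representativeness step can likewise be sketched with a toy check. The figures and tolerance below are illustrative assumptions, not values from the framework; the sketch only shows the idea of comparing the proportion of a condition in the dataset against its real-world frequency in the deployment area.

```python
# Illustrative sketch of a representativeness check in the spirit of
# OASISS step 2. All numbers are assumptions for demonstration only.

# Assumed real-world frequency of rainy conditions in the deployment
# area (e.g. fraction of driving time with rain).
real_world_rain_rate = 0.30

# Fraction of scenarios in the dataset that involve rain.
dataset_rain_scenarios = 240
dataset_total_scenarios = 1000
dataset_rain_rate = dataset_rain_scenarios / dataset_total_scenarios  # 0.24

# Flag a representativeness gap if the dataset's rain proportion deviates
# from the real-world rate by more than an agreed tolerance.
tolerance = 0.05
gap = abs(dataset_rain_rate - real_world_rain_rate)  # 0.06
is_representative = gap <= tolerance
print(is_representative)  # False: the dataset under-represents rain
```

In practice such a check would compare full distributions (e.g. rainfall intensity over time) rather than a single proportion, but the principle is the same.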

 3. Acceptability argument 

  • After the first two evaluation steps, technology developers can provide evidence to justify why certain OASISS requirements are not met and why their system is nonetheless safe to operate.
  • Given that the OASISS framework is system-agnostic, the justification process enables developers to demonstrate their system’s possible shortcomings and give clarity to regulatory authorities.

The OASISS framework is part of DriveSafeAI, a £1.9m research project to develop scalable mechanisms and methodologies to prove that AI is safe to use in self-driving vehicles.

Read the paper in full here: https://wrap.warwick.ac.uk/id/eprint/192034/

Find out more about WMG’s Safe Autonomy research here: Safe Autonomy Research Group | WMG | University of Warwick
