
PhD in Verification and Validation of Responsible Generative AI in Automotive


Project Overview


As autonomous vehicles (AVs) transition from laboratories and test tracks to public roads, ensuring their safety is paramount, as exemplified by the Cruise AV incident in October 2023. Utilising synthetic data for training and virtual testing is increasingly recognised as an effective practice for assuring AV safety. In addition to traditional simulators, Generative AI (GAI) is becoming a new way to generate synthetic data in the AV domain. However, how to ensure the responsible use of GAI for such safety-critical systems remains a key barrier, and this is the question that motivates the project.

In the scope of using GAI for training and testing AV perception components, we put forward the following research hypothesis: the effectiveness of GAI models in generating data for training and testing AV perception components hinges on adhering to key properties — robustness, explainability, fairness, privacy, and security. Each property must be clearly defined with measurable metrics and efficient estimation methods. For instance, robustness can be evaluated in terms of resilience to input variations, while explainability concerns the transparency of the model's decision-making. Once these properties are accurately verified, targeted improvement methods can be proposed to enhance the GAI model in these specific areas. To validate this approach, the creation of a benchmark and the conduct of case studies are crucial; these would serve as a standard for evaluating and refining GAI models, ensuring they meet ethical standards and contribute to the development of safer and more responsible AV technologies.

To the best of our knowledge, no existing work performs verification and validation of GAI across multiple dimensions — robustness, explainability, fairness, privacy, and security — specifically tailored for AV perception.

Our aim is to design a responsible GAI framework for AV perception by implementing the following programme: 1) developing a set of formally defined properties with metrics covering aspects such as robustness, explainability, fairness, privacy, and security, ensuring a comprehensive and holistic framework; 2) establishing efficient verification methods and tools for accurately and reliably assessing the metrics of those defined properties from diverse perspectives and scenarios; 3) constructing a benchmark comprising the defined properties, metrics, estimation tools, and a selection of AV perception models, as a publicly accessible standard for validating the responsibility of GAI in the AV perception context; and 4) conducting a case study with industrial partners to demonstrate the efficacy of the proposed framework.

As a PhD student, you will be involved in a cutting-edge research programme at the intersection of safety and reliability, software engineering, and machine learning. The project will involve probabilistic modelling, statistical inference, algorithm design and optimisation, and empirical experiments on AI/ML models. The successful candidate will receive a competitive stipend and full tuition fee support for the duration of the 3.5-year PhD project. We look forward to welcoming a highly motivated and talented student to join our research team.

Essential and Desirable Criteria


Essential: At least a 2:1 undergraduate or Master's degree in Machine Learning, Computer Science, Software Engineering, Systems Engineering, Statistics, Robotics, or a related discipline. Strong theoretical and experimental skills, along with a keen interest in interdisciplinary research.

Desirable: Prior experience in safe AI techniques, evidenced by publications, will be advantageous.

Funding and Eligibility


The studentship is available to students with eligible home fee status and to UK-domiciled EU students, with full awards for 3.5 years. A stipend at the UKRI rate and tuition fees at the UK rate will be paid.

To apply


To apply please complete our online enquiry form and upload your CV.

Please ensure you meet the minimum requirements before filling in the online form.

Key Information:

Funding Source: EPSRC

Stipend: UKRI standard stipend rate: £19,237 for 2024/25

Supervisor: Xingyu Zhao

Eligibility: Available to eligible home fee status students and UK-domiciled EU students

Start date: October 2024