
Exploring the use of AI in mathematics and statistics assessments

About the Project

The mathematical sciences and operational research (MSOR) community in HE is poorly prepared to adapt to the rapid rise of AI. Whilst in-person exams remain an essential assessment mode for our subjects, take-home assignments are also an integral assessment tool. However, our current assignments are not robust against AI, and staff have no time to explore ways in which AI might be used to enhance learning.

Recent AI studies in HE tend to cover a broad range of subjects without properly considering the specific needs of the mathematical sciences. According to the QAA benchmark for MSOR, students “can sometimes be expected to provide an answer which is very close to a model answer”, whilst AI can often provide such model answers.

Project Information

This project aimed to address these issues and make suggestions, specific to MSOR subjects, on how maths assessments could be adapted to work with AI whilst still achieving excellent learning outcomes. The project team investigated assignments in their respective departments, tackling the following questions:

  1. How well can AI perform in our current assignments?
  2. To what extent do our students currently use AI to help with assignments?
  3. Can future assessments incorporate AI as a copilot, whilst still achieving module outcomes?

Based on our findings and further surveys of how maths assessments are changing at other universities, we suggested how AI might be integrated into future assessments. We have developed concrete examples of AI-assisted take-home maths assignments, which are available in the 'results' box, and we will disseminate our findings further in seminar talks and via a journal article.

Project Team

  • Siri Chongchitnan (maths, project lead)
  • Martyn Parker (statistics, project lead)
  • Mani Mahal (research intern)
  • Sam Petrie (research intern)


Project Overview

The mathematical sciences and operational research (MSOR) community in higher education is unprepared for the rapid rise of AI. While in-person exams remain crucial, take-home assignments are also essential to our assessment process, yet they are not robust against AI, and educators lack the time to explore AI's potential to enhance learning.

Recent AI studies in higher education often generalise across subjects, overlooking the specific needs of the mathematical sciences. According to the QAA benchmark for MSOR, students can sometimes be expected to provide answers that closely resemble model solutions, which AI can readily generate.

Research Areas

1. To what extent do our students currently use AI to help with assignments?

This area investigates the prevalence of AI usage among students and how it influences their approach to coursework.

Student Conversations on AI Usage

2. How well does AI perform in current assignments?

We are evaluating the capabilities of AI tools like ChatGPT in handling university-level assignments to understand their effectiveness and limitations.

How well does ChatGPT perform in university assignments?

3. Can future assignments incorporate AI as a co-pilot while still achieving module outcomes?

We are exploring innovative ways to integrate AI into assignments that enhance learning while maintaining academic integrity.

In addition, generative AI presents opportunities in education for module leaders, as well as learning opportunities for students, which have so far not been explored in depth.

Formal Report

AI in Mathematics and Statistics Education: Recommendations and Future Directions of Regulations

Opportunities AI Presents

Critically Analysing AI Outputs – Case study on LLMs

Tips for Spotting AI Usage in Assignments