
AI in Education: Dialogue and Formative Feedback

Summary

AI can help us to rethink and reframe the roles and approaches of educators, students, and tools in the learning process. Our group (in the WIHEA Learning Circle on Artificial Intelligence in Education) focused on the use of AI in marking and providing feedback on student work, concentrating on formative rather than summative work. We discussed the issues currently faced by educators and students with regard to the marking/feedback process, and explored the opportunities and dangers presented by AI.

On this page, you will find:

  • Background and key findings
  • Case-studies and use-cases
  • AI platforms for marking and feedback
  • Recommendations for educators and students
  • Useful links

These materials are also presented in the Final Report published by the AI in Education Learning Circle.

N.B. Please be aware that programmes such as ChatGPT may save and use the data you feed into them. Student work should not be submitted to these programmes without the students’ express consent.

Background and key findings

AI presents an opportunity to rethink the students’ learning process, and the ways in which educators and tools support that process. In the evolving conversations about the use of AI for assessment and feedback, some see this as a potential opportunity for a cognitive revolution that will re-define processes and support structures in education. In this chapter, we will focus on the use of AI for generating dialogue and formative feedback.

Marking student work and providing constructive, consistent, timely feedback can be an extremely labour-intensive process for educators. Receiving and making use of this feedback can also be frustrating for students, especially if they feel the feedback is unfair or hard to act upon, or if they believe other students are receiving more (or better) feedback. These challenges are regularly reflected in the results of the National Student Survey.

Recent developments in AI may offer support in addressing these challenges. AI tools can, to some extent, summarise student work and even give qualitative evaluations, and could potentially be used to support the provision of personalised feedback to students in a timely manner.
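
As a purely illustrative example of what this might look like in practice, the sketch below asks a large language model for dialogic, formative feedback on a draft. It assumes the OpenAI Python client (version 1.x) and an API key; the model name and rubric wording are placeholders rather than recommendations, and the caution above about student consent and data use still applies.

# A minimal, illustrative sketch (assumptions: the 'openai' Python package,
# version 1.x, and an API key in the OPENAI_API_KEY environment variable;
# the model name and rubric text below are placeholders).
# N.B. Do not submit student work without the student's express consent.
from openai import OpenAI

client = OpenAI()

RUBRIC = "clarity of argument; use of evidence; structure; academic style"

def formative_feedback(draft_text: str) -> str:
    """Ask the model for dialogic, formative (ungraded) feedback on a draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a supportive tutor giving formative feedback only: "
                    "no marks or grades. For each rubric criterion, note one "
                    "strength and pose one question for the writer to consider. "
                    "Rubric: " + RUBRIC
                ),
            },
            {"role": "user", "content": draft_text},
        ],
    )
    return response.choices[0].message.content

print(formative_feedback("Paste a consenting student's draft paragraph here."))

A prompt of this kind deliberately asks for questions rather than judgements, keeping the educator and student in dialogue rather than treating the AI output as a verdict.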

However, these same tools are also a source of anxiety in terms of how they might affect the learning process and the dialogue between educators and students. AI platforms are still developing, which creates serious risks of either over- or under-estimating their current and future capabilities. Feedback produced by AI may be inaccurate or misleading, and there are also concerns that AI may ‘replace’ human markers.

Ultimately, we argue that AI raises questions (about the dialogue between educators and students) that are challenging but productive. In this section of the report, we offer some reflections, recommendations, and resources in the hope that these will provide an accessible ‘way in’ to this topic for those who are relatively new to it. By engaging directly with AI, and by reflecting on the nature of this engagement, teachers and learners can work together to foster constructive dialogue, identify opportunities, and address potential risks.

Case-studies and use-cases

Our group has created a series of blog posts to explore and reflect upon specific ways of using AI in the context of formative marking and feedback. We encourage readers to submit their own reflections for inclusion. Please contact Lewis.Beer@warwick.ac.uk if you wish to discuss this.


Using AI for Formative Feedback: Current Challenges, Reflections, and Future Investigation (Matthew Voice). In a brief essay and accompanying presentation, a lecturer in Applied Linguistics tests the capabilities of ChatGPT in evaluating his own writing, and explores the challenges (for teachers and students) posed by Large Language Models.


AI Feedback Systems: A Student Perspective (Mara Bortnowschi). An undergraduate student in Warwick Medical School reflects on how AI might impact the student experience. How are the new platforms different from more familiar AI feedback systems? How should students make use of AI feedback? And how can universities manage this conversation responsibly to counteract media-fuelled anxieties?


Who Uses AI, How, and When? (Matthew Voice). Following on from the previous post, Matthew Voice raises important questions about the various motives that might prompt students to use AI, and asks how universities can adopt a reasonable and realistic approach to supporting students’ engagement with these tools.


Using AI to Evaluate Film Criticism (Lewis Beer). This post explores ChatGPT’s ability to provide qualitative analysis of critical writing, using publicly available film reviews as ‘substitute essays’.

AI platforms for marking and feedback

Below are some recent examples of AI programmes designed to facilitate feedback and marking. We present these in the same spirit as the other resources in this section: as interesting and specific examples of how AI is being used, to help prompt discussion among educators and students.


AI Essay Analyst (in-house, University of Warwick). See the article, Pedagogic paradigm 4.0: bringing students, educators and AI together (Fischer, Times Higher Education), and a blog post with further technical details on the Jisc National Centre for AI website. The tool provides students with formative feedback following voluntary submission of essays or dissertations. It is intended to augment the feedback process rather than fully automate it, and to ‘level the playing field by giving all students the opportunity to receive formative feedback’.


Graide (University of Birmingham). See the programme website and the article, Artificial intelligence offers solution to heavy marking loads (University of Birmingham website). This programme grew out of a PhD thesis in the University of Birmingham’s School of Physics and Astronomy. It is designed to learn an assessor’s marking style and to assess not only final answers but also the student’s workings. The AI aims to mimic the assessor when they are most alert and attentive to detail, not when they are fatigued by a long period of marking, and so to represent the ‘best’ version of that assessor.


PhysWikiQuiz (University of Konstanz, FIZ Karlsruhe, NII Tokyo, University of Göttingen). See the programme website and the article, Collaborative and AI-aided Exam Question Generation using Wikidata in Education (Scharpf et al.). PhysWikiQuiz is a physics question generation and test engine. It provides a variety of questions for a given physics concept, retrieving formula information from Wikidata, correcting the student’s answer, and explaining the solution with a calculation path (see the illustrative Wikidata lookup after this list of platforms). Two of its purported selling points are: 1) that it gives each student a unique set of questions; and 2) that it draws upon a community of expertise (collated on Wikidata). It is thereby intended to be more learner-centred and expert-led than non-AI-generated forms of assessment.


Teachermatic (Geoff Elliott, Oliver Stearn, Peter Kilcoyne). The Teachermatic website and app, launched in 2022 by e-learning specialists, aim to provide educators with tailored AI generators to help them produce learning materials. Most of the tools advertised do not directly support marking and feedback, but they can generate assessment rubrics and, most interestingly, SMART goals tailored to a learner’s specific challenge or objective. Services like this could potentially be used to create formative assessment tasks and to support the educator/student dialogue surrounding them.
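
To give a flavour of the kind of Wikidata lookup that PhysWikiQuiz builds on, the sketch below queries Wikidata’s public SPARQL endpoint for the ‘defining formula’ (property P2534) of an example physics concept. This is our own illustrative code, not part of PhysWikiQuiz, and the choice of concept is arbitrary.

# Illustrative sketch only (not PhysWikiQuiz's own code). Assumptions: the
# 'requests' package; P2534 is Wikidata's 'defining formula' property; the
# concept 'acceleration' is an arbitrary example.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?concept ?conceptLabel ?formula WHERE {
  ?concept rdfs:label "acceleration"@en ;
           wdt:P2534 ?formula .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 1
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "formative-feedback-demo/0.1 (example)"},
    timeout=30,
)
response.raise_for_status()
for row in response.json()["results"]["bindings"]:
    # Print the concept label and the formula mark-up returned by the service.
    print(row["conceptLabel"]["value"], "->", row["formula"]["value"])

A question-generation tool can then vary the values substituted into the retrieved formula to give each student a unique set of questions, as described above.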

Recommendations for educators and students

The resources we present here do not represent definitive, future-proofed guidance about the use of AI for formative marking and feedback. However, our group has agreed on some key recommendations – applicable to both educators and students – to support constructive engagement with AI-generated feedback.

  • Question assumptions about what AI can or cannot do. Beware of accepting AI-generated feedback at face-value, and always examine it with a critical eye. At the same time, beware of making assumptions about AI’s limitations: what are these assumptions based on, and have they been rigorously (and recently) tested?
  • Reflect on preconceptions about the learning process. How do you understand the process by which you and others learn, or by which we produce scholarly ‘work’? What do you think about the role of writing in education? What do markers do when they assess students’ work? What is the optimal timing for the formative and summative assessment process? And how might AI challenge these beliefs?
  • Use AI to support pedagogical innovation. Consider how new AI platforms might enhance innovative practices such as peer-to-peer learning, co-creation, student-devised assessments, online and dialogue-based assessments, and various forms of collaboration. Might the rise of AI open up conversations about other (non-AI) forms of technology-enhanced learning, which may now seem more accessible to educators and students?
  • Request top-down support and resources. Universities must enable students and academics to engage with AI-generated feedback in a competent, responsible manner, and to develop ‘using AI’ as a skill. We all have a role to play in making the case for these support structures, and in co-creating them to ensure they are mutually beneficial for all members of the educational community.

Useful links

The following reflective pieces by educators are useful entry points for those new to AI:

  • The potential of artificial intelligence in assessment feedback (Elizabeth Ellis, Times Higher Education). Among other interesting points, Ellis notes the potential for AI as a tool for commenting on the ‘technicalities’ of academic work (referencing, institutional processes, etc.), and as a tool for providing feedback in the form of ‘structured questions’ that prompt dialogue, rather than ‘judgements’ handed down to the student.
  • Facing facts: ChatGPT can be a tool for critical thinking (Nathan M. Greenfield, University World News). Greenfield, an English professor, reflects on universities’ traditional reliance on the ‘literary essay’ as a mode of assessment, how ChatGPT may affect this, and how educators might respond. He also provides an accessible account of how ChatGPT was developed and how its mode of thinking compares with the workings of the human mind.
  • Four lessons from ChatGPT: Challenges and opportunities for educators (Centre for Teaching and Learning, University of Oxford). This article, published in January 2023, aims to synthesise discussions about ChatGPT up to that point, and to summarise key points for educators to consider. It offers a clear and balanced take on issues such as plagiarism, ChatGPT’s potential as an educational tool, and the broader landscape of AI platforms. The article also contains many links to other think-pieces and resources, including EduTools and FutureTools (linked below).
  • More AI news: Incremental but meaningful progress - From models to products (Dominik Lukeš, LinkedIn). Lukeš discusses how plugins will enhance ChatGPT’s capabilities, but sounds a note of caution regarding the platform’s hallucinations. He also shows how AI can enable people to learn new skills faster than ever. These reflections are useful for educators grappling with the accuracy and inaccuracy of AI-generated material, or exploring how AI can be used as an educational tool.
  • A First Response to Assessment and ChatGPT in your Courses (Lorelei Anselmo, Tyson Kendon, and Beatriz Moya, Taylor Institute for Teaching and Learning, University of Calgary).

For those who want to explore this topic in more depth, the following platforms offer a wealth of resources and guidelines:

  • EduTools. A collection of tools and models related to education, assembled by Dominik Lukeš. This resource was set up following the release of ChatGPT, but it includes a wide range of links and tools beyond those that use AI, many of which may be useful to teachers and students. There are also many links to online guides and think-pieces about AI tools.
  • FutureTools. This site, run by Matt Wolfe, maintains up-to-date lists of tools based on AI technologies and articles about AI. It also provides a glossary of key terms and links to Wolfe’s YouTube channel.
  • PromptEngineering.org and Prompt Engineering Guide. ‘Prompt engineering’ refers to the craft of writing effective prompts for AI language models such as ChatGPT. These two websites are intended to help users improve their interactions with AI; a brief illustration follows below.
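
As a hypothetical illustration of the difference a well-engineered prompt can make when requesting feedback:

  • A vague prompt such as ‘Is this essay good?’ tends to produce a generic verdict.
  • A structured prompt such as ‘Acting as a tutor, identify two strengths and two weaknesses in the argument of the following draft, and phrase each weakness as a question for the writer to consider’ is more likely to produce specific, actionable, and dialogic feedback.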