Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning
Project Overview
The document explores the application of generative AI in education, focusing on automating the creation of distractors and feedback for multiple-choice questions (MCQs) in mathematics. It addresses the difficulty educators face in manually crafting effective distractors that improve assessment quality. Leveraging large language models (LLMs), the study proposes a method for generating distractors and accompanying feedback, aiming to streamline assessment design. The findings indicate promising potential for AI-driven automation in educational assessment, but substantial challenges remain, particularly in maintaining the quality and relevance of AI-generated content. Generative AI could thus transform assessment practices, provided effectiveness and educational integrity are carefully safeguarded.
Key Applications
Automated distractor and feedback generation for Math MCQs
Context: Targets students aged 10-13 in educational settings that use MCQs for assessment.
Implementation: Using large language models (LLMs) and in-context learning to generate distractors and feedback based on a dataset of math MCQs.
Outcomes: Improved efficiency in creating high-quality distractors and feedback, aiding teachers in assessment design.
Challenges: Quality of generated distractors and feedback may vary; ensuring they align with student misconceptions is challenging.
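The in-context learning approach above can be sketched as a few-shot prompt that shows the LLM worked examples of question/distractor/feedback triples before the target question. The example MCQs and the `build_prompt` helper below are illustrative assumptions, not the authors' actual prompt template or dataset; sending the prompt to a model is left out.

```python
# Minimal sketch of in-context (few-shot) prompting for distractor and
# feedback generation. The examples and prompt format are hypothetical,
# not the paper's actual dataset or template.

EXAMPLES = [
    {
        "question": "What is 3/4 + 1/4?",
        "answer": "1",
        "distractor": "4/8",
        "feedback": "Check whether numerators and denominators were added separately.",
    },
    {
        "question": "What is 0.5 x 0.2?",
        "answer": "0.1",
        "distractor": "1.0",
        "feedback": "Check the placement of the decimal point in the product.",
    },
]

def build_prompt(examples, target_question, target_answer):
    """Assemble a few-shot prompt: worked examples, then the target MCQ.

    The trailing 'Distractor:' cue asks the model to continue the pattern,
    producing a distractor and feedback for the target question.
    """
    parts = [
        "For each math question, write a plausible distractor that reflects "
        "a common student misconception, plus feedback explaining the error."
    ]
    for ex in examples:
        parts.append(
            f"Question: {ex['question']}\n"
            f"Correct answer: {ex['answer']}\n"
            f"Distractor: {ex['distractor']}\n"
            f"Feedback: {ex['feedback']}"
        )
    parts.append(
        f"Question: {target_question}\n"
        f"Correct answer: {target_answer}\n"
        f"Distractor:"
    )
    return "\n\n".join(parts)

prompt = build_prompt(EXAMPLES, "What is 2/5 + 1/5?", "3/5")
print(prompt)
```

The completion returned by the model would then be parsed into a candidate distractor and its feedback; in practice the generated pairs still need vetting, since (as noted above) quality varies and alignment with real student misconceptions is not guaranteed.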
Implementation Barriers
Quality Control
Ensuring the generated distractors and feedback accurately reflect student misconceptions and provide helpful guidance.
Proposed Solutions: Explore alternative approaches beyond LLM prompting and improve text-encoding methods so they align with observed student errors.
Scalability
Manual crafting of high-quality distractors is labor-intensive and limits scalability.
Proposed Solutions: Automating the generation process using generative AI to enhance scalability.
Project Team
Hunter McNichols
Researcher
Wanyong Feng
Researcher
Jaewook Lee
Researcher
Alexander Scarlatos
Researcher
Digory Smith
Researcher
Simon Woodhead
Researcher
Andrew Lan
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Hunter McNichols, Wanyong Feng, Jaewook Lee, Alexander Scarlatos, Digory Smith, Simon Woodhead, Andrew Lan
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI