Does Multiple Choice Have a Future in the Age of Generative AI? A Posttest-only RCT
Project Overview
This document summarizes a study on generative AI in education: a posttest-only randomized controlled trial comparing multiple-choice questions (MCQs) with open-response questions in tutor training for advocacy skills. The research indicates that while MCQs are efficient and yield learning outcomes comparable to open responses, they may not foster equally deep understanding. Generative AI models, specifically GPT-4 variants, were used to grade open responses automatically and showed a degree of proficiency, though further refinement is needed before they can reliably assess nuanced answers. Overall, the findings underscore the promise of generative AI for scaling tutor training and improving assessment, while highlighting areas where its evaluative capabilities still fall short.
Key Applications
Automated grading of open-response questions using GPT-4 models
Context: Tutor training in advocacy skills for undergraduate college students
Implementation: Generative AI models (GPT-4o and GPT-4-turbo) were used to evaluate tutor responses to open-ended questions in a tutoring program.
Outcomes: The use of generative AI showed potential for efficient and scalable assessment of tutor performance, providing timely feedback.
Challenges: Grading can be inaccurate; the models may produce nonsensical outputs or exhibit biases in their assessments.
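The grading workflow described above can be sketched in a minimal form: build a rubric-anchored prompt for each tutor response, send it to a GPT-4-class model, and parse the score out of the reply. The prompt wording, rubric text, and function names below are illustrative assumptions, not the paper's actual materials; the network call is replaced by a simulated reply so the sketch is self-contained.

```python
# Hypothetical sketch of rubric-based automated grading with an LLM.
# Assumes the grader is instructed to reply "Score: 0" or "Score: 1"
# followed by a short rationale; all names here are illustrative.
import re

def build_grading_prompt(question: str, rubric: str, response: str) -> str:
    """Assemble a grading prompt that pins the model to a fixed rubric."""
    return (
        "You are grading a tutor-training open response.\n"
        f"Question: {question}\n"
        f"Rubric: {rubric}\n"
        f"Tutor response: {response}\n"
        "Reply with 'Score: 0' or 'Score: 1', then a one-sentence rationale."
    )

def parse_grade(model_reply: str):
    """Extract the 0/1 score; return None if the reply is malformed."""
    match = re.search(r"Score:\s*([01])", model_reply)
    return int(match.group(1)) if match else None

# In practice the prompt would be sent via a chat-completion API
# (e.g. GPT-4o or GPT-4-turbo); here we parse a simulated reply instead.
simulated_reply = "Score: 1\nThe tutor advocates for the student respectfully."
print(parse_grade(simulated_reply))  # → 1
```

Keeping the rubric inside every prompt and restricting the reply format makes the model's output easy to parse and audit, which matters when grades feed back into tutor training at scale.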
Implementation Barriers
Technical Barrier
Generative AI models can produce nonsensical or factually incorrect outputs, which may affect grading accuracy. Traditional grading methods are resource-intensive and limit the ability to scale tutor training effectively.
Proposed Solutions: Refine the generative AI models and apply prompt engineering techniques to improve output reliability, and use automated grading to make tutor training more scalable and efficient.
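One common prompt-engineering guardrail against the nonsensical outputs mentioned above is to request structured (e.g. JSON) grader output and validate it before it enters the gradebook, flagging or retrying anything malformed. The field names and score scale below are assumptions for illustration, not the study's protocol.

```python
# Hypothetical guardrail sketch: validate an LLM grader's JSON reply
# so nonsensical or off-format outputs are caught rather than recorded.
# Field names ("score", "rationale") and the 0/1 scale are illustrative.
import json

ALLOWED_SCORES = {0, 1}

def validate_grade(raw: str):
    """Return (score, rationale) if raw is well-formed, else None."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model reply was not JSON at all
    score = data.get("score")
    rationale = data.get("rationale", "")
    if score not in ALLOWED_SCORES or not isinstance(rationale, str):
        return None  # out-of-range score or missing rationale
    return score, rationale

print(validate_grade('{"score": 1, "rationale": "Clear advocacy."}'))
print(validate_grade("I think the answer is great!"))  # malformed → None
```

A rejected reply can trigger a retry with the same prompt or be routed to a human grader, which is a lightweight way to trade a little latency for grading reliability.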
Project Team
Danielle R. Thomas
Researcher
Conrad Borchers
Researcher
Sanjit Kakarla
Researcher
Jionghao Lin
Researcher
Shambhavi Bhushan
Researcher
Boyuan Guo
Researcher
Erin Gatz
Researcher
Kenneth R. Koedinger
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Danielle R. Thomas, Conrad Borchers, Sanjit Kakarla, Jionghao Lin, Shambhavi Bhushan, Boyuan Guo, Erin Gatz, Kenneth R. Koedinger
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI