Multiple-Choice Question Generation Using Large Language Models: Methodology and Educator Insights
Project Overview
This document examines the integration of Artificial Intelligence, particularly Large Language Models (LLMs), into educational settings, focusing on their role in automating the generation of Multiple-Choice Questions (MCQs). It evaluates three specific LLMs (GPT-3.5, Llama 2, and Mistral) and shows how these technologies can streamline the creation of assessment materials. The findings suggest that generative AI can save educators time and provide diverse question formats. However, the document also addresses significant challenges, including the reluctance of some educators to embrace new technologies and risks inherent to AI, such as the accuracy and appropriateness of generated content. Overall, while generative AI in education offers promising gains in efficiency and innovation, its use requires careful attention to barriers to adoption and to the ethical implications involved.
Key Applications
Automated generation of Multiple-Choice Questions using LLMs
Context: Educational settings including high school and university environments, targeting educators who assess student knowledge.
Implementation: An experiment with 21 educators compared the MCQs generated by GPT-3.5, Llama 2, and Mistral, using a fixed prompt instructing each LLM to generate questions from a provided text.
Outcomes: GPT-3.5 outperformed other models on clarity, coherence, compliance, and distractor selection metrics, indicating its effectiveness in generating quality MCQs.
Challenges: Reluctance of educators to adopt AI technologies, concerns about AI's reliability, privacy issues, and the potential for LLMs to produce hallucinated content.
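The experimental setup described above (one fixed prompt sent to each model, with the resulting MCQs then rated by educators) can be sketched as follows. The prompt wording, the output format, and the parser are illustrative assumptions, not the authors' actual prompt or code:

```python
# Sketch of a fixed-prompt MCQ-generation pipeline. The template and the
# plain-text answer format below are assumptions for illustration; the paper
# does not publish its exact prompt.

MCQ_PROMPT = (
    "Read the following text and generate {n} multiple-choice questions.\n"
    "Each question must have exactly one correct answer and three plausible\n"
    "distractors. Format each question as:\n"
    "Q: <question>\n"
    "A) <option>\nB) <option>\nC) <option>\nD) <option>\n"
    "Answer: <letter>\n\n"
    "Text:\n{text}"
)

def build_prompt(text: str, n: int = 3) -> str:
    """Fill the template; the same prompt would be sent unchanged to each
    model (e.g. GPT-3.5, Llama 2, Mistral) so outputs are comparable."""
    return MCQ_PROMPT.format(n=n, text=text)

def parse_mcqs(raw: str) -> list[dict]:
    """Parse a model's plain-text reply into structured questions with
    options and the marked correct answer."""
    questions: list[dict] = []
    current = None
    for line in raw.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            current = {"question": line[2:].strip(), "options": {}, "answer": None}
            questions.append(current)
        elif current and len(line) > 2 and line[0] in "ABCD" and line[1] == ")":
            current["options"][line[0]] = line[2:].strip()
        elif current and line.startswith("Answer:"):
            current["answer"] = line.split(":", 1)[1].strip()
    return questions

# Usage: parse one hypothetical model reply.
reply = "Q: What is 2 + 2?\nA) 3\nB) 4\nC) 5\nD) 6\nAnswer: B"
parsed = parse_mcqs(reply)
```

The parsed questions, one per model, could then be shown to educators for rating on metrics such as clarity, coherence, compliance, and distractor quality.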
Implementation Barriers
Technological
Educators' apprehension about integrating AI technologies into established teaching methodologies.
Proposed Solutions: Research into factors that encourage or discourage the acceptance of AI systems in education and initiatives like UNESCO's guidance for responsible AI use.
Ethical
Concerns regarding the ethical use of AI in education, including privacy issues and the risk that users presume AI-generated content is infallible.
Proposed Solutions: Establishing regulatory frameworks and promoting accountability and ethical integrity in educational AI applications.
Project Team
Giorgio Biancini
Researcher
Alessio Ferrato
Researcher
Carla Limongelli
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Giorgio Biancini, Alessio Ferrato, Carla Limongelli
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI