Generating AI Literacy MCQs: A Multi-Agent LLM Approach
Project Overview
This project applies generative AI to education, using large language models (LLMs) to create multiple-choice questions (MCQs) for assessing AI literacy in K-12 settings. It addresses the current scarcity of scalable, effective AI literacy resources and the need to equip students with skills for an increasingly AI-centric future. The proposed system uses a multi-agent framework that generates and refines assessment questions from user input, supporting a tailored learning experience. Initial evaluations indicate strong interest from educators in adopting the LLM-generated MCQs, suggesting a positive reception and potential to enhance AI literacy education. The work reflects a broader trend of integrating AI technologies into educational practice to improve the quality and accessibility of learning materials.
Key Applications
Multi-Agent MCQ Generation System
Context: K-12 education focusing on AI literacy assessments
Implementation: Utilizes user inputs (learning objectives, Bloom's Taxonomy levels) to generate MCQs through a workflow involving critique agents to ensure quality.
Outcomes: Generated questions were found to be clear, relevant, and generally met pedagogical standards, with high agreement among experts on their quality.
Challenges: Experts sometimes disagreed on whether generated questions suited their target grade levels and Bloom's Taxonomy levels.
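The generate-and-critique workflow above can be sketched as a simple control loop. The agent functions below are illustrative stand-ins, not the authors' implementation; in the actual system each agent would call an LLM (such as gpt-4o-mini), and the function names, MCQ fields, and critique rules shown here are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class MCQ:
    stem: str          # question text
    options: list      # answer choices
    answer_index: int  # index of the correct choice

def generator_agent(objective: str, bloom_level: str, feedback=None) -> MCQ:
    """Drafts an MCQ for a learning objective; revises if critique feedback is given.
    (Stand-in logic: a real agent would prompt an LLM here.)"""
    options = ["Option A", "Option B", "Option C"]
    if feedback and "needs four options" in feedback:
        options.append("Option D")  # apply the critique
    return MCQ(stem=f"[{bloom_level}] Which statement best reflects: {objective}?",
               options=options, answer_index=0)

def critique_agent(mcq: MCQ) -> list:
    """Returns a list of quality issues; an empty list means the question passes."""
    issues = []
    if len(mcq.options) < 4:
        issues.append("needs four options")
    if not (0 <= mcq.answer_index < len(mcq.options)):
        issues.append("answer index out of range")
    return issues

def generate_mcq(objective: str, bloom_level: str, max_rounds: int = 3) -> MCQ:
    """Generate -> critique -> refine loop, capped at max_rounds."""
    feedback = None
    for _ in range(max_rounds):
        mcq = generator_agent(objective, bloom_level, feedback)
        feedback = critique_agent(mcq)
        if not feedback:
            break
    return mcq
```

The key design point is that quality control is iterative: the critique agent's feedback is fed back into the generator until the question passes (or a round limit is hit), rather than accepting the first draft.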
Implementation Barriers
Scalability and Quality Assurance
There is a lack of scalable and reliable AI literacy materials and assessment resources, along with potential subjectivity in expert assessments of the generated questions.
Proposed Solutions: Developing LLM-generated MCQs to provide accessible assessment resources, alongside future work that could include classroom trials, student performance data analysis, and more diverse assessment formats.
Project Team
Jiayi Wang
Researcher
Ruiwei Xiao
Researcher
Ying-Jui Tseng
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Jiayi Wang, Ruiwei Xiao, Ying-Jui Tseng
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI