
Generative AI for Multiple Choice STEM Assessments

Project Overview

The paper explores the application of generative AI in enhancing STEM education through the creation of multiple choice assessments, specifically utilizing the Möbius platform. It emphasizes the capability of generative AI to generate credible distractors that not only engage students but also encourage deeper learning by addressing common misconceptions. While the potential benefits are significant, the paper also identifies challenges in ensuring the mathematical accuracy and semantic integrity of AI-generated content. To address these concerns, it suggests several methods for maintaining pedagogical rigor and assessment validity. Overall, the findings indicate that, despite the challenges, generative AI holds promise for enriching educational assessment practices and improving learning outcomes in STEM disciplines.

Key Applications

Möbius platform for online instruction

Context: Higher education-level mathematics assessments, targeting educators and students in STEM disciplines.

Implementation: Generative AI is used to create multiple choice questions with plausible distractors, guided by structured prompts that align with a semantic math engine.
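The structured prompting described above can be sketched as a small template builder. This is a minimal illustration, not the authors' actual prompt format: the field names, layout, and example misconceptions are assumptions, but the key idea from the paper is present, namely that each requested distractor is tied to a known misconception so a semantic math engine or reviewer can check it.

```python
def build_mcq_prompt(topic, stem, correct_answer, misconceptions, n_distractors=3):
    """Assemble a structured prompt asking an LLM for MCQ distractors.

    Illustrative template only -- the real Möbius workflow uses its own
    prompt structure and a semantic math engine for downstream checking.
    """
    misconception_lines = "\n".join(f"- {m}" for m in misconceptions)
    return (
        f"Topic: {topic}\n"
        f"Question stem: {stem}\n"
        f"Correct answer: {correct_answer}\n"
        f"Known misconceptions:\n{misconception_lines}\n"
        f"Task: Propose {n_distractors} plausible but incorrect answer options. "
        "Each distractor must follow from exactly one misconception above, "
        "and none may equal the correct answer. Return one option per line."
    )

# Hypothetical calculus item used as an example input.
prompt = build_mcq_prompt(
    topic="Differentiation",
    stem="Find d/dx of x^3",
    correct_answer="3x^2",
    misconceptions=[
        "forgetting to reduce the exponent",
        "multiplying by the original exponent only",
    ],
)
print(prompt)
```

Anchoring each distractor to a named misconception is what makes the generated options pedagogically useful rather than merely wrong.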

Outcomes: Reduction in time and effort for creating robust teaching materials, maintaining academic rigor, and enhancing assessment validity.

Challenges: Ensuring mathematical accuracy, dealing with 'hallucinations' in AI outputs, and validating the quality of generated distractors.

Implementation Barriers

Technical Limitations

Generative AI struggles with mathematical reasoning, precision, and validating multi-step reasoning problems.

Proposed Solutions: Utilize constrained assessment formats like multiple choice questions, where outputs can be validated by subject matter experts.
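One reason constrained formats help is that a fixed option set can be pre-screened mechanically before a subject matter expert reviews it. The sketch below, an assumption rather than anything from the paper, shows a minimal automated pre-check for numeric options: it flags distractors that duplicate the answer key or each other, two defects an expert would otherwise have to catch by hand.

```python
def validate_options(correct, distractors, tol=1e-9):
    """Flag option-set defects before subject matter expert review.

    A distractor that equals the correct answer (or another distractor)
    makes the item unscorable, so these are rejected automatically.
    """
    problems = []
    for i, d in enumerate(distractors):
        if abs(d - correct) < tol:
            problems.append(f"distractor {i} equals the correct answer")
    seen = []
    for i, d in enumerate(distractors):
        for j, e in enumerate(seen):
            if abs(d - e) < tol:
                problems.append(f"distractors {j} and {i} coincide")
        seen.append(d)
    return problems

# Hypothetical item: d/dx of x^3 evaluated at x = 2, correct value 12.
# The second generated distractor accidentally duplicates the key.
print(validate_options(12.0, [6.0, 12.0, 8.0]))
# → ['distractor 1 equals the correct answer']
```

A fuller pipeline would also check each option symbolically against the misconception it claims to encode; the numeric comparison here is only the cheapest first filter.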

Pedagogical Risks

Poorly designed distractors may mislead students or reinforce errors instead of clarifying misconceptions.

Proposed Solutions: Implement rigorous review and validation processes for AI-generated distractors by educators.
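Educator review is easier when every distractor is traceable to the error that produced it. The following sketch, under the assumption of a simple power-rule item (the error rules are illustrative examples, not a catalogue from the paper), generates each distractor from a named misconception so a reviewer can accept or reject options individually.

```python
# Each distractor is produced by a named error rule, so a reviewing
# educator sees exactly which misconception the option targets.
# Illustrative rules for d/dx of x^n; not from the source paper.
ERROR_RULES = {
    "kept exponent": lambda n: f"{n}x^{n}",        # forgot to reduce the power
    "no coefficient": lambda n: f"x^{n - 1}",      # forgot to multiply by n
    "integrated": lambda n: f"x^{n + 1}/{n + 1}",  # integrated instead
}

def distractors_for_power_rule(n):
    """Misconception-tagged distractors for d/dx of x^n (correct: n*x^(n-1))."""
    return {label: rule(n) for label, rule in ERROR_RULES.items()}

print(distractors_for_power_rule(3))
# → {'kept exponent': '3x^3', 'no coefficient': 'x^2', 'integrated': 'x^4/4'}
```

Because each option carries its misconception label, a rejected distractor also tells the item author which error pattern still lacks a usable option.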

Project Team

Christina Perdikoulias

Researcher

Chad Vance

Researcher

Stephen M. Watt

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Christina Perdikoulias, Chad Vance, Stephen M. Watt

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
