
Assessing AI-Generated Questions' Alignment with Cognitive Frameworks in Educational Assessment

Project Overview

This document describes the integration of Bloom's Taxonomy into OneClickQuiz, an AI-driven tool that automates the creation of multiple-choice questions (MCQs) in educational contexts. It compares classification models for categorizing AI-generated questions by the cognitive levels of Bloom's Taxonomy, finding that advanced models such as DistilBERT substantially outperform traditional techniques, particularly in matching questions to higher-order cognitive skills. The findings suggest that generative AI can enhance educational assessment by producing tailored, relevant questions that promote deeper learning. The study also stresses the responsible implementation of AI in education: generated content should align with established educational standards while addressing ethical implications. Overall, the document presents generative AI as a transformative tool for improving assessment methods and promoting cognitive engagement among learners.

Key Applications

OneClickQuiz - AI-driven plugin for automating MCQ generation in Moodle

Context: Educational settings for generating quizzes for various courses, particularly in computer science.

Implementation: The plugin integrates Bloom's Taxonomy into its question generation process to improve alignment with cognitive objectives.

Outcomes: Improved question generation that reflects the hierarchical cognitive demands of Bloom's Taxonomy, with an overall validation accuracy of 91% using advanced models.

Challenges: Difficulty in generating questions targeted at higher-order cognitive skills such as Analysis and Evaluation.
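The plugin's actual prompt design is not reproduced in this summary, but the idea of steering an LLM toward a specific cognitive level can be sketched as a prompt template. The wording and function names below are illustrative assumptions, not OneClickQuiz's real implementation:

```python
# Hypothetical sketch of a Bloom-aligned MCQ prompt builder.
# The template wording is an assumption for illustration only;
# OneClickQuiz's internal prompts may differ.

BLOOM_LEVELS = [
    "Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create",
]

def build_mcq_prompt(topic: str, level: str, n_options: int = 4) -> str:
    """Compose an LLM prompt asking for one MCQ at a given Bloom level."""
    if level not in BLOOM_LEVELS:
        raise ValueError(f"Unknown Bloom level: {level}")
    return (
        f"Write one multiple-choice question on '{topic}' that targets the "
        f"'{level}' level of Bloom's Taxonomy. Provide {n_options} answer "
        "options, mark the correct one, and briefly justify why the "
        f"question requires {level}-level thinking."
    )

print(build_mcq_prompt("binary search trees", "Analyze"))
```

Making the target level an explicit, validated parameter is one simple way to keep generated quizzes aligned with the hierarchical structure of the taxonomy.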

Implementation Barriers

Ethical considerations

Concerns regarding bias, transparency, and equity in AI-generated content.

Proposed Solutions: Conduct thorough analyses of generated questions to ensure neutrality and inclusivity, and adjust prompt designs to mitigate identified biases.

Technical limitations

Traditional models struggle with higher-order cognitive levels, limiting their effectiveness in generating complex assessments.

Proposed Solutions: Adopt advanced AI models such as transformers, which can better capture complex cognitive tasks.
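To see why simple approaches fall short on higher-order levels, consider a deliberately crude rule-based classifier that maps a question's action verb to a Bloom level. This toy sketch is not the paper's method; it illustrates the kind of surface-level heuristic that transformer classifiers such as DistilBERT improve upon:

```python
# Toy rule-based Bloom classifier: match the first recognized action verb.
# A simplistic illustration only -- questions phrased without a canonical
# verb (common at the Analyze/Evaluate levels) fall through to "Unknown".

VERB_TO_LEVEL = {
    "define": "Remember", "list": "Remember",
    "explain": "Understand", "summarize": "Understand",
    "apply": "Apply", "solve": "Apply",
    "compare": "Analyze", "differentiate": "Analyze",
    "justify": "Evaluate", "critique": "Evaluate",
    "design": "Create", "propose": "Create",
}

def classify_by_verb(question: str) -> str:
    """Return a Bloom level based on the first recognized action verb."""
    for word in question.lower().replace("?", " ").replace(".", " ").split():
        if word in VERB_TO_LEVEL:
            return VERB_TO_LEVEL[word]
    return "Unknown"  # many higher-order questions land here

print(classify_by_verb("Define a binary search tree."))      # Remember
print(classify_by_verb("Compare quicksort and mergesort."))  # Analyze
print(classify_by_verb("Why might this proof fail?"))        # Unknown
```

A transformer model, by contrast, attends to the whole question rather than a single keyword, which is why contextual models handle complex cognitive tasks that verb lists miss.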

Project Team

Antoun Yaacoub

Researcher

Jérôme Da-Rugna

Researcher

Zainab Assaghir

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Antoun Yaacoub, Jérôme Da-Rugna, Zainab Assaghir

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
