
Automated Bias Assessment in AI-Generated Educational Content Using CEAT Framework

Project Overview

This project explores advances in Generative Artificial Intelligence (GenAI) and its applications in education, particularly in developing tutor training materials. It addresses significant ethical concerns about bias in AI-generated content, including gender and racial stereotypes. To tackle these challenges, the study introduces an automated bias assessment method that applies the Contextualized Embedding Association Test (CEAT) to content produced within a Retrieval-Augmented Generation (RAG) framework. The findings show a strong correlation between automated and manual bias assessments, demonstrating that the method is a reliable and scalable way to evaluate bias in educational materials. Overall, the project underscores the potential of GenAI to enhance educational resources while emphasizing the need for ethical safeguards in its deployment.
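The assessment described above has three stages: generate educational content with a RAG pipeline, extract target and attribute word sets via prompt-engineered LLM calls, and score the result with CEAT. A minimal sketch of how those stages compose follows; the function names and signatures are illustrative placeholders, not the authors' actual API.

```python
# Hypothetical sketch of the three-stage pipeline: RAG generation,
# prompt-engineered word extraction, and CEAT scoring. The three stage
# functions are supplied by the caller; all names here are illustrative.

def assess_bias(query: str, generate, extract_words, ceat_score) -> float:
    """Run the three-stage automated bias assessment on one query."""
    text = generate(query)                      # RAG-generated tutor-training content
    targets, attributes = extract_words(text)   # prompt-engineered word extraction
    return ceat_score(text, targets, attributes)  # CEAT effect size for this text

# Toy stand-ins for the three stages, to show the call shape only.
score = assess_bias(
    "Explain effective feedback strategies for tutors.",
    generate=lambda q: "An example tutor-training passage.",
    extract_words=lambda t: (["tutor"], ["supportive"]),
    ceat_score=lambda t, tg, at: 0.0,
)
```

Separating the stages this way mirrors the paper's framing: the CEAT scorer can be validated independently of the RAG generator, and the extraction prompt can be swapped without touching the other two stages.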

Key Applications

Automated Bias Assessment using CEAT framework

Context: Assessment of AI-generated educational content for tutor training

Implementation: Integration of CEAT with prompt-engineered word extraction within a RAG framework

Outcomes: High alignment between automated bias assessment and manually curated word sets; enhanced fairness, scalability, and reproducibility in bias auditing.

Challenges: Initial reliance on limited datasets and the need for broader validation across various educational contexts.
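To make the CEAT-based assessment above concrete, the sketch below shows the WEAT-style effect size that underlies CEAT, assuming contextualized embeddings for the target and attribute words have already been extracted (e.g. from a BERT-family model). Full CEAT additionally aggregates this statistic across many sampled contexts with a random-effects model; only the per-sample effect size is shown here, and the variable names are illustrative.

```python
# WEAT-style effect size over contextualized embeddings, the per-sample
# statistic that CEAT aggregates across contexts.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): mean similarity of word vector w to attribute set A minus set B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # Cohen's-d-style statistic over target sets X, Y and attribute sets A, B
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy 2-D embeddings: targets X align with attribute A, targets Y with B,
# so the effect size is strongly positive (2 / sqrt(2)).
X = [np.array([1.0, 0.0])]
Y = [np.array([0.0, 1.0])]
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
d = effect_size(X, Y, A, B)  # ≈ 1.414
```

Effect sizes near zero indicate no measured association between the target and attribute sets; large positive or negative values flag a stereotype-consistent or stereotype-inconsistent association in the generated text.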

Implementation Barriers

Ethical Concerns and Bias Mitigation

Biases embedded in AI-generated content can reinforce harmful stereotypes and compromise educational equity; moreover, the current approach focuses solely on bias detection without addressing how to mitigate the biases it identifies.

Proposed Solutions: Automated bias detection and assessment methods to proactively identify and address biases in educational materials, along with exploration of bias mitigation strategies in model training phases and post-training interventions.

Implementation Limitations

Current validation relies on a limited dataset of AI-generated texts, limiting generalizability.

Proposed Solutions: Broader validation across various contexts and larger-scale case studies in real classroom settings.

Project Team

Jingyang Peng

Researcher

Wenyuan Shen

Researcher

Jiarui Rao

Researcher

Jionghao Lin

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Jingyang Peng, Wenyuan Shen, Jiarui Rao, Jionghao Lin

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
