
Assessing the Auditability of AI-integrating Systems: A Framework and Learning Analytics Case Study

Project Overview

The document examines the integration of generative AI in education, particularly through Learning Analytics (LA) systems, emphasizing auditability and ethical considerations. It proposes a framework for evaluating the auditability of AI-driven LA systems, stressing the need for verifiable claims, accessible evidence, and validation methods. The discussion includes case studies, such as Moodle's dropout prediction system and a research prototype, showing how AI can yield insights into student behavior and identify potential dropout risks. Alongside these advantages, the document addresses challenges including ethical compliance, data access, and the need for greater system transparency. Overall, it underscores the transformative potential of generative AI in education while advocating for responsible practices that ensure fairness and accountability.

Key Applications

Dropout Prediction System

Context: Applicable in higher education and K-12 settings, including introductory computer science courses, and aimed at educators, institutions, and researchers.

Implementation: This AI-based dropout prediction system applies machine learning to both log data and self-reported student data to predict dropout, improving accuracy by analyzing multiple indicators and patterns of student engagement.
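
As a rough illustration of how such a system might combine the two data sources, the sketch below scores dropout risk with a hand-set logistic model. The feature names, weights, and threshold are illustrative assumptions, not details taken from the audited systems.

```python
# Hypothetical sketch: combine LMS log features with a self-reported survey
# feature into a dropout risk score. Weights are illustrative assumptions.
import math

def dropout_risk(logins_per_week: float,
                 assignments_submitted: int,
                 self_reported_motivation: int) -> float:
    """Return a dropout probability in [0, 1] from a hand-set logistic model.

    self_reported_motivation is assumed to be a 1-5 survey response.
    """
    # Low activity and low motivation raise the risk score.
    z = (2.0
         - 0.4 * logins_per_week
         - 0.3 * assignments_submitted
         - 0.5 * self_reported_motivation)
    return 1.0 / (1.0 + math.exp(-z))

def flag_at_risk(students: dict, threshold: float = 0.5) -> list:
    """Return IDs of students whose risk score exceeds the threshold."""
    return [sid for sid, feats in students.items()
            if dropout_risk(*feats) > threshold]
```

In a real deployment the weights would be learned from historical data rather than hand-set, but the structure (engineered features in, calibrated risk score out) is the same.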

Outcomes: Aims to reduce dropout rates and improve learning outcomes by identifying at-risk students early for proactive support.

Challenges: Auditability is limited by incomplete documentation, inadequate monitoring capabilities, a lack of accessible test data, and the absence of APIs and monitoring tools; effective auditing therefore requires expert users.

Implementation Barriers

Technical Barrier

Inaccessible and inadequate monitoring capabilities hinder effective auditing of AI systems in education.

Proposed Solutions: Enhancing system access for auditors and implementing robust logging and monitoring tools.
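
A minimal sketch of what such logging could look like: each prediction is appended as one JSON Lines record that auditors can later read back and verify. The field names and file format are assumptions for illustration, not requirements from the paper.

```python
# Illustrative append-only audit log for model predictions (JSON Lines).
import json
import time

def log_prediction(log_file, student_id: str, features: dict,
                   prediction: float, model_version: str) -> dict:
    """Append one prediction record so auditors can later replay and verify it."""
    record = {
        "timestamp": time.time(),
        "student_id": student_id,
        "features": features,          # inputs used for this prediction
        "prediction": prediction,      # model output as produced
        "model_version": model_version # lets auditors match output to model
    }
    log_file.write(json.dumps(record) + "\n")
    return record

def load_audit_log(log_file) -> list:
    """Read all records back for auditing."""
    return [json.loads(line) for line in log_file if line.strip()]
```

Recording the model version alongside inputs and outputs is what makes retrospective auditing possible: a specific claim can be traced to the exact model that produced it.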

Documentation Barrier

Incomplete documentation of AI system functionality and performance limits the ability to verify claims.

Proposed Solutions: Improving documentation standards and integrating comprehensive reporting mechanisms.
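
One possible reporting mechanism is a machine-readable "model card" style document that refuses to emit incomplete documentation, so verifiable claims are always paired with evidence. The schema below is an assumption for illustration, not a standard prescribed by the paper.

```python
# Hedged sketch of a machine-readable model documentation record.
import json

def build_model_card(name: str, version: str, intended_use: str,
                     metrics: dict, evidence_links: list) -> str:
    """Bundle the claims auditors need to verify into one structured document."""
    card = {
        "name": name,
        "version": version,
        "intended_use": intended_use,
        # Claimed performance must be paired with evidence backing it.
        "metrics": metrics,
        "evidence": evidence_links,
    }
    # Reject incomplete documentation instead of silently publishing it.
    missing = [key for key, value in card.items() if not value]
    if missing:
        raise ValueError(f"incomplete documentation: {missing}")
    return json.dumps(card, indent=2)
```

Failing loudly on missing fields turns "improve documentation standards" from a guideline into a check the build pipeline can enforce.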

Ethical Barrier

Concerns regarding bias and fairness in AI predictions, potentially affecting disadvantaged groups.

Proposed Solutions: Implementing frameworks to assess and mitigate algorithmic biases and enhance fairness in AI systems.
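
One simple check such a framework might include is the demographic parity gap: the difference in at-risk flag rates between two groups, where a gap near zero suggests parity on this metric. The sketch below is illustrative and covers only one of many fairness notions.

```python
# Illustrative fairness check: demographic parity gap between two groups.

def flag_rate(flags: list) -> float:
    """Fraction of students in a group flagged as at-risk (1 = flagged)."""
    return sum(flags) / len(flags)

def demographic_parity_gap(flags_group_a: list, flags_group_b: list) -> float:
    """Absolute difference in positive (at-risk) prediction rates."""
    return abs(flag_rate(flags_group_a) - flag_rate(flags_group_b))
```

A large gap does not by itself prove unfair treatment, but it is a concrete, auditable signal that a prediction system may disproportionately flag a disadvantaged group.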

Project Team

Linda Fernsel

Researcher

Yannick Kalff

Researcher

Katharina Simbeck

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Linda Fernsel, Yannick Kalff, Katharina Simbeck

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
