
AI in Education needs interpretable machine learning: Lessons from Open Learner Modelling

Project Overview

The paper examines the role of generative AI in education, focusing on Open Learner Modelling (OLM) as a key component of Intelligent Tutoring Systems (ITS). It argues that AI models must be interpretable and explainable if they are to support personalized learning. By modelling students' knowledge, abilities, and emotional states, generative AI can deliver customized support and feedback, improving educational outcomes. The paper presents several OLM applications that show how AI can enable a more adaptive learning environment, while also addressing significant challenges, notably trust and user control, that must be navigated for successful integration in educational settings. Overall, the findings highlight the potential of generative AI to change how learners engage with educational content, while emphasizing the need for clear communication and transparency in AI-driven solutions.

Key Applications

AI-based Learning Support Systems

Context: Settings include job interview coaching for young people at risk of social exclusion, support for self-assessment and decision-making in problem selection, and exploratory learning environments in which students build models of their own knowledge.

Implementation: Utilizes AI technologies such as Bayesian Knowledge Tracing and AI virtual agents to create interactive learning experiences. The systems simulate real-world scenarios, provide feedback based on user interactions, and visualize knowledge progress through skill assessments and targeted support.
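To make the Bayesian Knowledge Tracing (BKT) technique named above concrete, here is a minimal sketch of its standard update rule: the system maintains a probability that a student has mastered a skill and revises it after each observed answer. The parameter values and function name are illustrative, not taken from the paper.

```python
def bkt_update(p_know, correct, p_transit=0.1, p_slip=0.1, p_guess=0.2):
    """Update the probability that a student knows a skill after one answer.

    Parameters follow the classic BKT formulation:
    p_know    - current estimate that the skill is mastered
    p_transit - chance of learning the skill at this opportunity
    p_slip    - chance a student who knows the skill answers wrongly
    p_guess   - chance a student who lacks the skill answers correctly
    """
    if correct:
        # Posterior given a correct answer: knew it and did not slip,
        # versus did not know it and guessed.
        posterior = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess
        )
    else:
        # Posterior given an incorrect answer: knew it but slipped,
        # versus did not know it and failed to guess.
        posterior = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess)
        )
    # Account for the chance of learning between opportunities.
    return posterior + (1 - posterior) * p_transit


# Trace one skill across a short sequence of answers.
p = 0.3  # illustrative prior probability that the skill is known
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
```

A running estimate like `p` is exactly the kind of internal model quantity that an OLM can surface to the learner, which is why its interpretability matters.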

Outcomes: Enhances student confidence, self-reflection skills, and overall learning outcomes. Users show significant improvement in interview behaviors and decision-making abilities.

Challenges: Requires precise interpretation of user interactions to deliver effective feedback, and maintaining user trust in the system remains difficult.

Implementation Barriers

Trust

Users may distrust their own judgments and the AI system's assessments.

Proposed Solutions: Provide targeted support and clear explanations of how the AI makes decisions.

Complexity

Educational contexts can be complex and require nuanced understanding of student behaviors.

Proposed Solutions: Utilize machine learning to model complex student behaviors and provide individualized support.

Interpretability

AI models need to be interpretable and explainable to foster user trust.

Proposed Solutions: Develop frameworks for interpretable AI and involve users in model adjustments.

Project Team

Cristina Conati

Researcher

Kaska Porayska-Pomsta

Researcher

Manolis Mavrikis

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Cristina Conati, Kaska Porayska-Pomsta, Manolis Mavrikis

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
