
iLLuMinaTE: An LLM-XAI Framework Leveraging Social Science Explanation Theories Towards Actionable Student Performance Feedback

Project Overview

This document describes the application of generative AI in education through the iLLuMinaTE framework, which combines large language models (LLMs) with explainable AI (XAI) to deliver personalized, actionable feedback to students in online courses. By generating natural-language explanations grounded in student behavioral data, iLLuMinaTE aims to make AI systems more understandable and trustworthy for educators and learners, addressing the critical issue of model explainability in educational contexts.

In evaluations, students preferred iLLuMinaTE's explanations over traditional feedback methods, suggesting a positive impact on learning outcomes. The work also highlights engagement metrics, such as video clicks and session participation, as predictors of student success and as inputs for customizing feedback. Finally, it integrates established frameworks, including Hattie's feedback model and Grice's maxims, to keep communication clear and instructionally effective, with the overall goal of improving student understanding and academic performance through the informed use of generative AI.

Key Applications

Personalized Feedback and Explanations

Context: Online courses for university students, including both struggling learners and high achievers. The implementation focuses on enhancing student engagement and understanding through personalized interactions based on student data.

Implementation: The framework utilizes AI models, including Large Language Models (LLMs) and Explainable AI (XAI) techniques, to analyze student engagement and interaction data. It generates tailored explanations and feedback, presenting them through a structured pipeline that emphasizes causal connections and personalized learning pathways.
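The pipeline described above can be sketched roughly as follows. This is a hypothetical illustration of one stage, turning model-derived feature importances into a prompt for an LLM, and is not the authors' actual implementation; the feature names, weights, and prompt wording are all assumptions for illustration.

```python
# Hypothetical sketch of an iLLuMinaTE-style pipeline stage: ranking the
# engagement features that most influenced a prediction and phrasing them
# as context for an LLM to generate actionable, causal feedback.
# Feature names and prompt text are illustrative assumptions.

def build_explanation_prompt(student_features, importances, top_k=3):
    """Select the most influential engagement features and format them
    as context for a natural-language explanation."""
    # Rank features by absolute importance (sign indicates direction).
    ranked = sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = []
    for name, weight in ranked[:top_k]:
        direction = "raises" if weight > 0 else "lowers"
        lines.append(f"- {name} = {student_features[name]} ({direction} predicted success)")
    return (
        "You are a supportive course assistant. Based on these behavioral "
        "signals, explain to the student what to change and why:\n"
        + "\n".join(lines)
    )

features = {"video_clicks": 12, "sessions_per_week": 1, "forum_posts": 0}
weights = {"video_clicks": 0.4, "sessions_per_week": -0.7, "forum_posts": -0.2}
prompt = build_explanation_prompt(features, weights)
print(prompt)
```

In a full system, the resulting prompt would be sent to an LLM, whose output forms the tailored explanation shown to the student.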

Outcomes: Students preferred the AI-generated explanations over traditional methods 89.52% of the time, leading to improved engagement, better understanding of course material, and enhanced performance through actionable feedback.

Challenges: Challenges include variability in the quality of generated explanations, ensuring the accuracy of predictions based on engagement metrics, and maintaining student motivation throughout the learning process.

Implementation Barriers

Technical Barrier

The need for explainable AI methods that are understandable to non-technical users, such as educators and students.

Proposed Solutions: Develop frameworks like iLLuMinaTE that utilize LLMs to enhance the interpretability of AI outputs.

User Acceptance Barrier

Educators and students may distrust AI-based educational technologies due to lack of transparency.

Proposed Solutions: Implement transparent models and training programs to foster trust in AI technologies.

Technical Barrier

The complexity of accurately predicting student success based on engagement metrics can lead to unreliable outcomes.

Proposed Solutions: Continuously improve AI models through feedback loops and incorporate more diverse data sources to refine predictions.
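To make the prediction step concrete, here is a minimal, self-contained sketch of classifying student success from engagement metrics with logistic regression. The synthetic data, feature choices, and training setup are assumptions for illustration only, not the model used in the paper.

```python
import math

# Minimal illustration (not the paper's model): predict pass/fail from
# engagement metrics using logistic regression trained by stochastic
# gradient descent. Features and labels below are synthetic.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=500):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Features per student: [video_clicks, sessions_per_week], scaled to [0, 1].
X = [[0.9, 0.8], [0.7, 0.9], [0.2, 0.1], [0.1, 0.3]]
y = [1, 1, 0, 0]  # 1 = passed the course
w, b = train(X, y)

def predict(x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

print(round(predict([0.8, 0.85]), 2))  # a high-engagement student
```

A feedback loop, as proposed above, would periodically retrain such a model on newly observed outcomes and fold in additional data sources beyond clickstream features.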

Communication Barrier

Students may misinterpret AI-generated feedback due to lack of clarity or relevance.

Proposed Solutions: Adopt communication frameworks like Grice's maxims to ensure feedback is clear, relevant, and actionable.

Project Team

Vinitra Swamy

Researcher

Davide Romano

Researcher

Bhargav Srinivasa Desikan

Researcher

Oana-Maria Camburu

Researcher

Tanja Käser

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Vinitra Swamy, Davide Romano, Bhargav Srinivasa Desikan, Oana-Maria Camburu, Tanja Käser

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
