
Explainable Student Performance Prediction With Personalized Attention for Explaining Why A Student Fails

Project Overview

This document examines an application of AI in education: the Explainable Student Performance Prediction method with Personalized Attention (ESPA), which aims to improve the prediction of student outcomes. Explainability is central to the approach, since interpretable predictions allow educators to implement timely interventions. ESPA combines a Bidirectional Long Short-Term Memory (BiLSTM) architecture with attention mechanisms to analyze student data and generate interpretable predictions. The authors report that this approach outperforms existing prediction models while providing deeper insight into the factors that influence student performance. The method thus not only improves predictive accuracy but also supports educators in fostering better educational outcomes through targeted assistance.

Key Applications

Explainable Student Performance Prediction method with Personalized Attention (ESPA)

Context: Higher education, targeting educators and institutions seeking to predict student performance and identify at-risk students.

Implementation: Utilizes relationships in student profiles and course knowledge graphs, employing a BiLSTM architecture and attention mechanisms to predict student outcomes.

Outcomes: Outperforms state-of-the-art models in predicting student performance while providing explainable insights into predictions.

Challenges: Existing methods often lack explainability, making it difficult for educators to trust predictions and intervene appropriately.
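The paper's exact personalized-attention formulation is not reproduced in this summary, but the core mechanism it builds on can be sketched generically: an attention layer scores each hidden state produced by the BiLSTM (one per student interaction), normalizes the scores into weights that sum to 1, and pools the sequence into a context vector. The parameter names (`W_a`, `v_a`) and the random stand-in for BiLSTM outputs below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def additive_attention(H, W_a, v_a):
    """Score each time step of H and pool the sequence.

    H   : (T, d) sequence of BiLSTM hidden states (one row per interaction)
    W_a : (d, d) attention projection (hypothetical parameter)
    v_a : (d,)   attention scoring vector (hypothetical parameter)
    Returns (alpha, context): per-step weights summing to 1, and the
    weighted summary vector fed to the downstream predictor.
    """
    scores = np.tanh(H @ W_a) @ v_a   # (T,) one relevance score per time step
    alpha = softmax(scores)           # weights sum to 1 -> interpretable
    context = alpha @ H               # (d,) attention-weighted pooling
    return alpha, context

rng = np.random.default_rng(0)
T, d = 6, 8                           # 6 interactions, hidden size 8
H = rng.normal(size=(T, d))           # stand-in for real BiLSTM outputs
alpha, context = additive_attention(H, rng.normal(size=(d, d)),
                                    rng.normal(size=d))
```

Because the weights `alpha` form a probability distribution over the student's interactions, they double as an importance ranking: the time steps with the largest weights are the ones the model relied on most for its prediction.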

Implementation Barriers

Technical Barrier

The complexity of processing and analyzing large volumes of heterogeneous educational data.

Proposed Solutions: Utilization of advanced deep learning architectures (e.g., BiLSTM) and attention mechanisms to effectively process data.

Explainability Barrier

The lack of transparency in traditional prediction models, which are often seen as 'black boxes'.

Proposed Solutions: Incorporating explainable AI techniques within the model to clarify how predictions are made, enhancing educator trust.
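One simple way such attention weights become an educator-facing explanation is to rank the weighted inputs and surface the top few. The feature names and weight values below are purely illustrative assumptions, not outputs of the ESPA model.

```python
# Hypothetical per-feature attention weights for one at-risk student.
weights = {
    "Week 3 quiz score": 0.41,
    "Forum participation": 0.08,
    "Assignment 2 grade": 0.33,
    "Lecture attendance": 0.18,
}

def explain(weights, top_k=2):
    # Rank features by attention weight and format the strongest drivers
    # of the prediction as a short, human-readable list.
    ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    return [f"{name} (weight {w:.2f})" for name, w in ranked[:top_k]]

explanation = explain(weights)
```

A report built this way tells an educator not just *that* a student is at risk, but *which* behaviors drove the prediction, which is the transparency gap the proposed solution targets.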

Project Team

Kun Niu

Researcher

Xipeng Cao

Researcher

Yicong Yu

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Kun Niu, Xipeng Cao, Yicong Yu

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
