
Interpreting Deep Knowledge Tracing Model on EdNet Dataset

Project Overview

This document summarizes research on applying deep learning-based knowledge tracing (DLKT) models to the EdNet dataset, with a focus on interpretability. It highlights how post-hoc interpretation techniques, in particular Layer-wise Relevance Propagation (LRP), can explain the model's decision-making and substantially improve the transparency of DLKT models, which is crucial for their adoption in educational contexts. The research also acknowledges remaining challenges, particularly in handling long interaction sequences and the hierarchical nature of educational data. Overall, the document underscores the potential of deep learning to advance knowledge tracing in education, while calling for further work on optimizing these models for both interpretability and effectiveness.

Key Applications

Deep learning-based knowledge tracing (DLKT) models on the EdNet dataset

Context: Educational context focusing on learner modeling for AI tutoring services; target audience includes educators and researchers in educational technology.

Implementation: Built a DLKT model on the large-scale EdNet dataset, applying LSTM units to handle sequential learner-interaction data.
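To make the data flow concrete, here is a minimal sketch (in numpy, with random untrained weights) of how a DKT-style model processes a learner's interaction sequence: each (skill, correctness) pair is one-hot encoded, fed through an LSTM cell, and a sigmoid readout predicts per-skill correctness probabilities. This is an illustration only, not the authors' implementation; all class names, dimensions, and encodings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyDKT:
    """Illustrative DKT-style model: one LSTM cell over interaction
    one-hots, then a sigmoid readout predicting per-skill correctness.
    Weights are random -- this shows the data flow, not training."""

    def __init__(self, n_skills, hidden=16):
        self.n_skills = n_skills
        self.hidden = hidden
        d_in = 2 * n_skills  # one-hot slot per (skill, correct) pair
        # Gate weights for input, forget, output, and candidate gates, stacked.
        self.W = rng.normal(0.0, 0.1, (4 * hidden, d_in + hidden))
        self.b = np.zeros(4 * hidden)
        # Readout mapping hidden state to per-skill logits.
        self.V = rng.normal(0.0, 0.1, (n_skills, hidden))
        self.c = np.zeros(n_skills)

    def step(self, x, h, cell):
        """One LSTM cell update given input x and previous (h, cell)."""
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        cell = f * cell + i * g
        h = o * np.tanh(cell)
        return h, cell

    def predict(self, interactions):
        """Run the sequence of (skill_id, correct) pairs; return
        predicted P(correct) for every skill after the last step."""
        h = np.zeros(self.hidden)
        cell = np.zeros(self.hidden)
        for skill, correct in interactions:
            x = np.zeros(2 * self.n_skills)
            x[skill + correct * self.n_skills] = 1.0  # one-hot encode
            h, cell = self.step(x, h, cell)
        return sigmoid(self.V @ h + self.c)

# Example: three interactions over a 5-skill toy domain.
probs = TinyDKT(n_skills=5).predict([(0, 1), (2, 0), (0, 1)])
print(probs.shape)  # (5,)
```

In a real DLKT system the gate weights and readout would be learned by minimizing cross-entropy against observed correctness, but the encode-recur-readout structure is the same.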

Outcomes: Achieved effective interpretation of model predictions and demonstrated the potential of the LRP method for understanding learner knowledge states.

Challenges: Interpretability issues of DLKT models hinder their practical application; large dataset size introduces complexity in analysis.

Implementation Barriers

Technical Barrier

Lack of interpretability in DLKT models impedes practical applications.

Proposed Solutions: Adoption of post-hoc interpretation methods such as Layer-wise Relevance Propagation (LRP) to enhance model transparency.
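The core idea of LRP is to start from a prediction and redistribute its "relevance" backwards, layer by layer, so that each input receives a share proportional to its contribution. The sketch below shows the standard epsilon rule for linear layers on a tiny random feedforward net; it is a generic illustration of LRP, not the authors' method or their LSTM-specific propagation rules, and all variable names are assumptions.

```python
import numpy as np

def lrp_epsilon(W, x, relevance_out, eps=1e-6):
    """Epsilon-rule LRP for one linear layer y = W @ x:
    redistribute each output neuron's relevance to the inputs in
    proportion to their contributions z_ij = W_ij * x_j."""
    z = W * x                                  # contributions, shape (out, in)
    denom = z.sum(axis=1)                      # pre-activations, shape (out,)
    denom = denom + eps * np.sign(denom)       # stabilize near-zero sums
    return (z / denom[:, None]).T @ relevance_out

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 6))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=6)

# Forward pass: one ReLU hidden layer, linear output.
h = np.maximum(W1 @ x, 0.0)
y = W2 @ h

# Backward relevance pass: start at the output, propagate to the input.
# ReLU is treated as identity for relevance, as in standard LRP.
R_h = lrp_epsilon(W2, h, y)
R_x = lrp_epsilon(W1, x, R_h)
```

A useful sanity check is the conservation property: the input relevances `R_x` sum (up to the epsilon stabilizer) to the total output `y.sum()`, so the explanation accounts for the whole prediction. In the knowledge-tracing setting, the inputs would be the learner's past interactions, and `R_x` would indicate which of them most influenced the predicted response.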

Project Team

Deliang Wang

Researcher

Yu Lu

Researcher

Qinggang Meng

Researcher

Penghe Chen

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Deliang Wang, Yu Lu, Qinggang Meng, Penghe Chen

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
