An Empirical Comparison of Deep Learning Models for Knowledge Tracing on Large-Scale Dataset
Project Overview
This project explores the application of generative AI in education through deep learning models for Knowledge Tracing (KT), which predict student performance from past interaction data. It examines the effectiveness of four models: Deep Knowledge Tracing (DKT), Dynamic Key-Value Memory Network (DKVMN), Self-Attention for Knowledge Tracing (SAKT), and Relation-aware Self-Attention for Knowledge Tracing (RKT), evaluated on a large-scale student performance dataset. The findings suggest that integrating contextual information and modeling student forget behavior significantly improves the accuracy of performance predictions. Accurate predictions help educators tailor instruction to individual learning needs and provide timely insight into student progress, demonstrating the potential of these models to support adaptive, personalized educational experiences.
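As a concrete illustration of the KT setup, the following is a minimal sketch of a DKT-style model, assuming a PyTorch environment. Interactions are one-hot encoded (skill, correctness) pairs, and an LSTM predicts the probability of a correct response on each skill at the next step. The class name, layer sizes, and usage are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a DKT-style model (assumed PyTorch setup, illustrative only).
import torch
import torch.nn as nn

class DKT(nn.Module):
    def __init__(self, num_skills: int, hidden_size: int = 128):
        super().__init__()
        # Input is one-hot of size 2 * num_skills: (skill, incorrect) and (skill, correct)
        self.lstm = nn.LSTM(2 * num_skills, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, num_skills)

    def forward(self, interactions: torch.Tensor) -> torch.Tensor:
        # interactions: (batch, seq_len, 2 * num_skills) one-hot interaction history
        hidden, _ = self.lstm(interactions)
        # Probability of answering each skill correctly at the next step
        return torch.sigmoid(self.out(hidden))

# Hypothetical usage: 50 skills, 4 students, sequences of 20 interactions
num_skills = 50
model = DKT(num_skills)
x = torch.zeros(4, 20, 2 * num_skills)  # placeholder one-hot interaction tensor
pred = model(x)                          # (4, 20, 50) per-skill correctness probabilities
```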
Key Applications
Deep Knowledge Tracing (DKT), Dynamic Key-Value Memory Network (DKVMN), Self-Attention for Knowledge Tracing (SAKT), Relation-aware Self-Attention for Knowledge Tracing (RKT)
Context: Educational context involving personalized learning and feedback for students based on their past interactions with learning materials.
Implementation: The models were trained and evaluated on large-scale student-interaction datasets, applying deep learning techniques to analyze past performance and predict the correctness of future responses.
Outcomes: RKT consistently outperformed the other models by capturing relationships between exercises and modeling forget behavior (see the sketch below), leading to more accurate predictions of student performance.
Challenges: Model complexity and the need for large datasets can be barriers to adoption in smaller educational settings.
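The outcome above attributes RKT's advantage to combining exercise relationships with forget behavior. Below is a minimal sketch of that idea, assuming a PyTorch environment: content-based self-attention scores are blended with relation coefficients weighted by an exponential forgetting kernel over time gaps. The function name, the blending weight lam, the decay constant tau, and the omission of causal masking are simplifying assumptions, not the authors' exact formulation.

```python
# Sketch of relation-aware attention with an exponential forgetting kernel
# (illustrative assumption, not the RKT paper's exact equations).
import torch
import torch.nn.functional as F

def relation_aware_attention(q, k, v, relation, time_gaps, lam=0.5, tau=1.0):
    # q, k, v: (seq_len, d) query/key/value projections of past interactions
    # relation: (seq_len, seq_len) exercise-relation coefficients (e.g., co-occurrence)
    # time_gaps: (seq_len, seq_len) elapsed time between interactions
    d = q.size(-1)
    attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)    # content-based attention
    forget = torch.exp(-time_gaps / tau)            # exponential forgetting kernel
    rel = F.softmax(relation * forget, dim=-1)      # relation + recency coefficients
    weights = lam * attn + (1 - lam) * rel          # blend the two signals
    return weights @ v                              # relation-aware context vectors

# Hypothetical usage: 5 interactions with 16-dimensional embeddings
seq_len, d = 5, 16
q = k = v = torch.randn(seq_len, d)
relation = torch.rand(seq_len, seq_len)
time_gaps = torch.rand(seq_len, seq_len) * 24
context = relation_aware_attention(q, k, v, relation, time_gaps)  # (5, 16)
```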
Implementation Barriers
Technical
The complexity of implementing deep learning models may hinder adoption in educational institutions with limited resources.
Proposed Solutions: Simplifying model architectures or developing user-friendly tools that require less technical expertise could facilitate wider adoption.
Data Availability
Models require large-scale datasets for effective training and validation, which may not be available in all educational settings.
Proposed Solutions: Collaborating with educational platforms to access data or creating synthetic datasets (sketched below) could help overcome this challenge.
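As one illustration of the synthetic-data route, the sketch below generates artificial student-interaction sequences from a simple logistic response model in which practice on a skill raises the chance of a correct answer. The generative assumptions (student ability, skill difficulty, per-practice learning gain) are hypothetical and not drawn from the paper.

```python
# Sketch of synthetic knowledge-tracing data generation (illustrative assumptions).
import random
import math

def generate_student(num_skills=10, seq_len=50, learn_gain=0.3):
    ability = random.gauss(0, 1)
    difficulty = [random.gauss(0, 1) for _ in range(num_skills)]
    practice = [0] * num_skills
    sequence = []
    for _ in range(seq_len):
        skill = random.randrange(num_skills)
        # Logistic response model: more practice on a skill raises success probability
        logit = ability - difficulty[skill] + learn_gain * practice[skill]
        correct = int(random.random() < 1 / (1 + math.exp(-logit)))
        sequence.append((skill, correct))
        practice[skill] += 1
    return sequence

# Hypothetical usage: 100 synthetic students for model prototyping
dataset = [generate_student() for _ in range(100)]
```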
Project Team
Shalini Pandey
Researcher
George Karypis
Researcher
Jaideep Srivastava
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Shalini Pandey, George Karypis, Jaideep Srivastava
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI