Generative Adversarial Networks for Imputing Sparse Learning Performance

Project Overview

This project explores the role of generative AI in education, focusing on Generative Adversarial Networks (GANs) and the Generative Adversarial Imputation Networks (GAIN) framework. It addresses data sparsity in Intelligent Tutoring Systems (ITSs), which hinders accurate assessment of learner performance. Using GAIN to impute missing learning performance data, the study reports significant improvements over traditional imputation methods, enabling better tracking and evaluation of learner progress. The findings suggest that accurately representing learning performance data is essential for personalized education, and they highlight generative AI's potential to advance educational analytics and modeling.

Key Applications

Generative Adversarial Imputation Networks (GAIN)

Context: Data imputation in Intelligent Tutoring Systems (ITSs) for tracking learner performance

Implementation: Using a customized version of GAIN to impute sparse learning performance data represented as a 3D tensor

Outcomes: Improved imputation accuracy compared to traditional methods, enabling better tracking of learner progress and performance assessment.

Challenges: Data sparsity, complexities of learning performance data, and the need for model stability and tuning.
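The GAIN workflow above hinges on a mask that marks observed entries, a generator that fills the missing ones, and a hint matrix that partially reveals the mask to the discriminator. The following is a minimal numpy sketch of that data flow only; the tensor shape, sparsity level, hint rate, and the trivial mean-fill stand-in for the generator network are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse learning-performance tensor: (learners, questions, attempts),
# with NaN marking unobserved entries (shape and sparsity are illustrative).
X = rng.random((20, 10, 5))
X[rng.random(X.shape) < 0.6] = np.nan  # ~60% of entries missing

M = (~np.isnan(X)).astype(float)       # mask: 1 = observed, 0 = missing
Z = rng.random(X.shape)                # noise fed to the generator
X_in = np.nan_to_num(X) * M + Z * (1 - M)

def generator(x, m):
    """Stand-in for GAIN's generator network: a per-(question, attempt)
    mean fill, so the data flow runs without a deep-learning stack."""
    col_mean = (x * m).sum(axis=0) / np.maximum(m.sum(axis=0), 1)
    return x * m + col_mean * (1 - m)

G_out = generator(X_in, M)
# GAIN keeps observed values and uses generator output only where missing.
X_hat = M * np.nan_to_num(X) + (1 - M) * G_out

# Hint matrix: reveals most of the mask to the discriminator,
# leaving the rest ambiguous at 0.5 (hint rate 0.9 is an assumption).
B = (rng.random(X.shape) < 0.9).astype(float)
H = B * M + 0.5 * (1 - B)
```

In the full framework the generator and discriminator are trained adversarially, with the hint matrix H preventing the discriminator from trivially recovering the mask; this sketch only shows how the tensors are assembled.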

Implementation Barriers

Technical Barrier

Data sparsity and the complexity of representing learning performance data pose modeling challenges. In addition, GAIN's performance varies across datasets and across repeated runs, which can yield inconsistent results.

Proposed Solutions: Utilizing advanced generative models like GAIN, refined architectures, exploring additional datasets for enhanced performance, and implementing additional tuning or pre-processing to stabilize the model's performance.
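The variability concern above can be probed with a simple holdout protocol: repeatedly hide a fraction of the observed entries, impute them, and compare RMSE across random seeds. This is a generic evaluation sketch under assumed names and shapes, not the paper's protocol, and the imputer here is a naive mean-fill stand-in rather than GAIN.

```python
import numpy as np

def imputation_rmse(X, mask, impute_fn, holdout_frac=0.2, seed=0):
    """Hide a fraction of observed entries, impute, and score RMSE
    on the hidden cells (a generic check, not the paper's setup)."""
    rng = np.random.default_rng(seed)
    obs_idx = np.argwhere(mask == 1)
    n_hold = max(1, int(holdout_frac * len(obs_idx)))
    hold = obs_idx[rng.choice(len(obs_idx), n_hold, replace=False)]
    m = mask.copy()
    m[tuple(hold.T)] = 0                      # hide held-out entries
    X_hat = impute_fn(X, m)
    err = X_hat[tuple(hold.T)] - X[tuple(hold.T)]
    return float(np.sqrt(np.mean(err ** 2)))

def mean_fill(X, m):
    # Naive baseline imputer: global mean of the visible entries.
    mu = (X * m).sum() / max(m.sum(), 1)
    return X * m + mu * (1 - m)

rng = np.random.default_rng(1)
X = rng.random((30, 8))                       # illustrative data
mask = (rng.random(X.shape) < 0.5).astype(float)

# Stability check: spread of RMSE across random holdouts/seeds.
scores = [imputation_rmse(X, mask, mean_fill, seed=s) for s in range(5)]
```

A large spread in `scores` across seeds signals the kind of run-to-run instability the barrier describes, and is one way to quantify whether additional tuning or pre-processing has stabilized a model.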

Project Team

Liang Zhang

Researcher

Mohammed Yeasin

Researcher

Jionghao Lin

Researcher

Felix Havugimana

Researcher

Xiangen Hu

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Liang Zhang, Mohammed Yeasin, Jionghao Lin, Felix Havugimana, Xiangen Hu

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
