
Interpret3C: Interpretable Student Clustering Through Individualized Feature Selection

Project Overview

This document summarizes Interpret3C, an interpretable clustering methodology for massive open online courses (MOOCs) and an example of applying AI in education. Clustering high-dimensional student behavior data is difficult because traditional methods treat every feature as equally relevant for every student; Interpret3C instead uses interpretable neural networks to select features individually per student, so clusters reflect the behaviors that actually distinguish each learner. The study reports that this individualized feature selection improves the interpretability and robustness of the clustering results, yielding insights into student engagement and performance patterns that can inform curriculum design and targeted interventions.

Key Applications

Interpret3C (Interpretable Conditional Computation Clustering)

Context: Massive Open Online Courses (MOOCs) with over 5,000 students

Implementation: Developed a clustering pipeline that uses interpretable neural networks to select features individually for each student, then clusters students on the selected features.

Outcomes: Identified six distinct behavioral clusters among students, facilitating personalized interventions and insights into student engagement and performance.

Challenges: High-dimensional data poses challenges in interpretability and robustness; traditional clustering methods may overlook individual differences in feature importance.
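The two-stage pipeline described above can be sketched minimally: a per-student gate decides which behavioral features matter for that student, and clustering then runs on the gated representation. The sketch below is a toy stand-in on synthetic data, not the paper's implementation; in particular, the gate here is a simple deviation-from-cohort heuristic, whereas Interpret3C learns its gates with a trained interpretable neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))  # synthetic: 300 students x 12 behavioral features

# Stand-in for the interpretable network: soft per-student gates that keep a
# feature only when the student deviates notably from the cohort mean on it.
z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))   # per-student z-scores
gates = 1.0 / (1.0 + np.exp(-(z - 1.0)))           # sigmoid gates in (0, 1)
X_gated = gates * X                                # individualized feature selection

def kmeans(X, k, iters=50, seed=0):
    """Plain K-Means on the gated features (NumPy-only, for illustration)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(X_gated, k=6)  # six clusters, mirroring the reported outcome
```

Because the gates are per-student, inspecting a student's gate vector directly shows which features drove that student's cluster assignment, which is the source of the method's interpretability.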

Implementation Barriers

Technical

The curse of dimensionality affects clustering performance and interpretability due to the sparsity of data in high-dimensional spaces.

Proposed Solutions: Use interpretable neural networks for individualized feature selection to improve the robustness and clarity of the clustering output.
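The curse-of-dimensionality problem can be illustrated numerically: as the dimension of random feature vectors grows, pairwise distances concentrate, so the nearest and farthest neighbors become nearly indistinguishable and distance-based clustering loses discriminative power. The following self-contained demonstration uses synthetic Gaussian data (not data from the paper):

```python
import numpy as np

def distance_contrast(dim, n=100, seed=1):
    """Spread of pairwise distances relative to the minimum distance.

    Large values mean distances are informative; values near zero mean
    all points look roughly equidistant (distance concentration).
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, dim))
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T      # squared pairwise distances
    d = np.sqrt(np.maximum(d2[np.triu_indices(n, k=1)], 0.0))
    return (d.max() - d.min()) / d.min()

low_dim = distance_contrast(2)     # distances vary widely in 2-D
high_dim = distance_contrast(500)  # distances concentrate in 500-D
```

Selecting a small, individually relevant subset of features, as Interpret3C does, keeps the effective dimension low and so preserves the distance contrast that clustering depends on.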

Bias

Reliance on expert-selected features can introduce subjective biases and may not represent the full spectrum of student behaviors.

Proposed Solutions: Implementing data-driven feature selection methods that account for individual differences in feature importance.

Project Team

Isadora Salles

Researcher

Paola Mejia-Domenzain

Researcher

Vinitra Swamy

Researcher

Julian Blackwell

Researcher

Tanja Käser

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Isadora Salles, Paola Mejia-Domenzain, Vinitra Swamy, Julian Blackwell, Tanja Käser

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
