
Scalable and Equitable Math Problem Solving Strategy Prediction in Big Educational Data

Project Overview

The document explores the application of generative AI in education, focusing on a scalable and equitable approach to predicting math problem-solving strategies. It highlights Intelligent Tutoring Systems (ITSs) and Adaptive Instructional Systems (AISs) that employ machine learning techniques, including deep neural networks (DNNs) and non-parametric clustering, to analyze student data. The methodology aims to identify and predict the diverse problem-solving strategies used by students of varying skill levels, while minimizing bias and ensuring equitable treatment across learner groups. The findings suggest that generative AI can enhance personalized learning by providing insight into individual learning paths, ultimately improving both outcomes and accessibility in math education.

Key Applications

Intelligent Tutoring Systems (ITS) and Adaptive Instructional Systems (AIS)

Context: Educational technology for middle school students using online math learning platforms

Implementation: Utilized machine learning techniques to analyze student interactions and predict problem-solving strategies through embeddings and clustering methods.

Outcomes: Achieved high accuracy in predicting math strategies with a small sample of data and ensured predictive equality across diverse student skill levels.

Challenges: Complexity in identifying and clustering symmetric strategies and ensuring fairness in predictions across different skill levels.
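The pipeline above can be pictured as two steps: embed a student's interaction log as a vector, then assign it to the nearest strategy cluster. The sketch below is a minimal illustration of that idea, not the paper's actual model; the action vocabulary, the count-based embedding, and the cluster names are all assumptions standing in for the learned embeddings and clusters described in the project.

```python
import math

# Hypothetical action vocabulary for an online math platform (assumption).
ACTIONS = ["attempt", "hint", "answer", "retry"]

def embed(interactions):
    """Represent an interaction log as a normalized action-count vector,
    a simple stand-in for a learned student embedding."""
    counts = [interactions.count(a) for a in ACTIONS]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def predict_strategy(interactions, centroids):
    """Assign the interaction embedding to the nearest strategy centroid."""
    v = embed(interactions)
    def dist(name):
        return sum((x - y) ** 2 for x, y in zip(v, centroids[name]))
    return min(centroids, key=dist)

# Illustrative cluster centers, not taken from the paper.
centroids = {
    "hint-driven": embed(["hint", "hint", "attempt", "answer"]),
    "direct":      embed(["attempt", "answer"]),
}
print(predict_strategy(["hint", "attempt", "hint", "answer"], centroids))
# → hint-driven
```

In practice the embedding would be learned by a DNN from large-scale interaction data rather than computed from raw action counts, but the prediction step, nearest cluster in embedding space, has the same shape.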

Implementation Barriers

Technical Complexity

Identifying and clustering different problem-solving strategies in a scalable manner is complex due to the diversity of strategies students use.

Proposed Solutions: Developing a non-parametric clustering method that identifies approximately symmetric strategies and embedding models to represent student mastery.
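One way to read "non-parametric" here is that the number of clusters is not fixed in advance, and "approximately symmetric" strategies are ones that apply roughly the same steps, possibly in a different order. A minimal sketch under those assumptions: leader clustering over order-insensitive step multisets, with a Jaccard-similarity threshold. The step names and threshold are illustrative, not from the paper.

```python
from collections import Counter

def jaccard(a, b):
    """Jaccard similarity between two step multisets (Counters)."""
    inter = sum((a & b).values())
    union = sum((a | b).values())
    return inter / union if union else 1.0

def cluster_strategies(strategies, threshold=0.8):
    """Leader clustering: no preset cluster count (non-parametric).
    Strategies using approximately the same steps -- possibly reordered,
    i.e. symmetric -- fall into the same cluster."""
    leaders, clusters = [], []
    for steps in strategies:
        bag = Counter(steps)  # order-insensitive representation
        for i, leader in enumerate(leaders):
            if jaccard(bag, leader) >= threshold:
                clusters[i].append(steps)
                break
        else:
            leaders.append(bag)
            clusters.append([steps])
    return clusters

strategies = [
    ["distribute", "combine", "divide"],
    ["combine", "distribute", "divide"],   # same steps, different order
    ["guess", "check"],
]
groups = cluster_strategies(strategies)
print(len(groups))  # → 2: the two symmetric strategies merge
```

Because clusters are created on demand whenever a strategy is far from every existing leader, the method scales to new, previously unseen strategies without retraining a fixed-size model.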

Bias and Fairness

Deep neural networks may produce biased results if trained on imbalanced datasets, leading to disparate treatment of different student groups.

Proposed Solutions: Using sampling techniques to ensure representation of all skill levels and refining the clustering method to improve fairness in predictions.
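A common concrete form of such a sampling technique is stratified sampling: draw an equal-sized sample from each skill-level group so that no group dominates training. The sketch below is illustrative; the record fields and group sizes are assumptions, not data from the project.

```python
import random

def stratified_sample(records, key, per_group, seed=0):
    """Draw up to `per_group` records from each group defined by `key`,
    so every skill level is represented in the training set."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    groups = {}
    for r in records:
        groups.setdefault(key(r), []).append(r)
    sample = []
    for members in groups.values():
        k = min(per_group, len(members))
        sample.extend(rng.sample(members, k))
    return sample

# Illustrative imbalanced dataset: many high-skill, few low-skill students.
students = (
    [{"id": i, "skill": "high"} for i in range(50)]
    + [{"id": 100 + i, "skill": "low"} for i in range(5)]
)
balanced = stratified_sample(students, key=lambda s: s["skill"], per_group=5)
print(len(balanced))  # → 10: five from each skill level
```

Training a DNN on the balanced sample rather than the raw data reduces the risk that the model's errors concentrate on the under-represented group, which is the disparate-treatment concern raised above.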

Project Team

Anup Shakya

Researcher

Vasile Rus

Researcher

Deepak Venugopal

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Anup Shakya, Vasile Rus, Deepak Venugopal

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
