
Option Tracing: Beyond Correctness Analysis in Knowledge Tracing

Project Overview

The document explores the integration of generative AI in education through advances in knowledge tracing (KT), with a focus on Option Tracing (OT). Whereas traditional KT methods assess only the binary correctness of student responses, OT predicts the specific option a student selects on each multiple-choice question (MCQ), recovering information that correctness-only analysis discards. Building on this, the work proposes a new framework for error diagnosis, underscoring that accurately identifying the errors behind incorrect choices is essential for personalized feedback. It presents a range of models and methodologies designed to improve the accuracy of predicting full student responses, with the aim of fostering better learning outcomes. The findings indicate that leveraging generative AI for this more nuanced analysis of student interactions can enhance educational experiences by tailoring instruction to individual needs and promoting deeper understanding.
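
To make the distinction concrete, the snippet below contrasts the prediction target of correctness-only KT with that of OT on hypothetical interaction records. The field names and values are illustrative assumptions, not the format of any dataset used in the paper.

```python
# Hypothetical interaction records contrasting the prediction targets of
# traditional KT (binary correctness) and OT (the exact option chosen).
# Field names and values are assumptions for exposition only.

# Traditional KT target: was the response correct?
kt_interactions = [
    {"student": "s1", "question": "q7", "correct": 0},
    {"student": "s1", "question": "q9", "correct": 1},
]

# OT target: which option was selected? Incorrect responses keep information
# about *which* error the student made, not just that an error occurred.
ot_interactions = [
    {"student": "s1", "question": "q7", "option": "B"},  # a specific distractor
    {"student": "s1", "question": "q9", "option": "A"},  # the correct option
]
```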

Key Applications

Option Tracing (OT)

Context: Educational context involving large-scale datasets of student responses, particularly in intelligent tutoring systems.

Implementation: OT methods were developed to predict the exact option a student selects on each MCQ, extending existing KT approaches based on LSTMs and graph convolutional networks (GCNs); a minimal model sketch follows this list.

Outcomes: Improved ability to diagnose specific student errors, provide targeted feedback, and enhance learning outcomes through personalized instruction.

Challenges: Difficulty clustering incorrect options into meaningful groups, low predictive performance on less frequently selected options, and class imbalance in the prediction targets.
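
As referenced above, a minimal sketch of how an LSTM-based KT model can be extended to Option Tracing is shown below. It assumes a PyTorch backbone with a joint (question, option) interaction embedding and a multi-class output head; the dimensions, embedding scheme, and the class name OptionTracingLSTM are illustrative assumptions, not the architecture reported in the paper.

```python
# A minimal PyTorch sketch: an LSTM-based KT backbone whose output head produces a
# distribution over MCQ options instead of a single correctness probability.
# Dimensions and the embedding scheme are illustrative assumptions.
import torch
import torch.nn as nn

class OptionTracingLSTM(nn.Module):
    def __init__(self, num_questions: int, num_options: int, emb_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.num_options = num_options
        # Embed each (question, chosen option) pair as one interaction token.
        self.interaction_emb = nn.Embedding(num_questions * num_options, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Multi-class head: one logit per option, rather than one correctness logit.
        self.option_head = nn.Linear(hidden_dim, num_options)

    def forward(self, question_ids: torch.Tensor, option_ids: torch.Tensor) -> torch.Tensor:
        # question_ids, option_ids: (batch, seq_len) integer tensors of past interactions.
        x = self.interaction_emb(question_ids * self.num_options + option_ids)
        h, _ = self.lstm(x)
        # Logits over the options a student might choose at the next step.
        return self.option_head(h)

# Training would minimize nn.CrossEntropyLoss over option indices,
# in place of the binary cross-entropy used for correctness-only KT.
```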

Implementation Barriers

Technical Challenge and Data Quality

Existing KT methods analyze only the binary correctness of responses, which makes it difficult to diagnose specific student errors. Compounding this, many MCQs lack consistent labels for the errors underlying each incorrect option.

Proposed Solutions: Developing OT methods that leverage full student response data to identify the errors underlying each incorrect option, and exploring automated approaches, such as clustering incorrect options by error pattern (sketched below), that could reduce the need for manual labeling.
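
One automated approach consistent with the idea above is to cluster the embeddings of incorrect options so that distractors reflecting similar errors fall into the same group for human review. The sketch below uses scikit-learn's KMeans on embeddings assumed to come from a trained OT model; it is an illustration, not the paper's labeling procedure.

```python
# Sketch: group incorrect options into candidate error categories by clustering
# their embeddings (e.g., taken from a trained OT model) with k-means.
# Assumes that distractors reflecting similar errors lie close together in embedding space.
import numpy as np
from sklearn.cluster import KMeans

def cluster_incorrect_options(option_embeddings: np.ndarray, option_ids: list, n_clusters: int = 5) -> dict:
    """option_embeddings: (num_incorrect_options, dim) embeddings of distractors only."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(option_embeddings)
    clusters: dict = {}
    for opt_id, label in zip(option_ids, labels):
        clusters.setdefault(int(label), []).append(opt_id)
    return clusters  # each cluster is a candidate "error type" for manual inspection

# Example call with random embeddings standing in for learned ones:
# cluster_incorrect_options(np.random.rand(40, 16), [f"opt_{i}" for i in range(40)])
```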

Predictive Performance

Reported low F1 scores indicate challenges in accurately predicting less frequently selected options.

Proposed Solutions: Investigating oversampling techniques or other methods to improve prediction accuracy for options that are rarely selected.
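
Two common remedies for this kind of class imbalance are inverse-frequency class weighting in the loss and random oversampling of interactions whose chosen option is rare. The sketch below illustrates both under assumed data shapes; neither is claimed to be the method investigated in the paper.

```python
# Sketch of two imbalance remedies: (1) inverse-frequency class weights for the loss,
# and (2) random oversampling of interactions whose chosen option is rare.
import numpy as np
import torch
from collections import Counter

def inverse_frequency_weights(option_labels: list, num_options: int) -> torch.Tensor:
    """Higher weight for options that appear less often (with +1 smoothing)."""
    counts = Counter(option_labels)
    freqs = np.array([counts.get(o, 0) + 1 for o in range(num_options)], dtype=float)
    weights = freqs.sum() / (num_options * freqs)
    return torch.tensor(weights, dtype=torch.float32)

# Usage: criterion = torch.nn.CrossEntropyLoss(weight=inverse_frequency_weights(labels, K))

def random_oversample(indices: list, option_labels: list, seed: int = 0) -> list:
    """Duplicate samples of rare options until every option matches the most frequent one."""
    rng = np.random.default_rng(seed)
    counts = Counter(option_labels)
    target = max(counts.values())
    resampled = list(indices)
    for option, count in counts.items():
        pool = [i for i, lab in zip(indices, option_labels) if lab == option]
        extra = rng.choice(pool, size=target - count, replace=True)
        resampled.extend(extra.tolist())
    return resampled
```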

Project Team

Aritra Ghosh

Researcher

Jay Raspat

Researcher

Andrew Lan

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Aritra Ghosh, Jay Raspat, Andrew Lan

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
