
Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs

Project Overview

The document explores the application of generative AI, particularly large language models (LLMs), in education, focusing on their role in enhancing personalized learning through AI-powered tutoring chatbots. It describes a new method, dialogueKT, which uses LLMs to evaluate student responses, track knowledge acquisition, and forecast answer correctness, and reports notable improvements over conventional knowledge tracing techniques. The findings indicate that LLMs can substantially improve tutoring effectiveness by providing tailored feedback and insights into student understanding. However, the document also addresses challenges such as the complexities of open-ended dialogue interactions and the limitations of current datasets, which may affect the implementation and scalability of these AI-driven educational tools. Overall, the integration of generative AI in education presents promising opportunities for personalized learning, while highlighting the need to address existing challenges in order to fully realize its potential.

Key Applications

AI-powered tutoring and knowledge tracing systems using LLMs

Context: These systems analyze tutor-student dialogues to provide personalized education, estimate student knowledge, and predict response correctness. They aim to enhance learning experiences for students by making high-quality tutoring scalable and adaptable to individual needs.
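To make the setup concrete, the sketch below shows what a single annotated exchange from a tutoring dialogue might look like. The dialogue text, field names, and knowledge component labels are invented for illustration and are not drawn from the paper or its dataset.

```python
# Hypothetical annotated excerpt from a tutor-student dialogue.
# Each student turn carries correctness and knowledge component (KC) tags,
# which a knowledge tracing method can consume to estimate mastery.
annotated_dialogue = [
    {"speaker": "tutor",   "text": "Can you factor x^2 - 9 for me?"},
    {"speaker": "student", "text": "(x - 3)(x + 3)",
     "correct": True,  "knowledge_components": ["difference of squares"]},
    {"speaker": "tutor",   "text": "Great. Now try x^2 - 5x + 6."},
    {"speaker": "student", "text": "(x - 5)(x - 1)?",
     "correct": False, "knowledge_components": ["factoring trinomials"]},
]
```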

Implementation: Large Language Models (LLMs) are used to power tutoring chatbots and to analyze dialogues. They are adapted through fine-tuning and prompt engineering to follow pedagogical strategies. The systems implement a two-step process in which dialogue turns are annotated with correctness and knowledge component tags, enabling mastery learning through knowledge tracing methods.
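As a rough illustration of that two-step process, the sketch below first asks an LLM to tag a student turn with correctness and knowledge component labels, then folds those tags into a simple per-KC mastery estimate used to predict whether the next response will be correct. The prompt wording, the function names (annotate_turn, update_mastery, predict_next_correct), the gpt-4o-mini model choice, and the mastery-update rule are all illustrative assumptions layered on the current OpenAI Python SDK; they are not the paper's method or prompts.

```python
# Minimal sketch of the two-step dialogue annotation + knowledge tracing
# pipeline described above. Names, prompts, and the mastery-update rule are
# illustrative assumptions, not the authors' implementation.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ANNOTATION_PROMPT = (
    "You are given one student turn from a math tutoring dialogue.\n"
    "Return JSON with fields:\n"
    '  "correct": true/false/null (null if the turn contains no attempt),\n'
    '  "knowledge_components": list of short skill names the turn exercises.\n'
    "Dialogue context:\n{context}\n\nStudent turn:\n{turn}"
)

def annotate_turn(context: str, turn: str) -> dict:
    """Step 1: ask an LLM to tag a student turn with correctness and KC labels."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user",
                   "content": ANNOTATION_PROMPT.format(context=context, turn=turn)}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def update_mastery(mastery: dict, annotation: dict, lr: float = 0.3) -> dict:
    """Step 2 (simplified): nudge each tagged KC's mastery estimate toward the
    observed correctness, standing in for a learned knowledge tracing model."""
    if annotation["correct"] is None:
        return mastery
    target = 1.0 if annotation["correct"] else 0.0
    for kc in annotation["knowledge_components"]:
        prior = mastery.get(kc, 0.5)
        mastery[kc] = prior + lr * (target - prior)
    return mastery

def predict_next_correct(mastery: dict, kcs: list[str]) -> float:
    """Predict correctness of the next response as the mean mastery of its KCs."""
    return sum(mastery.get(kc, 0.5) for kc in kcs) / max(len(kcs), 1)
```

In practice a dialogue would be processed turn by turn: annotate_turn is called on each student turn, update_mastery folds the result into the running per-KC estimates, and predict_next_correct then reflects the full history seen so far.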

Outcomes: These implementations lead to improved access to high-quality education, effective tutoring strategies, significant improvements in predicting student response correctness, and enhanced tracking of student knowledge through dialogue analysis.

Challenges: Challenges include existing LLM-based systems' focus on tutor turns, which complicates the analysis of student responses; the short length of dialogues, which limits the historical context available for knowledge estimation; and the added complexity of multiple knowledge components per dialogue turn.

Implementation Barriers

Technical Barrier

Challenges in handling the open-ended nature of dialogues and the noise in student responses.

Proposed Solutions: Developing more sophisticated models that can effectively parse and analyze student discourse.

Data Barrier

Limited availability of large-scale tutoring dialogue datasets linked over multiple interactions.

Proposed Solutions: Encouraging researchers to release and utilize larger datasets for training and evaluating KT methods.

Project Team

Alexander Scarlatos

Researcher

Ryan S. Baker

Researcher

Andrew Lan

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Alexander Scarlatos, Ryan S. Baker, Andrew Lan

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
