
Using Large Language Models to Provide Explanatory Feedback to Human Tutors

Project Overview

This project explores how generative AI, specifically large language models (LLMs), can enhance educational practice by giving novice tutors real-time feedback. It emphasizes the value of immediate, explanatory feedback for effective learning, particularly in online lessons on delivering praise. The research combines two methods: binary classification to distinguish effective from ineffective praise responses, and named entity recognition (NER) to support generating tailored feedback for tutors. By leveraging these AI-driven approaches, the study aims to improve tutor performance and support tutor development, contributing to more effective teaching strategies and better learning outcomes for students. The findings illustrate how generative AI can aid the refinement of tutoring skills and enhance the overall educational experience.

Key Applications

Real-time explanatory feedback for novice tutors

Context: Online lessons for novice tutors, including community volunteers and college students.

Implementation: Utilizing LLMs for binary classification of tutor responses (effective vs. ineffective praise) and NER for generating explanatory feedback.

Outcomes: Improved accuracy in identifying effective praise types, enhanced tutor learning experiences, and the potential for real-time feedback that supports novice tutors in their professional development.

Challenges: Limited dataset for training models, difficulty in capturing nuanced tutor responses, and the need for accurate labeling of tutor praise types.
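The binary-classification step above can be sketched as a prompt-and-parse loop around an LLM call. The prompt wording, label set, and `call_llm` stub below are illustrative assumptions, not the authors' actual prompt or model interface:

```python
# Minimal sketch of LLM-based binary classification of tutor praise.
# The prompt text and labels are assumptions; call_llm stands in for a
# real LLM API call (e.g. a chat-completion request).

LABELS = ("effective", "ineffective")

def build_prompt(tutor_response: str) -> str:
    """Assemble a classification prompt for one tutor praise response."""
    return (
        "You are grading tutor praise. Classify the response below as "
        "'effective' (specific, effort-focused praise) or 'ineffective' "
        "(generic or outcome-only praise). Reply with one word.\n\n"
        f"Tutor response: {tutor_response}"
    )

def parse_label(llm_output: str) -> str:
    """Map raw model text onto one of the two labels; default to 'ineffective'."""
    text = llm_output.strip().lower()
    return "effective" if text.startswith("effective") else "ineffective"

def classify(tutor_response: str, call_llm) -> str:
    """Run one tutor response through the (stubbed) LLM classifier."""
    return parse_label(call_llm(build_prompt(tutor_response)))

# Example with a stub in place of a real model:
label = classify("Great job sticking with that hard problem!",
                 call_llm=lambda prompt: "Effective")
```

Keeping prompt construction and label parsing separate makes the classifier easy to test offline and to swap between model versions.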

Implementation Barriers

Technical Barrier

The model's performance is hindered by insufficient training data and the complexity of tutor responses.

Proposed Solutions: Data augmentation techniques and collecting more real-world data to improve model training.
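One simple form of text data augmentation is word dropout: generate paraphrase-like training variants by randomly removing a small fraction of words. This is a generic sketch, not the augmentation technique the authors used:

```python
import random

def augment(text: str, drop_prob: float = 0.1, seed: int = 0) -> str:
    """Word-dropout augmentation: randomly drop a small fraction of words
    to create a training variant; very short texts are kept intact."""
    rng = random.Random(seed)  # seeded for reproducibility
    words = text.split()
    kept = [w for w in words if len(words) <= 3 or rng.random() > drop_prob]
    return " ".join(kept)

def augment_dataset(examples, n_variants=2):
    """Expand a list of (text, label) pairs with label-preserving variants."""
    out = []
    for text, label in examples:
        out.append((text, label))
        for i in range(n_variants):
            out.append((augment(text, seed=i), label))
    return out
```

Because the variants keep the original label, the augmented set can be fed directly to the same training pipeline as the original data.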

Implementation Barrier

Difficulty in managing low-confidence predictions from the model, which could undermine user trust.

Proposed Solutions: Designing feedback interfaces that transparently communicate the model's confidence levels and allowing tutors to engage critically with the feedback.
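One way to surface model confidence in the interface is to band each prediction and soften the wording below a threshold. The threshold value and message wording here are illustrative assumptions, not the authors' design:

```python
def route_feedback(label: str, confidence: float, threshold: float = 0.7) -> dict:
    """Attach a confidence band to a prediction so the interface can signal
    uncertainty instead of presenting every label as authoritative."""
    if confidence >= threshold:
        band = "high"
        message = f"The response looks {label}."
    else:
        band = "low"
        message = (f"The model tentatively read this as {label} "
                   f"(confidence {confidence:.0%}); please judge for yourself.")
    return {"label": label, "band": band, "message": message}
```

Low-confidence messages invite the tutor to engage critically with the feedback rather than accept it outright, which is the trust-preserving behavior the proposal describes.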

Project Team

Jionghao Lin

Researcher

Danielle R. Thomas

Researcher

Feifei Han

Researcher

Shivang Gupta

Researcher

Wei Tan

Researcher

Ngoc Dang Nguyen

Researcher

Kenneth R. Koedinger

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Jionghao Lin, Danielle R. Thomas, Feifei Han, Shivang Gupta, Wei Tan, Ngoc Dang Nguyen, Kenneth R. Koedinger

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
