
Beyond Answers: Large Language Model-Powered Tutoring System in Physics Education for Deep Learning and Precise Understanding

Project Overview

This document focuses on the application of generative AI, specifically large language models (LLMs), in education. It presents Physics-STAR, an LLM-powered tutoring system designed to personalize physics learning for high school students. Physics-STAR utilizes a Situation-Task-Action-Result (STAR) framework to guide the LLM, enabling knowledge explanation, error analysis, and review suggestions. The study, involving high school sophomores, demonstrated that Physics-STAR significantly improved student performance and learning efficiency, particularly on complex, information-rich questions. The paper underscores the potential of LLMs to enhance education through personalized learning experiences, while also stressing the importance of student-centered pedagogical practices, effective error analysis, and the need to balance AI integration with the acquisition of fundamental skills.

Key Applications

Physics-STAR (LLM-powered tutoring system)

Context: High school physics education, specifically focusing on gravity and spaceflight concepts.

Implementation: Physics-STAR uses the STAR framework to guide the LLM through three steps: knowledge explanation, error analysis, and review suggestion. The system was tested in a controlled experiment with three groups: teacher lecture, general LLM tutoring, and Physics-STAR LLM tutoring. The Physics-STAR LLM tutoring used the GPT-4o model.
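The three-step flow described above can be sketched as a prompt-construction routine. This is an illustrative assumption, not the authors' actual implementation: the field names, wording, and the `build_star_prompt` helper are hypothetical, and a real system would send each prompt to an LLM (the paper used GPT-4o).

```python
# Hypothetical sketch of a STAR-style prompt builder for an LLM tutoring
# system. The structure follows the Situation-Task-Action-Result framing
# described in the paper; the exact wording is an assumption.

def build_star_prompt(situation: str, task: str, action: str, result: str) -> str:
    """Assemble one Situation-Task-Action-Result prompt for the LLM."""
    return (
        f"Situation: {situation}\n"
        f"Task: {task}\n"
        f"Action: {action}\n"
        f"Result: {result}\n"
    )

# The paper's three tutoring steps, applied in order.
STEPS = ("knowledge explanation", "error analysis", "review suggestion")

def tutoring_prompts(topic: str, student_answer: str) -> list[str]:
    """Build one STAR prompt per tutoring step (illustrative wording)."""
    prompts = []
    for step in STEPS:
        prompts.append(build_star_prompt(
            situation=f"A high school student is studying {topic}.",
            task=f"Carry out the '{step}' step of the tutoring session.",
            action=f"Consider the student's answer: {student_answer!r}",
            result="Reply with a concise, pedagogically sound explanation.",
        ))
    return prompts

prompts = tutoring_prompts("gravity and spaceflight", "g = 9.8 m/s^2 on the Moon")
print(len(prompts))  # 3
```

Keeping each step in its own prompt mirrors the paper's sequencing: the model is steered through explanation, error analysis, and review rather than asked for a single monolithic answer.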

Outcomes: Physics-STAR increased students' average scores and efficiency on conceptual, computational, and informational questions. Students' average scores on complex information problems increased by 100% and their efficiency increased by 5.95%.

Challenges: The paper focuses on the potential of the tool and the outcomes of the trial. No specific challenges of the tool are mentioned.

Implementation Barriers

Challenge in current LLM-powered tutoring systems

Current LLM-assisted systems often fall short of the specific requirements of physics education, producing guidance that looks plausible but is wrong: misinterpreted concepts, calculation errors, and information incorrectly extracted from diagrams. These systems also tend to retrieve direct answers without offering an in-depth analysis of the causes of errors, so students may focus on obtaining correct results rather than understanding the underlying processes and theories.

Proposed Solutions: The Physics-STAR framework uses prompt engineering to generate LLM inputs tailored to personalized physics learning environments. LLM-powered tutoring systems must evolve to emphasize error analysis and critical thinking: instead of simply presenting the correct answer, the AI can guide students through a step-by-step breakdown of their errors, highlighting misconceptions and providing explanations that reinforce conceptual understanding. AI educational tools should reinforce the basics rather than replace them, augmenting teacher-led lectures with supplementary practice focused on fundamental skills and concepts. They should also encourage students to work through problems manually before turning to automated solutions, ensuring that students engage deeply with the material and develop a robust foundational understanding.
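The "manual work before automated solutions" policy above can be expressed as a simple gating rule. This is a minimal sketch under stated assumptions: the `TutoringSession` class and its method names are hypothetical, and the placeholder reply stands in for a real call to an LLM with an error-analysis prompt.

```python
# Illustrative sketch (not the paper's implementation) of an answer-gating
# policy: the tutor withholds help until the student submits their own
# working, then responds with error analysis rather than a bare answer.

from dataclasses import dataclass, field

@dataclass
class TutoringSession:
    attempts: list[str] = field(default_factory=list)

    def submit_attempt(self, work: str) -> None:
        """Record the student's manual attempt at the problem."""
        self.attempts.append(work)

    def request_help(self) -> str:
        """Refuse direct answers until at least one attempt exists."""
        if not self.attempts:
            return "Please show your own working first."
        # A real system would build an error-analysis prompt from the
        # latest attempt and send it to the LLM here.
        return f"Let's analyze your attempt step by step: {self.attempts[-1]!r}"

session = TutoringSession()
print(session.request_help())            # asks for working first
session.submit_attempt("F = m*a, so a = 2 m/s^2")
print(session.request_help())            # now offers error analysis
```

The gate is deliberately simple: the pedagogical value comes from forcing an attempt before any automated solution, which is the behavior the proposed solutions call for.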

Project Team

Zhoumingju Jiang

Researcher

Mengjun Jiang

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Zhoumingju Jiang, Mengjun Jiang

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gemini-2.0-flash-lite