Automatic Large Language Models Creation of Interactive Learning Lessons
Project Overview
The paper explores the use of large language models (LLMs) in education, specifically for generating interactive, scenario-based lessons that train novice tutors in middle school mathematics. It highlights a task decomposition prompting strategy that substantially improves the quality of the AI-generated lessons. A comparison between LLM-generated and human-authored lessons reveals notable advantages, including substantial time savings and a wider variety of training scenarios. However, the study also points out limitations, such as the tendency of the AI to provide generic feedback and lapses in clarity in the generated content. The findings support a promising role for generative AI in scaling tutor training programs, demonstrating its potential to augment educational practice while underscoring the need for human oversight and refinement to ensure quality and effectiveness.
Key Applications
Automatic generation of interactive, scenario-based lessons for tutor training.
Context: Training novice tutors who teach middle school mathematics online.
Implementation: A retrieval-augmented generation (RAG) approach with GPT-4o is used to create structured lessons segment by segment (a hedged sketch of this decomposition follows this list).
Outcomes: Enhanced lesson quality through task decomposition, effective training content that meets pedagogical needs, and time savings in lesson design.
Challenges: Generic feedback provided by AI, lack of clarity in instructional sections, and authenticity issues with citations.
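The sketch below illustrates the general idea of task-decomposition prompting for lesson generation: the model is asked for one lesson segment at a time, with retrieved reference material and the segments generated so far supplied as context. It is a minimal sketch assuming the OpenAI Python client; the segment names, prompt wording, and retrieval source are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of task-decomposition prompting for lesson generation.
# Segment names, prompt wording, and retrieved_context are hypothetical;
# they do not reproduce the authors' actual prompts or RAG sources.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SEGMENTS = ["introduction", "scenario", "multiple-choice question",
            "feedback for each option", "wrap-up"]

def generate_lesson(skill: str, retrieved_context: str) -> dict:
    """Generate one lesson segment per call instead of one monolithic prompt."""
    lesson = {}
    for segment in SEGMENTS:
        prompt = (
            f"You are designing a scenario-based training lesson for novice "
            f"tutors of middle school mathematics on the skill: {skill}.\n"
            f"Reference material:\n{retrieved_context}\n"
            f"Segments written so far:\n{lesson}\n"
            f"Now write only the '{segment}' segment."
        )
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        lesson[segment] = response.choices[0].message.content
    return lesson
```

Decomposing the lesson into segments gives the model a smaller, better-constrained task per call, which is the kind of structuring the reported quality improvement is attributed to.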
Implementation Barriers
Content Quality
LLM-generated feedback often lacks specificity, providing generic responses rather than targeted feedback for incorrect answers.
Proposed Solutions: Incorporate more nuanced explanations for every answer option and enhance the instructional content with clearer guidelines (see the prompt sketch below).
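One way to push against generic feedback is to make option-specific explanations an explicit requirement of the prompt. The fragment below is a hypothetical instruction block, not the authors' prompt; the exact wording is an assumption.

```python
# Hypothetical prompt fragment requiring targeted feedback for every option,
# including the distractors; wording is illustrative only.
FEEDBACK_INSTRUCTIONS = """
For the multiple-choice question above, write feedback for EVERY option:
- For the correct option, explain why that tutoring move works in this scenario.
- For each incorrect option, name the specific misconception or unhelpful habit
  it reflects, and point the trainee back to the relevant part of the scenario.
Avoid generic phrases such as "Good job" or "Try again".
"""
```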
Clarity and Structure
Inconsistencies in terminology and lack of logical flow in generated lessons can confuse novice tutors.
Proposed Solutions: Refine prompting strategies to improve coherence and clarity in the generated lesson content (a complementary automated check is sketched below).
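Beyond prompt refinement, a lightweight automated check can flag structural and terminological inconsistencies before a lesson reaches a human reviewer. This is a complementary technique sketched here, not something described in the paper; the required section order and glossary are assumptions.

```python
# Illustrative coherence check: flag missing or out-of-order sections and
# inconsistent terminology in a generated lesson. Section names and the
# preferred-term glossary are hypothetical.
REQUIRED_ORDER = ["Introduction", "Scenario", "Question", "Feedback", "Wrap-up"]
PREFERRED_TERMS = {"student": ["learner", "pupil"], "tutor": ["instructor", "coach"]}

def structure_issues(lesson_text: str) -> list[str]:
    issues = []
    last_index = -1
    for heading in REQUIRED_ORDER:
        index = lesson_text.find(heading)
        if index == -1:
            issues.append(f"missing section: {heading}")
        elif index < last_index:
            issues.append(f"section out of order: {heading}")
        else:
            last_index = index
    lowered = lesson_text.lower()
    for preferred, variants in PREFERRED_TERMS.items():
        for variant in variants:
            if variant in lowered:
                issues.append(f"inconsistent term '{variant}' (use '{preferred}')")
    return issues
```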
Authenticity
Issues with the authenticity of academic references generated by LLMs.
Proposed Solutions: Require manual verification of citations and references by human lesson designers.
Project Team
Jionghao Lin
Researcher
Jiarui Rao
Researcher
Yiyang Zhao
Researcher
Yuting Wang
Researcher
Ashish Gurung
Researcher
Amanda Barany
Researcher
Jaclyn Ocumpaugh
Researcher
Ryan S. Baker
Researcher
Kenneth R. Koedinger
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Jionghao Lin, Jiarui Rao, Yiyang Zhao, Yuting Wang, Ashish Gurung, Amanda Barany, Jaclyn Ocumpaugh, Ryan S. Baker, Kenneth R. Koedinger
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI