Challenges and Opportunities of Moderating Usage of Large Language Models in Education
Project Overview
The document explores the integration of generative AI, particularly large language models (LLMs) like ChatGPT, in the educational context of physics, highlighting both the challenges and opportunities presented by their use. It underscores the importance of moderated engagement with these AI tools to foster critical thinking and reflection among students. To this end, two moderated applications, the Multi-Response Bot and the Hint Bot, were developed and tested in comparison to traditional search engines and unmoderated versions of ChatGPT. Findings from the testing revealed that moderated interaction with these AI tools can enhance student performance and promote deeper reflection on the material. However, the document also notes persistent challenges related to usability and the students' understanding of how to effectively engage with these AI resources. Overall, the research suggests that while generative AI has the potential to enrich educational experiences, careful moderation and support are essential to maximize its benefits.
Key Applications
Interactive Chatbot
Context: Students answering physics questions using various chatbot formats for assistance, including hints and multiple responses.
Implementation: Students interacted with various versions of chatbots designed to assist with physics questions. This included a multi-response bot providing three different answers, a hint bot offering guidance without direct answers, and a classic unmoderated bot that allowed free-form interaction. These bots served to enhance students' understanding and problem-solving skills in physics.
Outcomes:
- Increased critical thinking and reflection among students.
- Encouraged students to reflect on their problem-solving strategies.
- Mixed performance compared to traditional search engines, with the Multi-Response Bot coming closest to search-engine results.
Challenges:
- Limited usability and potential confusion about the purpose of the bot.
- Students' expectation of receiving direct answers led to frustration with the Hint Bot.
- Overreliance on the unmoderated bot resulted in poor performance and unreflective use.
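The paper does not publish its bot implementations, but the three interaction modes described above can be sketched as a thin moderation layer around a generic model call. Everything below is illustrative: the function names, the hint wording, and the `query_llm` callable are assumptions, not the authors' code.

```python
from typing import Callable, List

def moderate(question: str, mode: str, query_llm: Callable[[str], str]) -> List[str]:
    """Route a student question through one of the three bot modes
    described in the study (names and prompt wording are illustrative)."""
    if mode == "multi_response":
        # Multi-Response Bot: return three independent answers so the
        # student must compare and judge them rather than copy one.
        return [query_llm(question) for _ in range(3)]
    if mode == "hint":
        # Hint Bot: instruct the model to guide without revealing the answer.
        prompt = ("Give a hint that helps a student solve the following "
                  "physics problem, but do NOT state the final answer:\n"
                  + question)
        return [query_llm(prompt)]
    if mode == "classic":
        # Classic (unmoderated) bot: pass the question through verbatim.
        return [query_llm(question)]
    raise ValueError(f"unknown mode: {mode}")

# Usage with a stand-in model (replace with a real LLM call):
fake_llm = lambda prompt: "response to: " + prompt
answers = moderate("What is the period of a 1 m pendulum?", "multi_response", fake_llm)
```

Keeping the moderation logic outside the model call makes the conditions easy to compare in a study setting, since only the wrapper differs between groups.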
Traditional Search Engine
Context: Students answering physics questions using traditional search engines to find answers.
Implementation: Students utilized traditional search engines for finding answers to physics questions, which required formulating precise queries.
Outcomes: Highest performance in answering questions among the conditions tested.
Challenges: The need for precise query formulation can be a barrier for some students.
Implementation Barriers
Usability Barrier
Moderated tools like the Multi-Response Bot and Hint Bot received lower usability scores than traditional search engines. Students expected direct answers from the Hint Bot, which led to frustration and ineffective use of the hints.
Proposed Solutions: Improving the user interface and providing clear instructions or toggles for expected outputs. Incorporating features to toggle between hint and answer modes to align with user expectations.
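One way to realize the proposed hint/answer toggle is to let a UI switch select the system prompt sent to the model. This is a minimal sketch of that idea, assuming a chat-style message format; the prompt wording and function name are illustrative, not from the paper.

```python
from typing import Dict, List

def build_prompt(question: str, want_answer: bool) -> List[Dict[str, str]]:
    """Build a chat-style message list whose system prompt depends on the
    state of a hint/answer toggle (wording is an assumption)."""
    if want_answer:
        system = ("You are a physics tutor. Explain the full solution "
                  "step by step.")
    else:
        system = ("You are a physics tutor. Offer one guiding hint, and "
                  "never reveal the final answer.")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Usage: the toggle state decides which behavior the student gets.
messages = build_prompt("Why does a pendulum's period not depend on mass?",
                        want_answer=False)
```

Exposing the toggle explicitly aligns the bot's behavior with the student's expectation, which is exactly the mismatch the usability barrier describes.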
Overreliance Barrier
Students using the Classic Bot displayed a tendency to copy and paste answers without reflection.
Proposed Solutions: Encouraging reflection through prompts and designing interactions that require engagement with the material.
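An interaction that requires engagement before releasing an answer could be as simple as gating the model's reply on a short student reflection. The sketch below is one possible design, not the authors' implementation; the word-count threshold and wording are illustrative assumptions.

```python
def gated_reply(answer: str, student_reflection: str, min_words: int = 15) -> str:
    """Release the model's answer only after the student has written a
    short reflection (the threshold is an illustrative design choice)."""
    if len(student_reflection.split()) < min_words:
        return ("Before I show the solution, briefly explain in your own "
                "words how you would approach this problem.")
    return answer

# Usage: a too-short reflection triggers a prompt instead of the answer.
reply = gated_reply("T = 2*pi*sqrt(L/g)", "not sure")
```

A gate like this directly targets the copy-paste behavior observed with the Classic Bot, since the answer cannot be obtained without producing some reasoning first.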
Project Team
Lars Krupp
Researcher
Steffen Steinert
Researcher
Maximilian Kiefer-Emmanouilidis
Researcher
Karina E. Avila
Researcher
Paul Lukowicz
Researcher
Jochen Kuhn
Researcher
Stefan Küchemann
Researcher
Jakob Karolus
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Lars Krupp, Steffen Steinert, Maximilian Kiefer-Emmanouilidis, Karina E. Avila, Paul Lukowicz, Jochen Kuhn, Stefan Küchemann, Jakob Karolus
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI