
From Intuition to Understanding: Using AI Peers to Overcome Physics Misconceptions

Project Overview

This project explores the use of generative AI in education through an AI Peer designed to help students address misconceptions in physics. In a controlled study, students who engaged with the AI Peer significantly improved their comprehension of Newtonian mechanics, scoring higher on a post-test than a control group. Presenting the AI as a fallible peer rather than an authoritative tutor fostered a more interactive and personalized learning experience. Alongside these positive outcomes, the study raises concerns about the accuracy of AI-generated responses and stresses that students need digital literacy skills to evaluate AI outputs effectively. Overall, the findings suggest that generative AI can enhance learning in subjects like physics, but realizing that benefit requires attention to its limitations and to students' critical assessment skills.

Key Applications

AI Peer

Context: Undergraduate physics students in an introductory course

Implementation: Students interacted with an AI designed to correct misconceptions from a pre-test on Newtonian mechanics. The AI was presented as a peer that could answer questions incorrectly.

Outcomes: The treatment group showed a statistically significant improvement in post-test scores, averaging 10.5 percentage points higher than the control group. Students rated 91% of AI interactions as helpful.

Challenges: The AI answered up to 40% of questions incorrectly and sometimes failed to address specific misconceptions adequately.
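The fallible-peer framing described above could be realized as a system prompt sent with each student question. A minimal sketch in Python follows; the prompt wording, function name, and message structure are illustrative assumptions, not the study's actual implementation:

```python
# Sketch of configuring an "AI Peer" persona for a chat-completion API.
# The persona text below is a hypothetical example of framing the model
# as a fallible fellow student rather than an authoritative tutor.

def build_ai_peer_messages(student_question: str) -> list[dict]:
    """Assemble a chat payload that frames the model as a study peer."""
    system_prompt = (
        "You are a fellow physics student, not a tutor. "
        "Discuss Newtonian mechanics conversationally, reason aloud, "
        "and admit uncertainty. You may sometimes be wrong, so invite "
        "the student to check your reasoning against their own."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": student_question},
    ]

messages = build_ai_peer_messages(
    "If I throw a ball straight up, what forces act on it at the top?"
)
print(messages[0]["role"])  # system
```

A payload like this could then be passed to the model named in the contact details (gpt-4o-mini-2024-07-18); the key design choice is that the system prompt licenses error and uncertainty, which is what encourages students to evaluate the AI's answers critically rather than accept them.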

Implementation Barriers

Accuracy

Generative AI can provide inaccurate answers and hallucinate information, which may mislead students.

Proposed Solutions: Positioning the AI as a peer rather than an authoritative source, encouraging critical evaluation of AI responses.

Digital Literacy

Students may lack the necessary skills to critically assess AI-generated content, leading to potential misunderstandings.

Proposed Solutions: Teaching AI literacy to help students recognize and evaluate potential errors in AI outputs.

Project Team

Researchers: Ruben Weijers, Denton Wu, Hannah Betts, Tamara Jacod, Yuxiang Guan, Vidya Sujaya, Kushal Dev, Toshali Goel, William Delooze, Reihaneh Rabbany, Ying Wu, Jean-François Godbout, Kellin Pelrine

Contact Information

For information about the paper, please contact the authors.

Authors: Ruben Weijers, Denton Wu, Hannah Betts, Tamara Jacod, Yuxiang Guan, Vidya Sujaya, Kushal Dev, Toshali Goel, William Delooze, Reihaneh Rabbany, Ying Wu, Jean-François Godbout, Kellin Pelrine

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
