
The Impact of AI in Physics Education: A Comprehensive Review from GCSE to University Levels

Project Overview

This review examines the role of generative AI, particularly large language models (LLMs), in physics education from GCSE to university level. It assesses the performance of these models on physics exam questions, finding that they handle simpler questions well but that their accuracy declines markedly on more advanced material. The findings underscore the need for caution with AI-generated content: LLMs can produce answers that sound credible yet are incorrect, especially on advanced topics. The review therefore recommends that educators be transparent with students about AI's limitations, guard against overreliance, and revise assessment strategies to address the risks posed by non-invigilated, AI-assisted tasks. Overall, generative AI holds real potential in educational settings, but it must be applied thoughtfully to support effective learning.

Key Applications

Assessment of LLMs on Physics exam questions

Context: Physics education at GCSE, A-Level, and introductory university levels

Implementation: The LLMs were tested on 1,337 exam questions using several prompting techniques, including zero-shot prompting, in-context learning, and confirmatory checking (a sketch of these styles follows the Challenges item below).

Outcomes: Average scores were 83.4% for GCSE, 63.8% for A-Level, and 37.4% for university-level questions. LLMs showed proficiency in writing physics essays and coding.

Challenges: LLMs struggled with advanced questions and complex calculations, and they proved unreliable as markers, agreeing with human markers only 50.8% of the time on average (see the concordance sketch below).
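
The paper's exact prompting code is not reproduced here, but a minimal sketch of the three styles, assuming the OpenAI Python client (v1.x) and the gpt-4o-mini model listed under Contact Information, might look as follows; the question text and worked example are illustrative placeholders, not the study's materials.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini-2024-07-18"

def ask(messages):
    """Send one chat request and return the text of the model's reply."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content

question = "A ball is dropped from rest at 20 m. Ignoring air resistance, find its speed on impact."

# Zero-shot: the question alone, with no worked examples.
zero_shot = ask([{"role": "user", "content": question}])

# In-context learning: a worked example precedes the target question.
example = ("Q: A ball is dropped from rest at 5 m. Find its impact speed.\n"
           "A: v = sqrt(2gh) = sqrt(2 x 9.81 x 5) = 9.9 m/s")
in_context = ask([{"role": "user", "content": example + "\n\nQ: " + question + "\nA:"}])

# Confirmatory checking: the model is asked to check and, if needed,
# revise its own first answer.
confirmed = ask([
    {"role": "user", "content": question},
    {"role": "assistant", "content": zero_shot},
    {"role": "user", "content": "Check your working step by step and give your final answer."},
])

In a study like this, each technique would be run over the full question set and the marked scores averaged per level.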
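
For the marking-concordance figure quoted under Challenges, a minimal sketch of how such a score can be computed, using made-up placeholder marks rather than the study's data:

# Concordance: the share of answers where the LLM-awarded mark
# matches the human mark. These marks are illustrative placeholders.
human_marks = [3, 2, 5, 1, 4, 0, 2, 3]
llm_marks = [3, 1, 5, 2, 4, 1, 2, 0]

matches = sum(h == l for h, l in zip(human_marks, llm_marks))
concordance = 100 * matches / len(human_marks)
print(f"Concordance: {concordance:.1f}%")  # 50.0% for these placeholders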

Implementation Barriers

Technical Limitations

LLMs struggle with calculations, and their accuracy falls further as the numerical complexity of a problem increases.

Proposed Solutions: Educators should use AI for simpler tasks and not rely on it for complex mathematics.

Misalignment of Expectations

LLMs generate plausible-sounding but often incorrect responses, which can lead students to over-rely on them.

Proposed Solutions: Educators should discuss AI capabilities and limitations transparently with students.

Assessment Integrity

Non-invigilated assessments are vulnerable to automated completion by LLMs, which undermines academic integrity.

Proposed Solutions: Redesign assessments to reduce reliance on non-invigilated tasks that AI can easily complete.

Project Team

Will Yeadon

Researcher

Tom Hardy

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Will Yeadon, Tom Hardy

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
