The Lazy Student's Dream: ChatGPT Passing an Engineering Course on Its Own
Project Overview
This document examines the use of Large Language Models (LLMs), particularly ChatGPT, in an undergraduate control systems course (AE 353) at the University of Illinois Urbana-Champaign. It assesses LLM performance across various assessment types and finds that generative AI can help students understand complex concepts and can provide personalized feedback, supporting a more interactive learning environment. At the same time, the study raises concerns about academic integrity and about whether traditional assessment methods remain effective given AI's capabilities. It suggests that institutions can improve educational outcomes by integrating AI tools, but must reevaluate assessment strategies to maintain academic standards. Overall, the document presents AI as a transformative opportunity for engineering education and advocates a balanced approach that maximizes benefits while addressing ethical concerns.
Key Applications
ChatGPT used to complete coursework with minimal effort
Context: Undergraduate Aerospace Control Systems course (AE 353) students at University of Illinois Urbana-Champaign
Implementation: LLM performance was evaluated on 115 course deliverables (homework assignments, midterms, and programming projects) using a minimal-effort protocol.
Outcomes: ChatGPT achieved an overall score of 82.24% (a B grade), showing strength on structured assignments but weakness on open-ended projects.
Challenges: Limitations in understanding complex theoretical concepts and generating optimal programming solutions.
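The minimal-effort protocol described above can be sketched as a simple harness: each deliverable's prompt is submitted to the model as-is, the response is graded, and points are tallied into an overall percentage. This is an illustrative sketch, not the authors' actual code; the names `Deliverable`, `evaluate_course`, and the stubbed model and grader functions are assumptions.

```python
# Hedged sketch of a "minimal effort" evaluation protocol: each deliverable's
# prompt goes to the LLM unmodified, and earned points are aggregated into an
# overall course percentage. All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Deliverable:
    prompt: str        # assignment text, passed to the model verbatim
    max_points: float  # points available for this deliverable


def evaluate_course(
    deliverables: List[Deliverable],
    ask_model: Callable[[str], str],             # e.g. a wrapper around a chat API
    grade: Callable[[str, Deliverable], float],  # returns points earned
) -> float:
    """Return the overall course percentage under the minimal-effort protocol."""
    earned = 0.0
    available = 0.0
    for d in deliverables:
        response = ask_model(d.prompt)  # no prompt engineering, no retries
        earned += grade(response, d)
        available += d.max_points
    return 100.0 * earned / available


if __name__ == "__main__":
    # Stubbed model and grader, just to show the data flow.
    items = [
        Deliverable("HW1: derive the transfer function", 10),
        Deliverable("Project 1: design a controller", 20),
    ]
    pct = evaluate_course(
        items,
        ask_model=lambda p: "model answer",
        grade=lambda resp, d: 0.8 * d.max_points,
    )
    print(f"{pct:.2f}%")  # 80.00%
```

In practice `ask_model` would call a hosted chat API (the page lists gpt-4o-mini-2024-07-18) and `grade` would be a human grader applying the course rubric.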
Implementation Barriers
Academic Integrity
High LLM performance on coursework raises concerns that traditional evaluation methods may no longer reliably measure student understanding.
Proposed Solutions: Reconsider assessment strategies to distinguish genuine student understanding from LLM-generated work.
Project Team
Gokul Puthumanaillam
Researcher
Timothy Bretl
Researcher
Melkior Ornik
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Gokul Puthumanaillam, Timothy Bretl, Melkior Ornik
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI