Performance of ChatGPT on the US Fundamentals of Engineering Exam: Comprehensive Assessment of Proficiency and Potential Implications for Professional Environmental Engineering Practice

Project Overview

This study evaluates the capabilities of ChatGPT, a generative AI model, on the Fundamentals of Engineering (FE) Environmental Exam and explores its potential applications in engineering education. The study shows that minor adjustments to prompts can substantially improve ChatGPT's accuracy, and it identifies both the model's strengths in certain exam sections and its weaknesses, notably limited mathematical precision and risks to exam integrity. These findings indicate that generative AI can meaningfully support study habits and exam readiness, provided it is deployed carefully to mitigate those risks. Ultimately, the research suggests that while generative AI holds promise for educational advancement, it requires thoughtful integration to ensure effective and ethical use in academic settings.

Key Applications

ChatGPT for FE Environmental Exam preparation

Context: Undergraduate engineering students preparing for the FE Environmental Exam

Implementation: ChatGPT was tested on exam questions, with a focus on modifying prompts to enhance accuracy.

Outcomes: Achieved an overall accuracy of 66.42%, improved to 75.37% with refined prompts; demonstrated potential as a study tool.

Challenges: Limited performance in complex mathematical calculations and specific subject areas, potential for generating incorrect answers.
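The workflow described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' actual code: the prompt wording in refine_prompt and the scoring helper are assumptions made for clarity, but the accuracy metric matches the kind of percentage figures reported (e.g., 66.42% overall, 75.37% with refined prompts).

```python
# Hypothetical sketch of the study's workflow: wrap each exam question in a
# refined prompt, collect the model's answers, and score them against a key.
# The template text and helper names below are illustrative assumptions.

def refine_prompt(question: str, choices: list[str]) -> str:
    """Wrap a question in explicit instructions of the kind the study
    found to improve accuracy (step-by-step reasoning, single-letter answer)."""
    options = "\n".join(f"({c})" for c in choices)
    return (
        "You are answering a question from the FE Environmental Exam. "
        "Reason step by step, then state only the letter of the correct "
        "answer.\n\n"
        f"Question: {question}\nChoices:\n{options}"
    )

def accuracy(model_answers: list[str], answer_key: list[str]) -> float:
    """Percentage of the model's answers that match the answer key."""
    correct = sum(a == k for a, k in zip(model_answers, answer_key))
    return 100.0 * correct / len(answer_key)

# Example: 3 of 4 answers correct -> 75.0
print(accuracy(["A", "C", "B", "D"], ["A", "C", "B", "A"]))
```

In practice each refined prompt would be sent to the model via an API, and the extracted answer letters would be scored with a function like accuracy above.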

Implementation Barriers

Technical Limitations

ChatGPT struggles with complex mathematical problems and multi-step calculations, affecting its reliability in engineering contexts.

Proposed Solutions: Integrate computational tools and engineering software with AI to enhance problem-solving capabilities.

Integrity Risks

Concerns about cheating and lack of accountability when using AI in licensing exams.

Proposed Solutions: Develop AI-resistant exam questions and incorporate monitoring features to maintain exam integrity.

Project Team

Vinay Pursnani

Researcher

Yusuf Sermet

Researcher

Ibrahim Demir

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Vinay Pursnani, Yusuf Sermet, Ibrahim Demir

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI