
Student Mastery or AI Deception? Analyzing ChatGPT's Assessment Proficiency and Evaluating Detection Strategies

Project Overview

This project examines the role of generative AI, notably ChatGPT, in education, with a focus on computer science courses. ChatGPT can complete programming assignments with high accuracy, which has raised concerns about academic integrity and the need for better detection of AI-generated content. The research evaluates ChatGPT's performance across three courses (CS1, CS2, and Databases) and assesses the effectiveness of current plagiarism and AI detection tools. The findings show that reliably distinguishing student submissions from AI-generated code remains difficult, marking a critical area for future work in educational integrity and technological adaptation. Overall, the project underscores the dual potential of generative AI as both a valuable educational tool and a source of ethical dilemmas that institutions must navigate.

Key Applications

ChatGPT

Context: CS1, CS2, and Database courses; the target audience is computer science students.

Implementation: Students used ChatGPT to generate solutions for programming assignments across the courses.

Outcomes: ChatGPT achieved high accuracy in completing assignments, raising concerns about academic dishonesty and the integrity of assessments.

Challenges: AI-generated content is difficult to detect because AI responses vary from one generation to the next and current detection tools are limited.

Implementation Barriers

Detection Limitations

Existing plagiarism detection tools struggle to identify AI-generated code as they are primarily designed for static code comparison.
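To make the limitation concrete, here is a minimal sketch of the kind of static token-overlap comparison classic plagiarism detectors rely on. This is an illustrative assumption, not the method of any specific tool: exact copies score near 1.0, but a freshly generated AI solution to the same problem shares few tokens and scores low.

```python
import re

def token_overlap(code_a: str, code_b: str) -> float:
    """Jaccard similarity over identifier/keyword tokens of two code snippets."""
    tok_a = set(re.findall(r"[A-Za-z_]\w*", code_a))
    tok_b = set(re.findall(r"[A-Za-z_]\w*", code_b))
    if not (tok_a | tok_b):
        return 0.0
    # Shared tokens divided by all distinct tokens across both snippets.
    return len(tok_a & tok_b) / len(tok_a | tok_b)
```

Because each AI response uses different names and structure, this static similarity check never flags it against a reference solution, which is precisely the gap the paper identifies.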

Proposed Solutions: Improving detection methods by combining multiple approaches, utilizing heuristics, and incorporating instructor insights into detection strategies.
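The combined-heuristics idea could be sketched as follows. Everything here is a hypothetical illustration rather than the paper's actual method: the features (comment density, docstring coverage, descriptive identifiers) and the weights are assumptions that would need tuning against labeled student and AI submissions.

```python
import re

def heuristic_score(code: str) -> float:
    """Return a score in [0, 1]; higher means more AI-like under these toy heuristics."""
    lines = [ln for ln in code.splitlines() if ln.strip()]
    if not lines:
        return 0.0

    # Heuristic 1: comment density (AI output often annotates every step).
    comment_ratio = sum(1 for ln in lines if ln.strip().startswith("#")) / len(lines)

    # Heuristic 2: docstring coverage per function (hypothetical signal).
    funcs = re.findall(r"^\s*def\s+\w+", code, flags=re.M)
    docstring_ratio = min(1.0, (code.count('"""') / 2) / len(funcs)) if funcs else 0.0

    # Heuristic 3: share of long, descriptive identifiers vs. throwaway names.
    names = re.findall(r"\b[a-z_][a-z0-9_]*\b", code)
    long_name_ratio = sum(1 for n in names if len(n) > 8) / len(names) if names else 0.0

    # Assumed weights; in practice these would be fit to labeled data,
    # and instructor judgment would review anything the score flags.
    return 0.4 * comment_ratio + 0.3 * docstring_ratio + 0.3 * long_name_ratio
```

A score like this is only a triage signal: the proposed workflow routes high-scoring submissions to an instructor for review rather than treating the number as a verdict.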

Academic Integrity Concerns

The potential for students to misuse AI tools for completing assessments undermines the learning process.

Proposed Solutions: Adapting assessments to include more open-ended or complex questions that are less prone to AI completion.

Project Team

Kevin Wang

Researcher

Seth Akins

Researcher

Abdallah Mohammed

Researcher

Ramon Lawrence

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Kevin Wang, Seth Akins, Abdallah Mohammed, Ramon Lawrence

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
