
Assessing AI Detectors in Identifying AI-Generated Code: Implications for Education

Project Overview

This project examines the role of generative AI, particularly Large Language Models (LLMs) such as ChatGPT, in programming education, with an emphasis on the challenge of maintaining academic integrity. AI tools have a dual nature in the classroom: they can enhance learning experiences, but they also invite misuse by students. The central focus is the effectiveness of AI-generated content (AIGC) detectors, and the findings reveal that current detection mechanisms often fail to reliably distinguish human-written from AI-generated code.

This shortfall raises serious questions about the integrity of educational assessments and about educators' ability to evaluate students fairly. The findings point to an urgent need for more robust detection tools to mitigate academic dishonesty, while also leaving room to explore AI's positive applications in learning and programming skill development. Overall, the work underscores the need for educators to adapt to the evolving landscape of AI in education, balancing the benefits of the technology against the imperative to uphold academic integrity.

Key Applications

Assessment of AIGC Detectors for AI-generated code

Context: Programming education, targeting educators and students in Software Engineering and Computer Science courses.

Implementation: An empirical study was conducted using a dataset of 5,069 code samples, where various AIGC detectors were evaluated for their effectiveness in identifying AI-generated code.

Outcomes: Findings indicate that existing AIGC detectors perform poorly in distinguishing between human-written and AI-generated code, highlighting significant limitations and the need for improved detection tools.

Challenges: The main challenge is the detectors' inability to accurately identify AI-generated code, producing high rates of both false positives (human-written code flagged as AI-generated) and false negatives (AI-generated code passing as human-written). This raises concerns about the integrity of academic assessments.
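The false-positive and false-negative rates mentioned above are standard confusion-matrix quantities. The sketch below shows how such rates are computed when scoring a detector against labeled samples; the labels and predictions are made up for illustration and are not taken from the study's dataset:

```python
# Illustrative sketch: scoring an AIGC detector on labeled code samples.
# Labels: 1 = AI-generated, 0 = human-written. Predictions are what a
# (hypothetical, unreliable) detector returned for each sample.

def detection_error_rates(labels, predictions):
    """Return (false_positive_rate, false_negative_rate)."""
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    n_human = labels.count(0)  # denominator for false positives
    n_ai = labels.count(1)     # denominator for false negatives
    return fp / n_human, fn / n_ai

labels      = [0, 0, 0, 1, 1, 1, 1, 0]
predictions = [0, 1, 0, 1, 0, 0, 1, 0]  # made-up detector output
fpr, fnr = detection_error_rates(labels, predictions)
print(f"False positive rate: {fpr:.2f}")  # human code flagged as AI -> 0.25
print(f"False negative rate: {fnr:.2f}")  # AI code passing as human -> 0.50
```

A high value on either rate is harmful in an educational setting: false positives wrongly accuse students, while false negatives let AI-generated submissions pass undetected.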

Implementation Barriers

Technical Barrier

Existing AIGC detectors struggle with the distinct syntax and structure of programming languages, leading to ineffective detection of AI-generated code.

Proposed Solutions: Ongoing research is needed to enhance the accuracy and reliability of AIGC detectors, specifically tailored for programming code.

Ethical Barrier

The reliance on AIGC detectors raises concerns about academic integrity and the potential for students to deceive these systems.

Proposed Solutions: Educators should develop guidelines and policies for responsible AI use in education to mitigate risks associated with academic dishonesty.

Project Team

Wei Hung Pan, Researcher
Ming Jie Chok, Researcher
Jonathan Leong Shan Wong, Researcher
Yung Xin Shin, Researcher
Yeong Shian Poon, Researcher
Zhou Yang, Researcher
Chun Yong Chong, Researcher
David Lo, Researcher
Mei Kuan Lim, Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Wei Hung Pan, Ming Jie Chok, Jonathan Leong Shan Wong, Yung Xin Shin, Yeong Shian Poon, Zhou Yang, Chun Yong Chong, David Lo, Mei Kuan Lim

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
