
Contra generative AI detection in higher education assessments

Project Overview

The paper examines the role of generative AI in higher education, arguing for its integration into teaching and assessment rather than its policing through unreliable detection tools. It critically reviews current generative AI detection methods, highlighting their technical limitations and the ethical concerns they raise for academic integrity. The author proposes a paradigm shift: embracing generative AI as a valuable educational resource and designing robust, authentic assessment strategies that leverage the technology while preserving the integrity and authenticity of evaluations. By focusing on constructive applications of generative AI, the paper points to the potential for richer learning experiences and outcomes and calls for innovative approaches that embed the technology effectively in educational frameworks.

Key Applications

AI detection tools such as Turnitin, GPTZero, and Copyleaks

Context: Higher education assessments targeting educators and students

Implementation: Integration of AI detection tools into academic integrity policies

Outcomes: Attempted preservation of academic integrity, but with reported inaccuracies and biases

Challenges: Detection tools are vulnerable to manipulation and do not effectively distinguish between AI-generated and human-written content

Implementation Barriers

Technological barrier

AI detection tools show varying performance and are susceptible to manipulation techniques such as paraphrasing; an illustrative sketch follows this section.

Proposed Solutions: Development of more robust assessment methods that do not rely on AI detection.

Ethical and Social barrier

The reliance on AI detection tools creates a culture of mistrust and can lead to wrongful accusations of academic malpractice. Additionally, policies associating generative AI with academic malpractice disproportionately affect certain student groups, including non-native English speakers and international students.

Proposed Solutions: Encouraging transparency in AI usage, teaching students responsible AI practices, and creating equitable policies that promote responsible AI usage without penalizing students.
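To make the paraphrasing vulnerability concrete, the sketch below is a minimal, hypothetical illustration. It is not taken from the paper and does not use the real API of Turnitin, GPTZero, or Copyleaks: the detector_score function is a toy stand-in for a detector that returns a score between 0 and 1, and the threshold is arbitrary. The only point is that a fixed-threshold verdict can flip once the same content is reworded.

```python
# Minimal hypothetical sketch: why fixed-threshold AI detection is fragile.
# detector_score is a toy stand-in for a commercial detector's scoring API;
# real detectors use perplexity, burstiness, or trained classifiers.

def detector_score(text: str) -> float:
    """Toy 'detector': scores text by average word length, scaled to [0, 1]."""
    words = text.split()
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    # Longer average word length -> higher "AI-likeness" score (toy heuristic).
    return min(1.0, avg_len / 10.0)


THRESHOLD = 0.5  # hypothetical cut-off for flagging a submission

original = (
    "The utilisation of computational methodologies facilitates comprehensive "
    "evaluation of institutional assessment frameworks."
)
paraphrased = (
    "Using computer tools helps us take a full look at how schools run their tests."
)

for label, text in [("original", original), ("paraphrased", paraphrased)]:
    score = detector_score(text)
    verdict = "flagged as AI" if score >= THRESHOLD else "passes as human"
    print(f"{label}: score={score:.2f} -> {verdict}")
```

Running the sketch, the original passage is flagged while the paraphrased version falls below the threshold, mirroring the kind of evasion the barrier above describes.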

Project Team

Cesare G. Ardito

Researcher

Contact Information

For information about the paper, please contact the author.

Author: Cesare G. Ardito

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
