Contra generative AI detection in higher education assessments
Project Overview
This document summarises a paper examining the integration of generative AI in education and the problems created by relying on AI detection tools in higher education assessments. It argues that current detection mechanisms are poorly matched to a rapidly evolving educational landscape and cannot adequately handle the complexities that generative AI introduces. The author calls for a transition towards more robust assessment methods and educational policies that embrace generative AI while upholding academic integrity. Key findings include the vulnerabilities of current detection tools and the necessity for critical and ethical engagement with AI technologies in educational contexts. Ultimately, the document advocates adapting educational practices so that the benefits of generative AI are realised without compromising standards of academic honesty.
Key Applications
AI detection tools and teaching resources for generative AI
Context: Higher education assessments targeting students and educators, including curricula that incorporate the responsible use of generative AI.
Implementation: AI detection tools such as Turnitin and CopyLeaks are deployed as academic integrity measures, alongside curricula and assessment techniques designed around the ethical use of generative AI.
Outcomes: Students learn to use AI responsibly, which enhances their learning while preserving integrity. The detection tools are intended to uphold academic integrity, but their accuracy varies, and false positives can damage students' academic records.
Challenges: Reliance on AI detection tools can undermine educational practice and lead to wrongful accusations of malpractice. Educators must also help students distinguish between compliant and non-compliant uses of AI.
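The risk of wrongful accusations follows directly from base-rate arithmetic: even a detector with a low false-positive rate flags many honest students when applied at scale. The sketch below illustrates this with hypothetical numbers (cohort size, error rates, and AI-use rate are assumptions for illustration, not figures from the paper).

```python
def false_positive_impact(n_submissions, fpr, ai_use_rate, tpr):
    """Expected wrongful flags and precision for an AI-text detector.

    fpr: false-positive rate (honest work flagged as AI-generated)
    tpr: true-positive rate (AI-assisted work correctly flagged)
    """
    ai_texts = n_submissions * ai_use_rate
    human_texts = n_submissions - ai_texts
    true_flags = ai_texts * tpr        # AI-assisted work correctly flagged
    false_flags = human_texts * fpr    # honest work wrongly flagged
    precision = true_flags / (true_flags + false_flags)
    return false_flags, precision

# Hypothetical scenario: a detector with a seemingly low 1% false-positive
# rate, applied to 10,000 submissions of which 5% involve undisclosed AI use.
false_flags, precision = false_positive_impact(10_000, 0.01, 0.05, 0.90)
print(f"Honest students wrongly flagged: {false_flags:.0f}")  # 95
print(f"Chance a given flag is correct: {precision:.0%}")     # 83%
```

Under these assumptions, nearly one in six flags would point at an honest student, which is why the document treats detector output as an unreliable basis for malpractice proceedings.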
Implementation Barriers
Technological barrier
AI detection tools are not sufficiently accurate, are vulnerable to manipulation, and may disadvantage non-native English speakers due to biases in detection algorithms.
Proposed Solutions: Develop more robust assessment methods that do not rely on detection tools, and subject detection tools to rigorous statistical bias analysis before they are adopted.
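One minimal form such a bias analysis could take is a two-proportion z-test comparing false-positive rates across groups of writers, for example native versus non-native English speakers. The sketch below uses only the standard library; the audit counts are hypothetical, not data from the paper.

```python
import math

def two_proportion_z(flags_a, n_a, flags_b, n_b):
    """Two-sided two-proportion z-test: do false-positive rates differ?"""
    p_a, p_b = flags_a / n_a, flags_b / n_b
    p_pool = (flags_a + flags_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via the error function; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical audit: human-written essays wrongly flagged as AI-generated,
# split by non-native (group a) vs. native (group b) English speakers.
z, p = two_proportion_z(flags_a=18, n_a=200,   # non-native: 9% flagged
                        flags_b=6, n_b=300)    # native: 2% flagged
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests biased error rates
```

A significant difference in false-positive rates would be grounds to reject a tool before acceptance, which is the kind of pre-adoption check the proposed solution describes.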
Ethical barrier
Policies associating generative AI with academic malpractice can create distrust and anxiety among students.
Proposed Solutions: Promote responsible AI usage and create policies that distinguish between permissible and impermissible uses of AI.
Cultural barrier
Fear of wrongful accusations can discourage students from utilizing AI to enhance their learning.
Proposed Solutions: Foster an environment of trust and transparency in the use of AI in education.
Project Team
Cesare G. Ardito
Researcher
Contact Information
For information about the paper, please contact the author.
Author: Cesare G. Ardito
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18