
Generative AI Misuse Potential in Cyber Security Education: A Case Study of a UK Degree Program

Project Overview

This document examines the role of generative AI, specifically large language models (LLMs) such as ChatGPT, in higher education, focusing on a Master's-level cyber security program in the UK. It addresses the challenges these technologies pose, notably the potential for academic dishonesty and misuse, and underscores the importance of maintaining academic integrity. The findings suggest that assessment design must evolve to counteract LLM-related risks, advocating innovative approaches such as LLM-resistant assessments and the deployment of detection tools to uphold educational standards. Ultimately, the study calls for a balanced approach that mitigates these risks while equipping students with the skills needed to navigate real-world challenges, and for careful consideration of how generative AI can be integrated into educational frameworks responsibly and effectively.

Key Applications

Assessment design and evaluation in a Master's-level cyber security program

Context: Higher education, specifically a Master's-level program with a significant number of international students

Implementation: A quantitative assessment framework was applied to evaluate the susceptibility of assessments to LLM misuse, taking into account factors like assessment type and delivery mode.

Outcomes: Identification of high exposure to misuse, particularly in report-based assessments; recommendations for LLM-resistant assessments.

Challenges: Risk of superficial knowledge acquisition and skills deficit due to reliance on LLMs; maintaining academic integrity in assessments.
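The paper's quantitative framework scores assessments by how exposed they are to LLM misuse, based on factors such as assessment type and delivery mode. The exact weights and scoring rules are not reproduced here; the following is a minimal, purely illustrative sketch in which every weight, category name, and function is a hypothetical assumption, not the paper's actual method.

```python
# Hypothetical sketch of a susceptibility score for LLM misuse.
# All weights and category names below are illustrative assumptions;
# they are NOT taken from the paper's actual framework.

# Illustrative exposure weights by assessment type (0 = low, 1 = high).
TYPE_WEIGHTS = {
    "report": 0.9,         # take-home written work: highly exposed
    "project": 0.7,
    "in_class_test": 0.3,
    "oral_exam": 0.1,      # invigilated and interactive: least exposed
}

# Illustrative multipliers by delivery mode.
MODE_WEIGHTS = {
    "take_home": 1.0,      # unsupervised: no reduction in exposure
    "invigilated": 0.4,    # supervision substantially reduces exposure
}

def susceptibility(assessment_type: str, delivery_mode: str) -> float:
    """Return a 0-1 score estimating an assessment's exposure to LLM misuse."""
    return round(TYPE_WEIGHTS[assessment_type] * MODE_WEIGHTS[delivery_mode], 2)

print(susceptibility("report", "take_home"))       # highest-risk combination
print(susceptibility("oral_exam", "invigilated"))  # lowest-risk combination
```

Under this toy scoring, a take-home report scores far higher than an invigilated oral exam, mirroring the paper's finding that report-based assessments carry the greatest exposure to misuse.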

Implementation Barriers

Academic Integrity and Assessment Design

Risk of students using LLMs to complete assignments, leading to a lack of genuine competency and skills. Traditional assessment formats like take-home reports and project-based work increase susceptibility to LLM misuse.

Proposed Solutions: Adoption of LLM-resistant assessments, fostering an ethical learning environment, utilizing more invigilated assessments (such as oral exams and in-class tests), and designing tasks that reflect real-world challenges.

Detection Reliability

Current LLM detection tools have issues with false positives and negatives, making them unreliable.

Proposed Solutions: Further research into the effectiveness of detection methods and ensuring ethical deployment of such tools.

Project Team

Carlton Shepherd

Researcher

Contact Information

For information about the paper, please contact the author.

Authors: Carlton Shepherd

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
