
Machine vs Machine: Using AI to Tackle Generative AI Threats in Assessment

Project Overview

The paper examines the impact of generative AI on higher education, focusing on its implications for academic assessment. It highlights the challenges posed by AI-generated content and presents a dual-strategy framework that combines static analysis with dynamic testing to safeguard assessment integrity. The framework specifies eight elements of static analysis for gauging an assessment's vulnerability to generative AI and underscores the need for continuous adaptation of assessment design as AI capabilities advance. The approach seeks to balance security with pedagogical effectiveness while addressing the ethical and practical challenges of integrating AI into assessment practice. Ultimately, the authors argue that educators must rethink assessment strategies to keep them relevant and effective in an era increasingly shaped by generative AI.
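The dual strategy described above can be illustrated as a small sketch: a static pass scores an assessment brief against design criteria, and a dynamic pass probes the task with a generative model and checks whether the output would pass marking. The criteria names, weights, and helper functions below are illustrative assumptions, not the paper's eight elements or its actual method.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    brief: str
    requires_personal_reflection: bool  # e.g. reflective journals
    requires_local_data: bool           # e.g. course-specific datasets
    invigilated: bool                   # supervised, tool-free conditions

def static_vulnerability(a: Assessment) -> float:
    """Static analysis: rule-based score in [0, 1]; higher = more
    exposed to generative AI. Weights are illustrative only."""
    score = 1.0
    if a.requires_personal_reflection:
        score -= 0.3  # harder for a model to fake authentically
    if a.requires_local_data:
        score -= 0.3  # model lacks access to course-specific material
    if a.invigilated:
        score -= 0.4  # supervision blocks tool use entirely
    return max(score, 0.0)

def dynamic_test(a: Assessment, generate, grade) -> bool:
    """Dynamic testing: does an AI-generated attempt pass the rubric?
    `generate` and `grade` stand in for an LLM call and a marker."""
    attempt = generate(a.brief)
    return grade(attempt)

def evaluate(a: Assessment, generate, grade) -> dict:
    """Combine both passes into one vulnerability report."""
    return {
        "static_score": static_vulnerability(a),
        "ai_passes": dynamic_test(a, generate, grade),
    }

# Stubs replace a real generative model and marking rubric.
result = evaluate(
    Assessment("Explain supply and demand.", False, False, False),
    generate=lambda brief: "A generic model answer.",
    grade=lambda text: len(text) > 10,
)
print(result)  # {'static_score': 1.0, 'ai_passes': True}
```

A high static score combined with a passing dynamic probe would flag the task for redesign; in practice the grading stub would be a human marker or rubric-based check rather than a length test.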

Key Applications

Machine vs Machine: Using AI to Tackle Generative AI Threats in Assessment

Context: Higher education assessment targeting students and educators

Implementation: A dual strategy combining static analysis and dynamic testing to evaluate vulnerability in assessments

Outcomes: Improved assessment integrity, enhanced pedagogical effectiveness, and a comprehensive understanding of vulnerabilities to generative AI.

Challenges: Dynamic and evolving capabilities of AI, potential biases in detection methods, and the need for continuous adaptation of assessment practices.

Implementation Barriers

Technological

Rapidly evolving capabilities of generative AI tools that can adapt and circumvent existing assessment frameworks.

Proposed Solutions: Implementing ongoing evaluation and adaptation of assessment designs, utilizing a dual strategy of static and dynamic approaches.

Equity

Resource accessibility concerns that may disadvantage students with limited access to up-to-date or specialized resources.

Proposed Solutions: Integrating universal design principles that accommodate diverse student needs and ensure equitable access to assessment materials.

Fairness

Bias in detection tools, particularly against non-native English speakers, raising concerns about fairness in assessment outcomes.

Proposed Solutions: Developing more accurate and equitable assessment frameworks that consider diverse student backgrounds and capabilities.

Project Team

Mohammad Saleh Torkestani

Researcher

Taha Mansouri

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Mohammad Saleh Torkestani, Taha Mansouri

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
