Auto-assessment of assessment: A conceptual framework towards fulfilling the policy gaps in academic assessment practices

Project Overview

The document examines the transformative role of Generative Artificial Intelligence (GAI) in education, particularly within assessment practices, highlighting a divide among academics regarding its use. Some advocate for a complete ban due to misuse concerns, while others see potential benefits if governed by suitable policies. A survey of 117 academics indicates a generally positive view of GAI's capacity to enhance autonomy in assessments, though it reveals significant gaps in awareness and existing policies related to AI in education. To address these issues, the document introduces a proposed framework for an AI-driven autonomous assessment system designed to improve grading efficiency and tackle integrity challenges. Overall, the findings suggest that while GAI has the potential to significantly enhance educational practices, careful consideration and regulatory measures are necessary to navigate its implementation effectively.

Key Applications

AI-based autonomous assessment framework

Context: Higher education, targeting academic staff and students

Implementation: Students submit their work to an AI model, which generates multiple-choice questions from the content of the submission and uses them to assess the student's understanding (see the sketch after this list).

Outcomes: Increased efficiency in grading and personalized feedback for students, with 71.79% of academics supporting its use.

Challenges: Concerns about the potential for misuse of AI tools by students and the need for clear policies to govern AI use in assessments.
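The following is a minimal sketch of how the submission-to-questions step described above might look, assuming an OpenAI-style chat completions API. The model choice, prompt wording, and the generate_mcqs function are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: generate multiple-choice questions from a student submission.
# Assumes the openai Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment


def generate_mcqs(submission_text: str, n_questions: int = 5) -> str:
    """Ask the model for MCQs grounded in the student's own submission."""
    prompt = (
        f"Read the following student submission and write {n_questions} "
        "multiple-choice questions (four options each, correct answer marked) "
        "that can only be answered by someone who understands the text.\n\n"
        f"Submission:\n{submission_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("submission.txt") as f:
        print(generate_mcqs(f.read()))
```

In practice, the generated questions would be reviewed by academic staff and presented back to the student, so that grading effort shifts from marking free-text work to administering targeted questions derived from it.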

Implementation Barriers

Policy Barrier

Lack of consistent policies governing AI use in assessments across institutions, leading to confusion and inconsistent practice.

Proposed Solutions: Development of clear, consistent policies that involve collaboration among institutions and address the specificities of AI use in assessments.

Awareness Barrier

Insufficient awareness among academics and students regarding AI tools and their implications for academic integrity.

Proposed Solutions: Training programs for both staff and students on the ethical use and implications of AI tools in education.

Technical Barrier

Difficulty in detecting AI-generated content and ensuring the originality of student submissions.

Proposed Solutions: Development and implementation of detection tools and strategies to enhance academic integrity.

Project Team

Wasiq Khan

Researcher

Luke K. Topham

Researcher

Peter Atherton

Researcher

Raghad Al-Shabandar

Researcher

Hoshang Kolivand

Researcher

Iftikhar Khan

Researcher

Abir Hussain

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Wasiq Khan, Luke K. Topham, Peter Atherton, Raghad Al-Shabandar, Hoshang Kolivand, Iftikhar Khan, Abir Hussain

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
