
Understanding Student and Academic Staff Perceptions of AI Use in Assessment and Feedback

Project Overview

This project examines the integration of Generative AI (GenAI) in higher education, focusing on how students and academic staff perceive its role in assessment and feedback. It finds that familiarity with GenAI is generally low among participants and is accompanied by skepticism about its effectiveness in marking assessments. While there is cautious acceptance of GenAI for purposes such as knowledge checking and analyzing student participation, significant concerns remain about academic integrity and the reliability of AI text detection tools. There is some openness to GenAI use, but substantial reservations persist about its implications for assessment practices and the authenticity of student work. Overall, the findings illustrate a complex landscape in which the promise of GenAI for enhancing educational experiences is tempered by apprehension about its impact on academic standards.

Key Applications

GenAI tools for assessment and feedback, including text detection

Context: Higher education institutions in Vietnam and Singapore, targeting students and academic staff. These tools are employed to mark assessments, provide feedback, and detect AI-generated content in student submissions.

Implementation: GenAI tools were used for marking and feedback, alongside AI text detection methods for evaluating originality and authorship in student work. Perceptions of GenAI use in assessment and feedback were surveyed through online questionnaires, and text detection tools were applied to student submissions.

Outcomes: Participants were skeptical of AI-only marking but more accepting when AI feedback was combined with human input. General comfort with text detection tools was reported, although false positives and negatives were seen as carrying negative consequences for students. The findings also revealed potential for AI to assist in knowledge checking and participation monitoring.

Challenges: Concerns about academic integrity, variable effectiveness of AI text detection tools, inaccuracies leading to mistrust and confusion among students and staff, and confusion over institutional policies regarding GenAI use.

Implementation Barriers

Technical Barrier

Inaccuracy of AI text detection tools leading to false positives and negatives

Proposed Solutions: Improving the accuracy of AI detection algorithms and developing clearer guidelines for their use

Understanding Barrier

Lack of clarity about how GenAI and text detection technologies work, leading to confusion among students and staff

Proposed Solutions: Investing in training programs to enhance understanding and confidence in AI tools in assessment

Policy Barrier

Confusion regarding institutional policies on the use of GenAI in assessments

Proposed Solutions: Establishing clear, coherent frameworks and guidelines to guide the use of AI in higher education

Project Team

Jasper Roe

Researcher

Mike Perkins

Researcher

Daniel Ruelle

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Jasper Roe, Mike Perkins, Daniel Ruelle

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
