I don't trust you (anymore)! -- The effect of students' LLM use on Lecturer-Student-Trust in Higher Education

Project Overview

The project examines the transformative role of Large Language Models (LLMs), particularly ChatGPT, in higher education, focusing on their influence on trust dynamics between lecturers and students. As students increasingly adopt LLMs for academic tasks such as drafting assignments and coding, concerns have emerged regarding academic integrity and the authenticity of student submissions. The study argues that transparency about LLM use is essential, positing that such openness can build trust and promote collaborative learning and improved research outcomes. It also offers guidelines for the ethical implementation of LLMs in educational contexts, aiming to balance innovation with integrity and to harness the potential of generative AI while mitigating the risks of misuse.

Key Applications

Large Language Models (LLMs) such as ChatGPT, Google Bard, and Meta’s LLaMA

Context: Higher education, specifically among university students and lecturers

Implementation: Students utilize LLMs for tasks like assignments and coding; the study collected data through a quantitative survey among lecturers to assess trust dynamics.

Outcomes: Increased engagement and potential for enhanced team performance; lecturers indicated they accept students' LLM use provided that transparency is maintained.

Challenges: Concerns about academic integrity and the inability to distinguish between student-generated and LLM-generated content; challenges in establishing clear guidelines for ethical use.

Implementation Barriers

Ethical/Integrity and Assessment Barrier

Academic integrity is at risk because students may submit LLM-generated content as their own work, and lecturers struggle to assess the authenticity of student submissions when LLMs are involved.

Proposed Solutions: Establishing clear guidelines that promote transparent, ethical LLM use in student work, thereby strengthening trust and collaboration between lecturers and students.

Project Team

Simon Kloker

Researcher

Matthew Bazanya

Researcher

Twaha Kateete

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Simon Kloker, Matthew Bazanya, Twaha Kateete

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI