
ADC's AI Explainer

Introduction to Generative Artificial Intelligence Tools

Generative Artificial Intelligence Tools (GAITs) can be used to create text or images in response to a user's prompt, without the user having direct control over what is produced or how it is produced. GAITs include programs such as ChatGPT, Google Gemini, Ernie Bot, and Midjourney.

Understanding Large Language Models (LLMs) in AI

Large language models (LLMs) such as ChatGPT and Google Gemini are a form of GAIT. They use advanced statistical models to predict relationships between words, enabling them to generate coherent, meaningful text. Trained on extensive datasets of text and code, LLMs develop an "internal representation" of language rather than drawing on a fixed bank of sources. This allows them to create original sentences and text formats, constantly refining their outputs for accuracy and relevance.
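
To make "predicting relationships between words" concrete, below is a minimal illustrative sketch of next-word prediction using simple bigram counts. The toy corpus and the function name are invented for illustration; real LLMs learn far richer statistical patterns using neural networks with billions of parameters, not raw counts.

```python
# Illustrative sketch only: a toy next-word predictor built from bigram
# counts. The corpus and names are invented for illustration; real LLMs
# use neural networks, not simple counting.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word and so on".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'next' ('next' follows 'the' most often)
```

Chaining such predictions one word at a time is what allows a model to generate whole passages; an LLM works on the same principle, except that its "counts" are encoded in learned neural-network weights rather than a lookup table.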

Unlike older machine learning tools that rely on human-labelled data, LLMs learn from unlabelled data, which offers more flexibility but means their outputs do not always align with human understanding: they can produce plausible-sounding text that is factually wrong. Their decision-making process is opaque, making it challenging to verify the accuracy or ethics of their responses. Consequently, LLM-generated content, particularly academic references or citations, should be critically assessed for accuracy and reliability.

The Emerging Role of Generative AI in Higher Education

At the time of writing, the impact of GAITs on higher education is far from certain, and many important questions have yet to be asked, let alone answered. It seems likely, however, that GAITs are here to stay; that they have great potential to enhance our working lives; and that there are legitimate concerns about their impact on learning, assessment, and academic integrity.

Purpose of ADC’s Generative AI Policy

The purpose of this policy is to establish ground rules for the use of generative AI tools in completing assessments and critical incident questionnaires on credit-bearing and/or Advance HE accredited courses provided by the Academic Development Centre. The intention behind the policy is to encourage selective use of these tools in ways that positively impact reflective and scholarly teaching practices and learning on our courses. It is also intended to discourage usage that may diminish learning opportunities, or that conflicts with principles of academic integrity or university policy.

Warwick and AI

The 'Institutional Approach to the Use of Artificial Intelligence and Academic Integrity' provides a comprehensive framework for the ethical use of artificial intelligence (AI) in academic settings at the University of Warwick.

WIHEA Resources

Members of the Warwick International Higher Education Academy have compiled useful information and resources about AI in Education.

Russell Group policy

In July 2023, the Russell Group introduced principles to enhance AI literacy at their universities, aiming to responsibly integrate generative AI tools like ChatGPT into teaching and learning. These principles emphasise ethical AI use, maintaining academic integrity, and adapting teaching methods.