ADC's Generative Artificial Intelligence Policy

Context

This policy applies to participants on the Academic Development Centre’s courses, including credit-bearing courses, non-credit-bearing courses, and courses accredited by Advance HE.

The purpose of this policy is to establish ground rules for the use of generative AI tools in completing assessments and critical incident questionnaires on credit-bearing and/or Advance HE accredited courses provided by the Academic Development Centre.

The intention behind the policy is (i) to encourage selective use of these tools in ways that positively impact reflective and scholarly teaching practices and learning on our courses, and (ii) to discourage usage that may reduce or diminish learning opportunities, or that conflicts with principles of academic integrity or university policy.

Policy

1. Standard: work that you submit for any assessment must not be the substantively unaltered output of a generative artificial intelligence tool.

2. Declaration: you will be required to assent to a declaration whenever you submit a piece of work for summative assessment on an ADC programme, to the effect that your use of generative AI tools is in keeping with the policy.

3. Disclosure: in line with the University’s guidance on GAITs, you will also be asked to disclose how, if at all, you have used these tools in preparing your submission. Where a GAIT has been used, participants should, after their Bibliography or Works Cited section, provide an overview of the following:

  • which GAITs have been used;
  • why GAITs have been used;
  • where a GAIT has been used;
  • where a GAIT’s output has been modified before use.

There will be a limit of approximately 75 words per assessment item for this usage disclosure.

4. Consultation: if you have questions or are in any doubt about whether a particular use of generative AI would constitute a breach of this policy or of the University’s regulations on AI or academic misconduct, you are encouraged to discuss it with your programme leader, whose contact details are available in your course handbook.

5. Alignment and Updates: this policy is in alignment with the University’s position on generative AI tools as set out in the Institutional Approach to the use of Artificial Intelligence and Academic Integrity document. This policy was put into effect in February 2024, and last updated in April 2024.

Introduction to Generative Artificial Intelligence Tools

Generative Artificial Intelligence Tools (GAITs) can be used to create text or images in response to a prompt from the user, without the user having direct control over what is produced or how it is produced. GAITs include programmes such as ChatGPT, Google Gemini, Ernie Bot, and Midjourney.

Understanding Large Language Models (LLMs) in AI

Large language models (LLMs) like ChatGPT and Google Gemini are a form of GAIT. They use advanced statistical models to predict relationships between words, enabling them to generate coherent, meaningful text. Trained on extensive text and code datasets, LLMs develop an "internal representation" of language, not a fixed bank of sources. This allows them to compose original sentences and text formats, generating each response afresh from the prompt rather than retrieving stored passages.
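
To make the idea of statistical next-word prediction concrete, here is a minimal illustrative sketch in Python. It is a toy, hand-written example for explanation only: the word scores below are invented, whereas a real LLM learns billions of such parameters from its training data and conditions them on the entire preceding context.

    import math
    import random

    # Invented scores ("logits") for which word might follow the prompt
    # "The students submitted their ...". A real LLM learns scores like
    # these for every context during training; none are hand-written.
    logits = {"essay": 2.0, "reflection": 1.2, "assessment": 0.8, "banana": -3.0}

    # A softmax turns the raw scores into a probability distribution.
    total = sum(math.exp(score) for score in logits.values())
    probs = {word: math.exp(score) / total for word, score in logits.items()}

    # Generation is repeated sampling from such distributions, one word
    # (token) at a time, with each chosen word extending the context.
    next_word = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs)  # essay ≈ 0.57, reflection ≈ 0.26, assessment ≈ 0.17, ...
    print("Predicted next word:", next_word)

Nothing here is looked up in a bank of stored sources: the output is sampled from learned probabilities, which is why the same prompt can produce different, and occasionally inaccurate, responses.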

Unlike older machine learning tools that rely on human-labelled data, LLMs learn from unlabelled data, which offers more flexibility but means their outputs can diverge from human understanding. Their decision-making process is opaque, making it challenging to verify the accuracy or ethics of their responses. Consequently, LLM-generated content, particularly academic references or citations, should be critically assessed for accuracy and reliability.
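
Because fabricated references are one of the most common failure modes, it can help to verify any AI-supplied citation against an authoritative source. As one illustration among many possible checks, the short Python sketch below asks the public Crossref REST API whether a DOI is actually registered; the endpoint shown is a real public service, but the DOI in the example is a made-up placeholder, and a registered DOI still does not prove that the cited work says what the AI claims it says.

    import urllib.parse
    import urllib.request
    from urllib.error import HTTPError

    def doi_is_registered(doi: str) -> bool:
        # Query the public Crossref REST API for this DOI; an HTTP 404
        # means the DOI is not registered, a common sign of a fabricated
        # citation. Network errors are left unhandled in this sketch.
        url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return response.status == 200
        except HTTPError:
            return False

    # "10.1234/placeholder" is an invented DOI used purely for illustration.
    print(doi_is_registered("10.1234/placeholder"))

Checking that references exist is only the first step; you should still read the cited work to confirm that it supports the point being made.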

The Emerging Role of Generative AI in Higher Education

At the time of writing, the impact of GAITs on higher education is far from certain, and many important questions have yet to be asked, let alone answered. It seems likely, however, that GAITs are here to stay; that they have great potential to enhance our working lives; and that there are legitimate concerns about their impact on learning, assessment, and academic integrity.

University of Warwick’s Stance on AI and Academic Integrity

The institutional approach taken by Warwick is that LLMs can be used to assist students in demonstrating their own criticality and drawing comparisons, as well as in providing information that helps students engage with the literature and sources. LLMs should not, therefore, replace your own criticality or theorisation, nor supersede engagement with core and recommended reading lists. They should be used in an assistive capacity, not a compositional one.

The University’s policy then offers examples of how LLMs might be of assistance:

  • Lateral thinking;
  • Alternative thought streams;
  • Supplementary and complementary thinking;
  • Data, charts, and images;
  • Getting starter explanations, with the caution that you must fact-check this information;
  • Interrogating the AI with follow-up questions to create your own novel insights;
  • For formative learning;
  • For fun;
  • A revision tool to generate practice questions;
  • Structuring plans and line of reasoning;
  • Refining your work;
  • Supporting language skills.

The same policy makes it clear that LLMs should not be used to replace the educative process or supplant the autonomy of students:

  • Replacing learning. Value your own stream of consciousness and your capacity for sentient thought;
  • Gaining an unfair advantage. This is academic misconduct;
  • Creating content for your work which you present as your own work. This is plagiarism;
  • Synthesising information. You will not be able to demonstrate your own work and thinking as distinct from what was artificially generated;
  • Rewriting or translating your work. It is important that you develop your own distinctive academic voice. Markers would much rather read imperfect English in your own voice than perfect English written by another human or an AI. See also the Proofreading policy.

Criticality, detail and reflectivity in ADC Assessments

ADC assessments involve critically reflecting on your teaching methods. In some courses, you'll need to align your practice with the Professional Standards Framework (PSF). You must explain how your decision-making incorporates key pedagogical concepts and theories. Demonstrating care for students, and aligning with personal, institutional, and broader values, is essential. Your teaching approaches and decisions should be informed by subject-specific knowledge and educational research. General or abstract discussions of teaching are insufficient; you must detail your experiences and justify your choices.

As you can see, much of the content that is crucially important to success in our assessments is rooted in the teaching practice and detailed reflections of our course participants. This is why LLMs should only be used in a limited, assistive capacity on ADC courses. Anything that you submit as part of your formative or summative assessment should:

  1. draw on authentic, detailed examples of your own teaching practice;
  2. represent your own reasoning about the aspect of your teaching practice in question;
  3. reflect your own rigorous and meaningful engagement with relevant pedagogical literature, educational data, and institutional or sector-wide policies and expectations.

Record Keeping Advice for Interactions with AI Tools

The declaration will also remind course participants that they “are advised to keep good records of their interactions with any AI regarding all their submissions in case they are later required as part of any further assessment, investigation or similar”.

Disclosure about AI use

Whenever you submit work for assessment, you are required to provide an account of your use of GAITs, including which tools you have used, how and why you have used them, and how you have changed, adapted, or developed the outputs of GAITs before submission. There will be a limit of approximately 75 words per submission.

Examples:

“Used ChatGPT 3.5 for ideas generation: list of areas of practice to reflect on. Used Google Gemini to locate DOIs for articles and convert citation list to APA format.”

“Used Claude AI to support reflective practice, identify areas for exploration, and identify repetition in essay.”

Illustrative Examples of Reasonable vs. Unreasonable Commands to GAITs

Interactions with GAITs are driven by commands, or prompts, chosen by the user. In the following table, ADC offers some illustrative examples of the sorts of commands that would be potentially reasonable versus those that would be clearly unreasonable.

An illustrative list of potentially acceptable and definitely unacceptable commands or prompts for an LLM