
Artificial Intelligence (AI) Policy - Key Information for Students

(1) Our approach to Artificial Intelligence (AI)

Warwick Law School is committed to developing a holistic approach to AI in all aspects of our education. We seek to develop AI literacy among our students and to instil good practice regarding the responsible use of AI as a tool to assist (but not replace) their learning and skills development. This includes promoting awareness of the ethical use of AI, as well as of the concerns over the robustness and bias of many AI systems. We will monitor the use of AI by students to ensure that this is done responsibly and ethically. This includes monitoring the impact the use of AI has on the development of essential academic skills by our students. Academic integrity must be ensured in all activities involving the use of AI.

We are committed to understanding the implications of AI for our own subjects, as well as the ethical issues raised in all aspects of AI, and to ensuring that what we teach includes consideration of the legal and ethical challenges brought about by the use of AI.

(2) Artificial Intelligence (AI)

(a) AI Technologies

Artificial Intelligence is an umbrella term covering a wide range of algorithmic and data-driven software systems. There are different types of algorithms (and combinations of different approaches) with varying capabilities, either of a deterministic or adaptive machine-learning type. AI performs a wide variety of tasks, not always clearly identified as “AI” in people’s minds (e.g., spell-checkers, auto-complete, recommendations on shopping websites or streaming services, etc).

The Organisation for Economic Co-operation and Development (OECD) defines an AI system as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.[1] All AI systems have in common that they offer varying degrees of automation for particular tasks, often performing them more efficiently and quickly than humans.

A particular sub-category of AI is Generative AI systems (“GenAI”). These systems can be useful for some academic tasks, but can also be misused in ways that violate our standards and expectations of academic integrity.

GenAI systems, such as ChatGPT or Claude, are based on deep-learning neural networks utilising “transformer” technology. They are trained on vast amounts of data and produce outputs based on that data. The robustness and reliability of outputs can vary dramatically, and there are many caveats to the use of generative AI systems.

(b) Limitations of GenAI

Importantly, AI does not process information in the way humans do. It does not develop knowledge. It lacks common sense and the ability to locate information in a wider context/experience. Its outputs are generally based on the data on which it was trained and, in the case of “self-learning” (adaptive) algorithms, data acquired during deployment; it therefore lacks the ability to produce truly original and creative outputs. It is not capable of critical thinking – one of the key academic skills we seek to instil and develop in our students.

AI can be a very useful tool to help with certain tasks. It can uncover connections in data that might not be apparent to humans (although it is at risk of overfitting, or “hallucinating”, such connections) and can structure data better and faster than humans.

Recent advances in the performance and capability of certain AI systems, particularly generative AI systems, have renewed interest in this technology and have led to an explosion in academic, policy and legislative work. Many unsubstantiated or exaggerated claims are made about AI, and it is sometimes difficult to get a clear sense of what the technology is capable of and what its limitations are. There are also concerns about the quality of the data on which AI systems are trained, particularly regarding its robustness, accuracy and biases. Furthermore, the energy use and resulting environmental impact of the infrastructure (e.g., data centres) required for some AI systems are starting to attract more attention and concern.

(3) Permitted and prohibited uses of GenAI and other AI tools in the classroom

You are not permitted to use generative AI tools such as ChatGPT during class unless your tutor expressly authorises this for a particular seminar task. It is important that you use class discussion to develop your own understanding and knowledge of each topic. Your contributions should be your own, to give you the opportunity to test your knowledge and understanding of the material and to get feedback from your tutor. You may use GenAI tools as part of your seminar preparation, provided that you do not use them to generate answers to seminar questions.

You are not permitted to use live translation or live transcription software that translates the seminar discussion or lecture as it happens. Using such software involves the recording or capture of voices and the transfer of voice data outside the University. This is not compatible with University policies and may also contravene data protection laws. If English is not your first language, you are allowed to use a translation tool that does not involve voice recording or voice capture other than your own to help you formulate your response during the seminar if you are unsure about the right wording. However, it is always better just to try - you will learn from practice and gain more confidence.

More generally, you are not permitted to make any recordings of your seminars. Recording seminar discussions is contrary to University policy. Where applicable, lecture elements will be captured using the University's lecture recording system and will be made available subsequently (please ask your tutor). If you do not follow the above guidance, you might be asked to leave the class and to delete any recordings you have made. You might also be referred for disciplinary action.

(4) Academic Integrity

With regard to academic integrity, our policy builds on the University’s Institutional Approach to Artificial Intelligence and Academic Integrity. Any changes to the University’s Institutional Approach, or any new University policies on the use of AI, take precedence over this policy whenever there is a conflict between them.

In accordance with both the University’s and the Law School’s academic integrity policies, the Law School prohibits students from copying or paraphrasing whole outputs, or elements of outputs, generated by a GenAI system and submitting them as their own work. Similarly, the use of AI to complete certain parts of an assessment, such as analysis or evaluation, is not permitted.

However, even where the use of GenAI systems is not prohibited, the Law School discourages the use of such outputs even with correct and complete attribution. Assessments should be used by students to demonstrate their knowledge of the subject (i.e., what has been taught, including set readings) and their understanding and ability to utilise this in responding to specific questions. The use of GenAI in this process might make it more difficult for markers to establish how well a student has demonstrated this. The way in which such outputs are used as part of an assessment will be considered in grading the assessment against our marking criteria.

The use of an AI system for any aspect of completing an assessment must be disclosed by the student. A failure to do so constitutes academic misconduct. Such a disclosure must cover the following points:

  • Why was a GenAI system used (e.g., help in understanding the question; help in structuring work based on arguments you developed; assistance with an initial summary of readings)?
  • Which GenAI system or systems was/were used (e.g., ChatGPT, Claude)?
  • How has the AI output been used in preparing the assessment (e.g., as a research tool, to test arguments; see below for acceptable uses)?

The way AI generated outputs are used by you in drafting your assessments will be taken into account when grading the assessment against the generic grade descriptors and assessment-specific marking criteria.

(5) Permitted and prohibited uses of GenAI – Guidance to Students

Unless expressly required for a specific assessment task, Warwick Law School does not encourage the use of GenAI tools for assessments. You should be aware that GenAI tools can weaken the quality of your work – particularly if you rely on GenAI for accuracy, relevance and rigour. You should be particularly mindful of the tendency of GenAI tools to cite non-existent sources or information (“hallucinations”). GenAI may generate inaccurate or otherwise poorly constructed arguments. GenAI does not have the ability to demonstrate critical thinking, nor the ability to be genuinely creative. Most importantly, assessments are there for you to demonstrate to us (and to yourself!) how well you have understood what you have studied.

Nevertheless, WLS acknowledges that some students may use GenAI technology when preparing for assessments and writing essays, and may want to gain experience of utilising it for their future legal careers. Below, we provide a clear rule and interpretative guidance on what would constitute tolerated and prohibited use of GenAI. This is always subject to specific instructions given for each assessment.

Rule and Guidance

The key overarching rule is: GenAI can act as a personal assistant for you (e.g. assistance in understanding key issues, help with research, proof-reading your work etc.). However, it must never be used to create the work, or certain parts of the work, for you. You must not submit any AI-generated output as your own work.

The lists below summarise what is acceptable and what is prohibited in a bit more detail. Note that any use of GenAI may be prohibited for specific assessment tasks. This might be the case, e.g., where the assessment tests your key legal or academic skills. Here, your own unaided work would be essential. Always check the assessment instructions before you use any GenAI tool!

YOU CAN
  • Ask GenAI questions to check your understanding of your assessment question.

o Example prompt: “Is the following question asking for a discussion on topic X?”

  • Ask GenAI to suggest an outline of the essay you are going to write, based on the points you provide to it.

o Careful here! Do not simply ask GenAI to prepare an outline for you. Do your preliminary research and have your points ready. Even slight variations in the prompt can produce vastly different results, as can repeating the same prompt twice or more.

o Example prompt: “Suggest an outline for the essay I am going to write on the topic X. I will argue for/against Y and the points I will make in my essay are A, B and C. My supporting evidence/examples are D and E.” or “What would be the best order to argue for/against Y with the points A, B and C, and examples D and E?”  

  • Ask GenAI to check the grammar, flow, consistency, language, tone and style of the essay.

o Example prompt: “Please proofread this essay and correct/highlight any errors in spelling, language or grammar.”

o Example prompt: “Check the tone and style of this essay and highlight the parts that need to be rewritten, but do not rewrite them!”

  • Use GenAI as a search engine/database to find further sources.

o Careful here! GenAI tools may sometimes generate sources that do not exist. Always remember to check that the source exists; that the reference/citation is accurate (check books and journals through the library pages; check case databases for cases); and verify that any quotations were taken from the indicated source.

  • Ask GenAI to summarise the content of academic articles, books, reports or other sources, although if the work is relevant to your research, you should then read it in full.

o Careful here! Be mindful that uploading material protected by copyright can infringe copyright rules. If you are unsure, please enquire with the library first.

  • Ask GenAI to provide feedback on your project before submitting it.

o Example prompt: “Provide some feedback on the consistency of the arguments made in the project, without rewriting it for me.”

o Please remember that GenAI feedback may not reflect the assessment criteria. It’s important to refer to the module's assessment criteria to fully understand the expectations.

YOU MUST NOT

  • Ask GenAI to generate your assessment answer for you. You should therefore never start a chat with a GenAI tool in which the GenAI would generate points for you to include in your essay. You must have developed your own initial thoughts about the question, and you should have developed provisional arguments, examples, ideas and your overall response to the essay question (e.g. do you agree/disagree and why) before seeking the assistance of a GenAI tool.

o Example prompt: “I need to write an essay, but I could not understand the question. Can you write me an essay on the following essay question: [the essay question].”

  • Ask GenAI to generate ideas, arguments or examples from scratch.

o Example prompt: “I need to write an essay on the topic X, what should I argue, what should my stance be? Can you suggest some arguments I could use?”

o You can only ask GenAI to provide guidance on the accuracy or validity of your arguments, or to improve the ideas you already have. Asking GenAI to generate ideas and arguments does not result in accurate or original work. Therefore, you must always double-check what GenAI tools suggest to you and never substitute them for your own voice.

  • Ask GenAI to rewrite any part of your essay.

o Example prompt: “Provide some feedback on the following essay and rewrite the parts that need improvement.” or “Can you provide some feedback on the following essay?” (Careful here! GenAI tools usually tend to rewrite/revise the text you provide them, unless you clearly restrict them and/or prohibit them from rewriting the whole text in accordance with the feedback they provide. Therefore, even when you ask for feedback, you must ensure that it is you who makes the necessary changes based on that feedback, not the AI.)

o Example prompt: “Check the tone and style of the following essay and make the necessary changes/improvements to make it sound more formal.” (You should only ask for suggestions, or for emphasis to be placed on the parts that need improvement; you should never ask any GenAI tool to change the sentences you have written. Remember, it should be entirely your own work: GenAI should not and cannot be a contributing author!)



[1] OECD, Recommendation of the Council on Artificial Intelligence, version of 3 May 2024; available at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.