
Academic Integrity

As a student, you are responsible for your academic integrity. Warwick defines academic integrity as:

Academic integrity means committing to honesty in academic work, giving credit where we've used others' ideas and being proud of our own achievements.

The last few years have seen an increase in submitted work that fails to uphold academic integrity. Unfortunately, the consequences for you can be significant. Warwick is committed to ensuring that its courses and degrees uphold the highest standards of academic integrity, and you are provided with guidance on how to uphold your professional commitment to academic integrity.

A key concept is your "intellectual ownership" of the material you submit, based on the knowledge you gain and the understanding your learning generates. Characteristics of your intellectual ownership include:

  • You have tried multiple approaches to solve a problem, some of which might have been dismissed quickly. That is, you can demonstrate how you evaluated the problem against the module material to identify a potential solution strategy.
  • You can explain how you brought together the pieces of your solution. That is, you can demonstrate how you synthesised and selected module material to solve the problem.
  • You can pinpoint the relevant material from the module that you used in your solution.
  • You can identify minor errors in your own work or how you would improve/refine your approach next time.

These characteristics all occur entirely naturally when you engage with a module; you do not need to try. They emerge from the learning cycle: you identify what you know and don't know, engage with your module material to learn and link ideas together, differentiate between relevant and irrelevant material, discard approaches in favour of those leading to a solution, and recognise what you would do differently in the future.

Large Language Models

The recent emergence of Large Language Models (LLMs) such as ChatGPT has seen an increase in submitted work that is entirely machine generated. Such work is easy to spot when presented to experts in the field (such as your lecturers or project supervisor). If you use an LLM to produce a piece of work, it can produce very convincing wrong answers. The purpose of the material below is to illustrate, using material you understand, how badly wrong these models can get.

An interesting and slightly comical example is provided in the following video, where ChatGPT played the Martin bot at chess. It did not go well.

A second example is shown when both ChatGPT and Google Gemini (formerly Bard) rewrite the rules of Scrabble.

Why does this happen?

Ultimately, LLMs are built on statistical and probability models that you will learn about during your degree. These models are fitted to training sets of existing text. For more technical material, unless that training data is relevant to your exact question, you are likely to get poor or useless responses. You must have the knowledge to interpret the output; if you don't, and simply copy what an LLM tells you, you will end up presenting work that looks to experts like the chess and Scrabble examples above.
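To make this concrete, here is a minimal sketch of next-word prediction, the core statistical idea behind these models. The tiny corpus and the bigram model below are purely illustrative assumptions (real LLMs are vastly larger and more sophisticated), but the failure mode is the same: the model emits whatever words are statistically plausible given its training text, with no understanding of whether the result is true.

```
import random
from collections import defaultdict

# Hypothetical toy corpus standing in for a model's training set.
corpus = (
    "the rook moves in straight lines . "
    "the bishop moves diagonally . "
    "the knight moves in an l shape . "
    "the queen moves in straight lines and diagonally ."
).split()

# Record, for each word, the words that follow it in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    """Sample a sequence by repeatedly picking a statistically plausible next word."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

random.seed(0)
print(generate("the"))
# One possible output: "the knight moves in straight lines ."
# Fluent-looking and statistically plausible, yet wrong: knights do not
# move in straight lines. The model has no concept of chess rules,
# only of which words tend to follow which in its training data.
```

Scale this idea up by many orders of magnitude and you get fluent prose about chess or Scrabble that can still be confidently wrong, because plausibility, not correctness, is what the model optimises.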

LLMs are implemented on computers. Computers can be made in Minecraft with a bit of redstone. Do you really want to trust a really big bit of redstone to do the work for you?