
News

News from EPQ

Catch up with the latest here.

Through our monthly emails, we round up news, views and resources on all things education policy and quality in one place - covering work underway to support great education, what's coming up, and ways to get involved.


Generative Artificial Intelligence tools and academic integrity

By Kim Robinson, Lee Griffin and Sam Grierson

The conversation around and consideration of the role of Generative Artificial Intelligence tools (GAIT) in Higher Education continues across the sector.

Peer-led approaches through WIHEA

Here at Warwick, the WIHEA Learning Circle looking at detection/ethics is making good progress on a new student declaration, staff guidance and student guidance.

We are developing a model that embeds cultural elements, helping to remove the motivation to engage in academic misconduct through support and dialogue, and a reinforcement that we're part of an academic community at Warwick. Whether studying, teaching or researching, we're all taking part in an expert conversation that must meet standards of academic integrity. When we all meet these standards, we can take pride in our own academic achievements, as individuals and as an academic community. Below that cultural support is a process of assessment design, assessment 'delivery', and finally detection. Together, these elements will promote academic integrity and reduce the risks of academic misconduct.

Concerning ethics and detection, the approach we're taking is a little like a tripod - it needs three legs to stand. Those legs are:

  • The student declaration
  • The department guidance given to students via the student handbook
  • Guidance given to colleagues/students centrally

These are important to frame the use of Artificial Intelligence and make it explicitly clear to students what is and is not acceptable – the purpose of the declaration is to focus the mind on specific requirements. Central guidance cannot form a 'one size fits all' answer; it is likely that acceptable use policies will need to be considered at the component level under general central principles. Each leg needs the others to function fully.

Progress is being made on all strands, with publication of work expected in May 2023. Sam Grierson has given an update on a key aspect of the debate being considered: what are the unique differences between humans and AI?

'From an Artificial Intelligence Ethics and Academic Integrity perspective the declaration and guidance to both students and colleagues will endeavour to encourage human intelligence and wisdom over artificial intelligence and wisdom. The WIHEA Learning Circle is in favour of promoting positive behaviour and attitudes and responsible use of Generative Artificial Intelligence tools. We have for decades been using technology, including Artificial Intelligence, to help us to become more efficient and, to all intents and purposes, wiser. We have used these tools to assist, guide, motivate and stimulate the unending potential of the human mind. We have used and arguably embraced this technology to further our passions and develop critical reflections. Whether it is through basic tools such as the MS Word editor, calculations in MS Excel, scientific calculators, statistical software packages such as SPSS, or library access services such as Shibboleth and OpenAthens, the list has become endless. Artificial Intelligence has therefore been present in HE for an extensive period and will continue well into the future. Generative Artificial Intelligence tools create the data that humans can then make decisions around.'

The Learning Circle has also developed some new guidance on applying the University's assessment design principles, now published and hosted on the ADC website. In time, examples of assessment will be produced, with a target publication date in May.

Turnitin and AI detection

The prospect of a technological silver bullet for determining whether we are marking genuine student work or artificially generated content appeared to take a step forward at the beginning of the month, when Turnitin announced the rollout of its AI detection tool to the existing platform.

Almost immediately, there was cross-sector concern that launching this facility to staff and students without preparation could be problematic. The University's response (sent to AI leads and HoDs on 30th March 2023) read as follows:

We are aware that colleagues have been keen to gain a detection tool, but the University has not been able to test this functionality, and from the limited information given believes the approach Turnitin has taken to releasing the functionality is flawed. This is not unique to Warwick: many institutions have raised concerns with Turnitin about the approach, including the Russell Group, UCISA, Jisc, HeLF and APUC.

Those concerns include:

  • This is currently a 'black box': while Turnitin's FAQ on the matter warns about false positives and missed detections, we do not fully understand their likely level or causes.
  • We have not been granted access to the tool, so cannot advise or assist colleagues meaningfully on its use.
  • In turn, this may lead to a significant number of calls to the IT helpdesk, which will be overwhelmed and unable to help due to the lack of information.
  • Colleagues may allege Academic Misconduct in error based upon a tool we do not understand, causing issues for students.
  • Because it can't be tested or checked, it's not possible to write meaningful guidance and help for colleagues.

As a result, and in line with many other institutions, we will not enable the tool until it has been assessed, so appropriate guidance and support can be given.

Further exploration of this tool and its functionality is being led by IDG. This is being done in conjunction with other institutions who are using the functionality, and we are trying to gain access to a limited test space for our own testing and training design. Initial reports are that it is quite trivial to deceive the detector into believing text came from a human, though the rate of false positives appears to be low. This is under continual review, and should the position change we will inform colleagues via the AI leads network and HoDs.
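Why the sector treats even a low false-positive rate as a serious worry comes down to base rates. A short sketch, using entirely hypothetical numbers (Turnitin has not published rates in this form), shows how a detector that misfires on only 1% of human-written scripts can still wrongly flag a substantial number of honest students across a large cohort:

```python
# Illustrative only: all figures below are hypothetical assumptions,
# not measured Turnitin performance.

def flagged_counts(n_submissions, ai_fraction, true_positive_rate, false_positive_rate):
    """Return (correctly_flagged, wrongly_flagged) counts for a submission pool."""
    ai_written = n_submissions * ai_fraction
    human_written = n_submissions - ai_written
    correctly_flagged = ai_written * true_positive_rate       # AI text caught
    wrongly_flagged = human_written * false_positive_rate     # honest work misflagged
    return correctly_flagged, wrongly_flagged

# Hypothetical cohort: 10,000 scripts, 5% AI-written; the detector
# catches 90% of AI text but misfires on 1% of human text.
correct, wrong = flagged_counts(10_000, 0.05, 0.90, 0.01)
print(correct, wrong)  # roughly 450 correct flags against 95 honest students misflagged
```

Under these assumed numbers, nearly one in six flags would point at a student who did nothing wrong, which is why institutions want to understand the error profile before acting on the tool's output.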

Sector discussions on AI - QAA and WonkHE

As the sector continues to play catch-up with the pace of technological development, the focus for many is moving to how we adapt to delivering teaching, learning and assessment in this new reality.

QAA have delivered three sessions over the last month exploring and challenging different thinking on Generative AI for higher education, drawing on panellists from universities across the UK and beyond, Jisc and the International Baccalaureate. An outline of the webinar content and a link to each session recording are given below.

Their first webinar, ChatGPT: To ban or not to ban? (22nd March 2023), discusses the pace of development of Generative Artificial Intelligence Technology (GAIT), detection, the feasibility of banning AI, developing critical use of AI by students, and developing guidance for staff and students.

The second session, ChatGPT: How do I use it as a force for good? (31st March 2023), explores the use of AI in the classroom and its use as a tool for developing learners' writing skills. It also explores attitudes to assessment and AI.

The final webinar was ChatGPT: What should assessment look like now? (18th April 2023), which examined academic integrity and authentic assessment, ethical use of ChatGPT, and preparing students for an AI-enabled world.

WonkHE delivered more perspectives on a range of aspects, including the use of AI, policy context, impact on teaching and assessment, and the use of AI in HE administration, in The Avalanche is Here on 19th April 2023.

Wed 26 Apr 2023, 10:55 | Tags: Artificial Intelligence, Academic Integrity