
Guidance on AI in Research

Introduction

The University is committed to supporting and protecting Researchers in their use of Artificial Intelligence (AI) tools, acknowledging both the benefits that AI can bring and the risks it can pose.

This guidance provides information and advice to help Researchers identify their responsibilities and the risks involved when using AI tools in research. It raises awareness of the ethical and legal considerations relevant to using AI tools in research, as part of the University's commitment to research integrity and to ensuring that all our research is conducted to the highest standards.

Researchers using or developing AI tools must adhere to the University’s Research Code of Practice and its commitment to the ‘Concordat to Support Research Integrity’, including the principles of: honesty, rigour, transparency and open communication, care and respect, and accountability.

This guidance has been produced by a Working Group of the University's Research Governance and Ethics Committee (RGAEC), in consultation with Research & Impact Services, the Information and Digital Group, the Library, and Legal and Compliance Services, together with academic and research student representatives. It was approved by RGAEC on 4 February 2025 and will be reviewed and updated regularly.

Guidance

Integrity

AI tools can inadvertently perpetuate or amplify societal biases due to unrepresentative training data or algorithmic design. Researchers building or developing an AI tool must mitigate this risk by carefully checking the composition and origins of its training data, and by reviewing their code for sources of bias.
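For example, a Researcher working with tabular training data might audit its composition along the lines of the following sketch. This is illustrative only and assumes pandas; the column names ("sex", "ethnicity", "label") are hypothetical placeholders for whatever attributes matter in a given study.

```python
# Illustrative sketch of a training-set composition audit, assuming
# tabular data in a pandas DataFrame. Column names are hypothetical.
import pandas as pd

def audit_composition(df: pd.DataFrame, attributes: list[str]) -> None:
    """Print the share of each group for the given attributes, overall
    and per outcome label, to surface obvious imbalances."""
    for attr in attributes:
        print(f"\n--- {attr} ---")
        # Overall share of each group in the training set.
        print(df[attr].value_counts(normalize=True).round(3))
        if "label" in df.columns:
            # Share of each group within each outcome class: large gaps
            # here can indicate a bias a model may learn and amplify.
            print(pd.crosstab(df[attr], df["label"], normalize="columns").round(3))

train = pd.read_csv("training_data.csv")  # hypothetical file
audit_composition(train, ["sex", "ethnicity"])
```

An audit of this kind does not remove bias by itself, but it documents the composition of the training set so that imbalances can be addressed and reported transparently.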

Data Protection

Researchers must exercise caution and adopt responsible practices when using AI tools, and must be mindful of their ethical and legal obligations under UK GDPR and other applicable laws and regulations. The University offers training in the application of UK GDPR, which all Researchers should complete.

The University strongly advises Researchers not to enter any personal, confidential, third-party, or business-critical data / information / material into an AI tool. There may be exceptional circumstances where this is appropriate; these must be discussed on a case-by-case basis. Researchers who plan to process personal data are required by law to carry out a Data Protection Impact Assessment (DPIA) where the type of processing is likely to result in a high risk to the rights and freedoms of the individuals engaged by the research.

The following must also be considered:

Because AI tools process large amounts of data, Researchers can inadvertently use or reveal sensitive information hidden among anonymised data in the tool. Researchers should input only the anonymised data they need for their research purposes, to reduce the likelihood of 'linkage' between datasets within the AI tool that could enable re-identification of subjects in anonymous data records.
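For example, a Researcher preparing anonymised records for an AI tool might minimise the data first, along the lines of the following sketch. This is illustrative only and assumes pandas; the file name, column names, and the list of quasi-identifiers are hypothetical and would need tailoring to the actual dataset.

```python
# Illustrative sketch of data minimisation before passing records to an
# AI tool: keep only the fields the analysis needs and refuse known
# quasi-identifiers (fields that could enable linkage-based
# re-identification). All names here are hypothetical.
import pandas as pd

QUASI_IDENTIFIERS = {"postcode", "date_of_birth", "job_title"}

def minimise(df: pd.DataFrame, needed: list[str]) -> pd.DataFrame:
    """Return only the columns required for the research question,
    raising an error if any requested column is a quasi-identifier."""
    risky = QUASI_IDENTIFIERS & set(needed)
    if risky:
        raise ValueError(f"Quasi-identifiers requested: {sorted(risky)}")
    return df[needed].copy()

records = pd.read_csv("anonymised_responses.csv")  # hypothetical file
safe_subset = minimise(records, ["age_band", "response_text"])
# Only safe_subset, not the full dataset, would be entered into the AI tool.
```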

Accountability

Researchers must take full responsibility for the use of AI tools in their research and any data / information / material they have entered into those tools. If students are unsure what this involves, they must consult their supervisors.

Research Misconduct

Researchers must note that content produced by AI tools is not original work. Using AI in research without appropriate declaration, acknowledgement and / or notification will be considered a form of research misconduct. Any breach of ethical principles or legal requirements arising from the use of AI in research will be taken seriously and appropriate action taken. Any concerns should be reported to the Head of Research Governance.

The use of AI in research must be declared and clearly explained. Researchers must act with integrity and responsibility to ensure the originality, validity, reliability and integrity of outputs created or modified by AI tools. This includes ensuring that outputs accurately describe how the research was produced and the role AI played in it.

Funding Applications: Funders advise Researchers and their teams to use AI tools cautiously when developing funding applications, including collaborative applications, and to take account of applicable law.

The Research Funders Policy Group statement on Generative AI tools states that AI tools must be used responsibly when developing funding proposals and that their use should be acknowledged in any outputs. These principles should also be adopted when developing applications for internal funding calls.

Peer Review of funding applications: The Research Funders Policy Group also states that, for reasons of confidentiality and intellectual property, peer reviewers should not input any content from funding applications into AI tools. Nor should they use AI tools to develop their peer reviews, since peer review depends on the reviewer's unique perspectives, expertise and experience.

More Information

Please visit the Warwick International Higher Education Academy web pages for more information on AI in Education and academic integrity.