The University is committed to supporting and protecting Researchers in their use of Artificial Intelligence (AI) tools, acknowledging both the benefits that AI can bring and the risks it can pose.
This guidance provides information and advice to help Researchers identify their responsibilities and to raise their awareness of the risks, and of the ethical and legal considerations, involved in using AI tools in research. It forms part of the University’s commitment to research integrity, ensuring that all our research is conducted to the highest standards.
Researchers using or developing AI tools must adhere to the University’s Research Code of Practice and its commitment to the ‘Concordat to Support Research Integrity’, including the principles of: honesty, rigour, transparency and open communication, care and respect, and accountability.
This guidance has been produced by a Working Group of the University’s Research Governance and Ethics Committee (RGAEC), in consultation with Research & Impact Services, the Information and Digital Group, the Library, and Legal and Compliance Services, together with academic and research student representatives. The guidance was approved by RGAEC on 4 February 2025 and will be reviewed and updated regularly.
Definitions

AI tool: Any type of artificial intelligence system that identifies patterns and structures in data / information / material / concepts / ideas and generates content that resembles humanly created content, including audio, code, images, text, simulations, and videos, in response to instructions (prompts).
Intellectual Property (IP): Creative outputs arising from literary, artistic, industrial and scientific endeavours on the part of humans, such as the results of research or creative projects (see Regulation 28: Intellectual Property Rights).
Research: Any gathering of data / information and facts, or concepts and ideas, to create new knowledge and/or use existing knowledge in a new and creative way to generate new concepts, methodologies and understandings. Activities include planning and design; data collection and analysis; knowledge exchange and impact; reporting and publication; archiving, sharing and linking of data; and re-use of the outputs of research (secondary data).
Researcher: Any staff member, including visiting or emeritus staff, associates, honorary or clinical contract holders, contractors and consultants, and any student (undergraduate, postgraduate taught, postgraduate research), undertaking research on the University’s premises, or using its facilities, or on behalf of the University.
The principles set out in this guidance apply to all Researchers as defined above.
This guidance provides information and advice to help Researchers identify and manage risks when using AI tools in research and sets out responsibilities and expected standards. If a Researcher intentionally or recklessly disregards this guidance, they may face investigation under the Code of Practice for the Investigation of Research Misconduct.
Misconduct in research includes: the fabrication or falsification of research data; improper handling of information on individuals collected during research; or the use of another person’s ideas, work or research data without appropriate acknowledgement. Researchers are responsible for any such practices, even if they occur inadvertently through the use of an AI tool.
Researchers must consider all AI-related risks that may be relevant to their research. These include, but are not limited to:
Integrity
Reproduction of false / inaccurate information.
Factually incorrect assessment of research analysis and results.
Unrepresentative summary of others’ ideas.
Unintended introduction of biases into research analysis or reproduction of offensive content.
Information Security
Harm to individuals and breach of ethical standards through personal data being used or stored inappropriately.
Non-compliance with applicable data protection legislation.
Exposure of data that could be used to breach national security and cyber security or otherwise cause disruption to University operations.
Exposure of research results which could breach funder or third-party terms and conditions / IP restrictions.
Accountability
Incorrect or inappropriate acknowledgement of AI-generated data used in publications, or incorrect referencing of the contribution of AI.
Failure to declare that AI has been used or has not been used in the production of research work, where required.
Improper or inappropriate acknowledgement of ideas derived from the work of others.
Researchers can use Microsoft Copilot, accessed via a Warwick account; this is the recommended AI platform for University staff and students. This tool should be used in accordance with this guidance.
Researchers should always consider the sustainability of their research in line with the University’s Sustainability principles, taking into account the large amounts of energy and resources required to train and apply AI tools.
Guidance
Integrity
AI tools can inadvertently perpetuate or amplify societal biases due to unrepresentative training data or algorithmic design. Researchers who are building or developing an AI tool must mitigate this risk by carefully checking the composition and origins of their training sets and the design of their code.
Data Protection
Researchers must exercise caution and adopt responsible practices when using AI tools and must be mindful of their ethical and legal obligations under UK GDPR and other applicable laws and regulations. The University offers training in the application of UK GDPR, which all Researchers should undertake.
The University strongly advises Researchers not to enter any personal, confidential, third-party, or business-critical data / information / material into an AI tool. There may, however, be exceptional circumstances where this is appropriate; these must be discussed on a case-by-case basis. Researchers who plan to process personal data are required by law to complete a Data Protection Impact Assessment (DPIA) where the type of processing is likely to result in a high risk to the rights and freedoms of the individuals involved in the research.
The following must also be considered:
Because AI tools process very large amounts of data, Researchers can easily and inadvertently use or reveal sensitive information hidden among anonymised data in the tool. Researchers should input only the anonymised data they need for their research purposes, to reduce the likelihood of any ‘linkage’ between datasets in the AI tool that could enable re-identification of the subjects of anonymised data records.
Researchers must review the privacy policy of the AI tool provider for any AI tool they use and the privacy settings of the tool to ensure compliance with relevant Data Protection regulations.
Researchers should check the AI providers’ terms and conditions regarding training of models. As AI models evolve, providers are adapting their terms and conditions to allow them to retain information entered into their products for model training purposes. Researchers must do their best to scrutinise the terms and conditions of the AI providers to understand their rights and any limitations. Some AI tools allow users to “opt out” of giving permission for content to be used to train the model. Wherever this is an option, Researchers must choose to opt out before entering any content into the tool.
AI tools may be susceptible to cyberattacks, potentially exposing sensitive information to unauthorised users. Researchers must employ robust cybersecurity measures to protect data and systems, in line with the University’s guidance and must always check the security settings of the AI tool they are using, especially if entering data into an AI tool outside the UK.
Researchers must abide by the University’s Information Management Policy Framework and should contact the IDG Helpdesk for further advice if required.
Intellectual Property

There are Intellectual Property considerations when using AI tools, because entering content into an AI tool, including confidential or third-party-owned information, could be considered as publicly releasing that information. AI tools may retain the rights to use any content entered to train their models. Intellectual Property, including copyrighted material, can only be used to train an AI model if the rights holder consents or an exemption to copyright applies. However, with new AI tools continually emerging, there is no clear-cut guidance on what counts as an exemption. The user terms of service for each AI tool should outline what rights are granted to developers over any content entered into that tool.
Wherever possible, Researchers should seek to avoid entering large quantities of Intellectual Property, including copyrighted material, generated from research into AI tools. This is particularly important for any Intellectual Property that is unpublished, commercially sensitive or potentially patentable.
Researchers must ensure the necessary permissions are in place for them to input any data / information / material legally into an AI tool. Researchers must only enter third-party content, including copyrighted material, into an AI tool when express permission has been granted by the owner of that Intellectual Property, even if the content is made available under licences such as Creative Commons.
If Researchers generate any research that may contain patentable subject matter, they must not enter this content into an AI tool.
Accountability
Researchers must take full responsibility for the use of AI tools in their research and any data / information / material they have entered into those tools. If students are unsure what this involves, they must consult their supervisors.
Research Misconduct
Researchers must note that content produced by AI tools is not original work. Using AI in research without appropriate declaration, acknowledgement and / or notification will be considered a form of research misconduct. Any breach of ethical principles or legal requirements due to the use of AI in research will be taken seriously and appropriate action taken. Any concerns should be reported to the Head of Research Governance.
The use of AI in research must be declared and clearly explained. Researchers must act with integrity and responsibility to verify the originality, validity, reliability and integrity of outputs created or modified by AI tools. This includes ensuring that any outputs contain accurate information about how the research was created and the role AI played in it.
Funding Applications: Funders advise Researchers and their teams to use AI tools cautiously in developing funding applications, including collaborative applications, and to take account of applicable law.
The Research Funders Policy Group statement on Generative AI tools states that AI tools must be used responsibly when developing funding proposals and should be acknowledged in any outputs. These principles should also be adopted in the development of applications for internal funding calls.
Peer Review of funding applications: The Research Funders Policy Group also states that, for reasons of confidentiality and intellectual property, peer reviewers should not input any content from funding applications into AI tools. Furthermore, they should not use AI tools to develop their peer reviews, as peer review means bringing to bear the reviewer’s unique perspectives, expertise and experience.
Any research involving human participants, their tissue, or their data must undergo ethical review by a Research Ethics Committee. The University Research Ethics Committees ask for information on the collection, analysis and storage of this data. Any use of AI tools must be explained as part of the ethics application; the Committee will consider any ethical implications and advise accordingly.
Any research project which aims to develop AI tools must also consider the future use of the tool and integrate principles of responsible research and innovation into its planning and design.
Researchers may use approved AI tools to assist with ethics applications and associated documentation, such as Participant Information Leaflets (PIL), but they should have a full understanding of the outputs produced, and take responsibility for the ethics application, including the risks outlined. If AI tools are used for translating PILs or other participant-facing documents, the translated versions must be checked (by a native language speaker) for any inaccuracies or misleading information, taking into account the cognitive level and cultural context of the participants. The use of any AI tools to develop ethics applications and associated documentation must be declared and explained as part of the application process.
Researchers must detail any use of AI tools in collecting, analysing or otherwise processing research data in a Data Management Plan. Researchers should explain the reasons for using particular AI tool(s), including an evaluation of the risks associated with using those tools.
Researchers must keep careful records of how they have used AI tools in their research and make sure that this use of AI and the specific tool used is explicitly declared. It is recommended that any sharing of research data generated using AI tools include information on the specific tool used, when the tool was used, and how content was generated, in order to promote reproducibility. Guidance on data sharing agreements should be sought from Research & Impact Services.
Researchers should specify the terms of reuse for any data that they deposit in a repository and consider including explicit information about how the data may be used by AI tools. When depositing data in a repository, Researchers should follow the repository’s guidelines for acknowledging the use of AI tools. Researchers using open-source data generated using AI tools must abide by any conditions for reuse specified by the owner of that material.
Researchers using data collected under consent given by research participants must make sure that the use of the material with an AI tool is in line with the original consent given by the participants.
Authors are accountable for the accuracy, integrity and originality of their research outputs, such as publications, including any use of AI tools. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics, even if this happens inadvertently through the use of an AI tool.
Authors who use AI tools in writing a manuscript, producing images or graphical elements, or in the collection and analysis of data, must disclose how the AI tool was used, and which tool was used.
Researchers must take care to avoid plagiarism. Research outputs must be the authors’ own work, not presenting others’ work or output from AI tools without appropriate referencing. Individual journals and publishers may have specific requirements or guidelines relating to reporting the use of AI tools, and these must be followed where applicable.
AI tools cannot be listed as an author of a paper given the need for accountability; AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work.
The above guidance for research publications also applies to research theses and to material prepared for dissemination or presentation to workshop or conference audiences.
Some journal editors and publishers prohibit the use of AI tools for peer review of publications. This is partly due to the confidentiality issues outlined in this guidance. However, peer reviewers are consulted for their expert knowledge, critical thinking, and assessment of research originality: the use of AI tools poses risks of generating incorrect, incomplete, or biased conclusions about manuscript submissions. Even if a publisher does not have an AI policy, peer reviewers should avoid the use of AI tools.
Peer reviewers also have a role to play as subject experts in spotting inconsistencies or inaccuracies that may occur when AI is used inappropriately to generate content.
More Information
Please visit the Warwick International Higher Education Academy web pages for more information on AI in Education and academic integrity.