
IMP 02: Artificial Intelligence Information Compliance Policy

Information Classification - Public

Policy introduction and purpose

Artificial Intelligence (AI) is an evolving technology that seeks to simulate human intelligence in machines. AI encompasses subfields such as machine learning (ML) and large language models (LLMs), which allow systems to learn and adapt from training data.

The purpose of this AI policy is to ensure the safe and responsible use of AI technologies within the University of Warwick.

Policy regarding the impact of AI on academic integrity is determined by Education Policy and Quality. Please see the Academic Integrity and Artificial Intelligence pages.

AI solutions should be fair: they must not amplify human biases, and they must not become a source of inequality among groups or communities. Diversity and inclusion within societies is a further dimension of ‘AI fairness’, referring to the societal impact of using AI in a specific project.

The policy plays a vital role in protecting University members and information assets, including personal data, administrative and research information.

This policy aims to set out principles that address the challenges associated with AI implementation, such as accountability, transparency and explainability, and data privacy.

Scope and definitions

Artificial Intelligence (AI) refers to the capability of machines to exhibit human-like abilities, including reasoning, learning, planning, and creativity. AI empowers technical systems to perceive their environment, process information, solve problems, and pursue specific goals autonomously. An AI system can adapt its behaviour by learning from the outcomes of its previous actions. For the purposes of this policy, AI is characterised as a system that autonomously generates new outputs in diverse formats such as text, images, or sound (including music, singing, and voice narration), demonstrating its advanced capability to create and interact with various forms of digital media.

The policy covers everyone who has a contractual (formal or informal/implied) relationship with the University, including employees, students, visiting academics, and consultants. Please note that this list is not exhaustive.

For purposes of this policy, we will refer to everyone covered as “members.”

The policy covers all information processed by the University, regardless of ownership or format.

For a glossary of terms used in this policy, refer to the Information Management Glossary of Terms.

Responsibilities (policy and operational)

The Chief Information & Transformation Officer (CITO) retains overall accountability for this policy. The Chief Information Security Officer (CISO) and the Data Protection Officer (DPO) have delegated authority for ensuring the policy meets legal and regulatory requirements; for keeping this policy up to date; and for ensuring that controls, checks, and audits are carried out as part of compliance with this policy.

Operational responsibilities

Adherence to this policy, its supporting standards, and its Standard Operating Procedures (SOPs) is achieved by following the policy principles and the data restriction and security provisions set out below. It is everyone’s responsibility to ensure that they follow this policy.

Users of AI tools are responsible for the input provided, the output generated, and the use of that output. For example, LLM output must be checked for accuracy, because these models can produce plausible but false content (known as ‘hallucinations’).

Role | Function
Designate of Head of Department (e.g. academic lead on research, individuals with delegated authority for information, system administrators) | Responsible - for overseeing compliance with the policy within areas of responsibility
Head of Department | Accountable - for compliance with this policy within departments
Information Risk and Compliance Team (with escalation to CISO and CITO required) | Consult - to discuss organisational-level compliance with the policy
IDG Digital Business Partners | Inform - must be informed of the content of the policy to communicate it to their departments

Principles of the policy

The following clauses must be adhered to:

  • The Information Digital Group (IDG) is committed to supporting the University in its application of AI.
  • The use of AI tools in research is subject to authorisation by a University Research Ethics Committee.
  • Regardless of cost, all new uses of AI tools, products, or services must go through the appropriate procurement processes. Use of AI tools within Professional Services is subject to IDG approval processes. This includes where existing tools are used for a new purpose. Please view the IDG software procurement page.
  • All contents, inventions or creations generated by AI tools during University operations shall be addressed in accordance with Regulation 28 Intellectual Property rights.
  • Adherence to the principles of the Information Management Policy Framework must be applied when using AI tools.
  • AI must not be used to create unfair or inequitable conditions for users and participants.
  • Users must recognise that AI can be biased and therefore produce discriminatory or otherwise inaccurate content which must be factored into the review process of inputs and outputs.
  • Those who use or deploy AI will be responsible for using it in accordance with the acceptable use policy.
  • The use of AI must respect the privacy of users and must not be used to collect, store, access or share their personal data without their consent. This includes not generating a likeness of others in images, videos, audio, or written form without consent.
  • To ensure transparency, the use of AI in any process must be documented and available on request; where the process involves personal data, data subjects must be notified. Where disclosing the use of AI would prejudice the interests of the University, this information may be withheld, but the requirements of the UK GDPR must still be met.

Data restrictions

Certain types of data must never be put into any AI software without prior approval from IDG or the Research Ethics Committee. These include:

  • Passwords and usernames.
  • Personally identifiable information (PII) or other sensitive or confidential material (PII is any information that can be used to confirm or corroborate a person’s identity).
  • Any data related to University Intellectual Property.
  • Any data that is protected by copyright, unless explicit permission for its use with AI tools has been obtained (subject to fair use).
  • Any non-PII data from third parties where the individual has not explicitly consented for their data to be used with AI, except for data that is clearly already in the public domain.
  • Any non-PII data from third parties where the explicit use of the data with AI has not been authorised by a University Faculty Research Ethics Committee application, irrespective of whether the data is in the public domain.

AI must not be used at the University to misrepresent the opinions of any person, living or deceased, or to attribute AI-generated content to a person as their original view.

Where personal data is put into AI, the requirements set out by Legal and Compliance Services must be met, such as the completion of a Data Protection Impact Assessment (DPIA).

Security

  • Consideration must always be given to the confidentiality, integrity, and availability of information assets at the University when utilising new AI technology. Users must understand the data handling practices of the AI vendors that they work with.
  • There must always be an appropriately qualified human involved in the quality assurance of AI output (sometimes referred to as keeping ‘the human in the loop’), so as not to cause harm or distress to individuals or damage to the reputation of the University.
  • As well as damage to individuals and to the University’s reputation, other risks must be considered when using AI and publishing AI output, including potential exposure to legal penalties, financial forfeiture, and material loss resulting from failure to act in accordance with industry laws and regulations.
  • AI must be used in accordance with the Information Management Policy Framework, and research must abide by the research data management policy.

Exemptions

Exemption requests under this policy must be submitted to the CITO or their designate via an exemption request form.

Exemptions to this policy may only be granted by the CITO or their designate.

This policy may affect users who rely on assistive technology or assistive software because of a disability. Such cases will be considered on a case-by-case basis.

Compliance monitoring

It is everyone’s responsibility to report instances of non-compliance with this policy to the Information Risk and Compliance Team. The Information Risk and Compliance Team will use report data, and any other tools made available to it, to monitor compliance with this policy, standards and SOPs. Issues that are deemed to merit escalation or further discussion will be brought to the attention of the Information Security and Data Protection Committee via the CISO. Where non-compliance presents a significant risk, it will be subject to the staff or student disciplinary process.

Internal references

Version/document control

Version | Date created | Date published | Next review | Notes/outcomes
1.0 | May 2025 | 17 June 2025 | June 2026 | A new policy within the Information Management Policy Framework (IMPF)