Responsible and ethical use of AI

Warwick supports the responsible and ethical use of AI and Generative AI.

To equip our graduates with the skills and knowledge that they need for the future, we have a duty to explore and educate students on the benefits of the judicious use of technologies while also ensuring they understand the risks and ethical considerations.


Responsible use includes the following aspects:

  • Viewing AI as a collaborator and thoughtfully combining it with human intelligence to achieve outcomes.
  • Using AI with honesty, in ethical and defensible ways, with human responsibility taken for the outcomes.
  • Being transparent about AI use, which means explicitly acknowledging that use and clearly documenting and attributing its contributions.
  • Making concerted efforts to maximise safe and secure use and minimise negative impacts.
  • Using AI in ways that mitigate bias and promote fairness, inclusivity and accessibility.

Supporting students' responsible use

The Warwick responsible use approach to AI includes supporting students to engage responsibly with AI. Alongside cultivating students' AI literacy, staff also have a responsibility to help students understand the reasons for their assessment policies and how to follow them. Educators should also teach responsibly with and about AI, and design assessment that is conducive to responsible use of evolving AI tools.

Understanding ethical issues related to AI

Ethics generally refers to shared values about what is appropriate conduct for individuals and groups. Acting ethically includes trying to improve situations for yourself and others, considering and reflecting on the implications of your actions, taking responsibility for your actions, and being respectful of others. The emergence of new tools and technologies leads to adjustments of our practices and the need to integrate them into our systems.

It is important to think about the ethical risks that may be associated with using a given AI tool and what you can do to mitigate those risks.


Key things to consider:

  • What data is an AI system trained on/drawing from and what biases, intellectual property issues and inaccuracies may be associated with the data source(s)?
  • What is the data privacy policy for a given AI tool? What data are you willing to share, and what do you want to keep private and under your control?
  • What are the advantages/disadvantages of using open source vs. commercial technologies? Or of using free vs. paid services?
  • When used in education, how can we maintain equitable access and fair use across all students?
  • What is the environmental impact of using AI technologies?

Resources
