Used thoughtfully, AI tools can support academic practices:
Generative AI tools will continue to shape the way we engage in learning and teaching in higher education, both in ways similar to previous technologies such as spell-checkers, online libraries and calculators, and in new and as-yet-unknown ways.
It is not practical or beneficial to try to ignore, ban or eradicate AI. At Warwick, the preferred path is to develop assessment, teaching and learning in ways that promote responsible engagement with AI. As the dust settles and staff move beyond early responses to the rapid improvement of AI technologies, it will be important to consider carefully the purposes of assessments and the forms of evidence of learning that are valued in relation to modules, units and students.
The emergence of generative AI technologies like ChatGPT sharpens existing questions about how we can refine assessment in relation to what matters to staff, students, and society more broadly. It prompts us to consider alternative assessment options, including those that most closely align with the educational principles promoted at Warwick. Pedagogical approaches that value assessment that promotes students' understanding, and that is meaningful and valuable in its own right (see our guidance on Academic Integrity), continue to be relevant in this changing context, and may help staff and students to navigate some of the challenges that are emerging.
By proceeding knowledgeably, thoughtfully and ethically in all aspects of learning, teaching and assessment, we will be well placed to navigate the challenges and opportunities of a dynamic technological landscape.
There are also some pressing concerns and practical challenges that apply to existing approaches. For example:
- What immediate changes are needed to the way assessment is designed and conducted?
- How do the rules and principles of academic integrity apply to ChatGPT and other such technologies?