8. Recommendations and Conclusion
Recommendations
- Verification: Collaborative AI Learning Programmes
- Lecturers: Develop educational initiatives in which students and lecturers learn about AI tools together. These programmes should focus on critically evaluating AI outputs, so that all participants learn to verify the accuracy and reliability of AI in academic work. Such initiatives are likely to be more formative than summative, enhancing understanding rather than simply assessing it.
- Transparency: Facilitate Open AI Dialogues
- Lecturers: Encourage ongoing discussions about AI usage in academic settings, where both students and lecturers share their experiences and insights. This can be done through workshops, seminars, and informal discussions that promote transparency about AI use, helping to remove stigma and ensuring AI becomes an integrated, openly discussed part of the learning process. The Academic Development Centre (ADC) should support these efforts by providing examples and guidance for lecturers.
- Ownership: Promote Responsible AI Experimentation
- Lecturers: Provide structured opportunities for students and lecturers to experiment with AI tools in a collaborative setting. These sessions should encourage participants to take ownership of their learning by critically assessing AI outputs, understanding the implications of AI-generated content, and discussing the ethical responsibilities associated with AI use.
- Verification: AI Guidance for Specific Uses
- Students: Provide students with detailed instructions on how to effectively use AI for academic tasks. This includes using AI for summarising notes, finding quotes, creating personalised learning experiences, or understanding complex topics. Students must also be made aware that AI tools are not infallible and that it is their responsibility to verify the accuracy and relevance of AI-generated content. Guidance on AI usage should be tailored to specific departments or modules, as use cases will differ across disciplines.
- Transparency: Contextual AI Use Disclosure
- Students: Students must disclose their use of AI in assignments only if they rely heavily on AI-generated content or include it directly in their work. In such cases, they should cite the AI tool as they would any other source. If AI is used for general research or understanding, it is preferable to cite the sources consulted after using the AI, rather than the AI itself. Keeping chat logs is recommended where questions of authenticity might arise, though it is not mandatory for general tool usage. Promote responsible experimentation by fostering honesty and transparency between lecturers and students, ensuring that expectations are clearly understood. Consider including questions in the assessment, such as 'Did you use AI?', 'Why?' and 'Which tool?'
- Ownership: Accountability in Academic Work
- Students: Students must be prepared to explain and defend any content they submit, regardless of whether AI tools were used in its creation. They should be educated on the importance of taking full ownership of their work, ensuring they deeply understand the material and can justify their submissions. To support this, academic policies should allow for viva examinations on any assessment, with reasonable adjustments made where necessary.
- Verification: Collaborative Verification Workshops
- Both Students and Lecturers: Implement workshops where both students and lecturers can practise verifying AI-generated content together. These sessions should focus on developing skills in cross-referencing AI outputs with traditional sources, ensuring that all participants are adept at discerning the accuracy and reliability of information produced by AI tools. Collaboration with institutions like the Warwick International Higher Education Academy (WIHEA) and the Academic Development Centre (ADC) can provide additional resources and frameworks for these workshops.
- Transparency: Promote Mutual Transparency Initiatives
- Both Students and Lecturers: Create joint initiatives where students and lecturers share their experiences with AI in a transparent manner. This could involve co-authored case studies or presentations on how AI was used in specific academic contexts. Such initiatives help establish norms around transparent AI usage and demonstrate the importance of honesty and openness in integrating AI into academic work.
- Ownership: Joint AI Experimentation Projects
- Both Students and Lecturers: Organise group projects that involve both students and lecturers working together on AI-based assignments or research. These projects should encourage participants to take full ownership of their roles, critically engage with AI outputs, and reflect on the ethical implications of their work.
Conclusion
By adhering to these regulations and recommendations, academic institutions can foster a responsible and transparent environment for AI usage. This approach ensures that AI tools are integrated in a way that enhances learning and upholds the integrity of academic work, guided by the central themes of verification, transparency, and ownership.
Throughout this project, we have thoroughly explored the complexities of integrating AI into mathematics and statistics education. Our detailed analyses and discussions, available in the full report and expanded upon on our Recommendations Page, address many specific issues and provide in-depth insights into the challenges and opportunities presented by AI.
These findings serve as a clear demonstration of the urgent need for action, ongoing research, and open conversation within the academic community. The rapid advancement of AI technologies necessitates a proactive and collaborative approach to ensure that educational practices evolve in step with technological capabilities.
We hope that this project acts as a starting point for educators, students, administrators, and policymakers to engage in meaningful dialogue and develop sustainable strategies for AI integration. By embracing the potential of AI while remaining vigilant about its challenges, we can enhance the educational experience and prepare students for a future where AI plays a significant role in professional and academic environments.
In conclusion, it is imperative that we continue to work together to refine these recommendations, adapt to new developments, and uphold the highest standards of academic integrity. The themes of verification, transparency, and ownership form the foundation of this commitment: an institutional ethos that embraces AI's potential while safeguarding the integrity of academic work, and the guiding framework within which we must operate as we navigate the complexities of AI in education.