7. Recognising the Problem and Specific Regulations
Recognising the Problem
Most critically, we must begin by recognising and clearly stating the problem: AI technologies have advanced to a level where they can significantly affect the integrity of academic assessments across disciplines, including both quantitative fields like mathematics and statistics and essay-based subjects. The challenges posed by AI are complex and multifaceted, with no simple or quick solutions. It is imperative that we openly discuss and explore these issues, engaging all stakeholders—students, educators, administrators, and policymakers—in comprehensive dialogue.
A holistic approach is necessary, one that provides the space and resources for institutions to adapt and respond effectively. This includes investing in further research to deepen our understanding of AI's capabilities, limitations, and implications for teaching and assessment practices. Given the large institutional changes required, collaboration across departments and institutions is crucial. Many of the concerns and challenges identified in this study are echoed in other disciplines, underscoring the need for a unified and coordinated response.
Despite the urgency, we must acknowledge that developing effective responses will take time and that easy answers may not be readily available. Therefore, an informed, collaborative approach is essential. By working together, leveraging collective expertise, and committing to ongoing evaluation and adaptation, educational institutions can navigate these challenges with care and rigour. This concerted effort will help ensure that the integration of AI serves to enhance education while upholding the highest standards of academic integrity and preserving the value of academic qualifications.
Specific Regulations and Recommendations for AI Use in Academic Settings
Building upon the core themes of verification, transparency, and ownership, we have developed specific regulations and recommendations to guide the ethical and effective use of AI tools in academia. These guidelines are designed to address the nuanced challenges of AI integration, ensuring they are both actionable and adaptable across different departments and modules.
Regulations
- Assignment Structure: AI Pre-check Requirement
- Lecturers: Lecturers should run assignments through AI tools before distributing them to identify potential issues, such as overly predictable answers or opportunities for AI misuse. Assignments should be adjusted based on these insights to minimise AI dependency. This proactive approach helps maintain the integrity of assessments.
- Teaching with AI: Available AI Training
- Lecturers: Lecturers should undergo regular training to stay updated on AI tools and understand their applications in education. Training should cover how students use AI, methods to critically assess AI outputs, and strategies for integrating AI into teaching without compromising academic integrity. This ensures lecturers are equipped to guide students effectively.
- Maintaining Integrity: Handling AI Misuse
- Lecturers: Lecturers must have clear, consistent protocols for handling suspected AI misuse in student work. These protocols ensure that any cases of AI misuse are addressed uniformly across the institution, maintaining academic standards. Edge cases and degrees of misuse should be defined explicitly so that lecturers can respond fairly and appropriately.
- Assignment Structure: Documentation of AI Use
- Students: Heavy reliance on AI: if a student relies heavily on AI output for content directly included in an assignment, they must save screenshots of these interactions and submit them with the assignment as references or appendices, explaining how and why the AI output was used. General use for background research: if AI is used for general background research or insights, students should retain the chat logs and keep thorough notes, as they would for any other source, and disclose the use in general terms in a statement placed between the end of the piece and the references section.
- Teaching with AI: Clear AI Usage Boundaries
- Students: Explicit rules must be provided on what AI use is permissible in assignments and exams. Unauthorised AI usage should be clearly defined as academic misconduct, with concrete examples given to students at the start of each course. Critically analysing AI output with students is acceptable and encouraged, but it is essential to confirm that students understand where the boundaries lie. Clear and unambiguous instructions to students are vital.
- Maintaining Integrity: Verification Requirement
- Students: Students are responsible for verifying any AI-generated content they use. They must cross-check AI outputs against original sources and be prepared to explain and justify their work in detail, ensuring they fully understand and own their submissions. This requirement should be stated explicitly in assignment instructions.