2. Ethical Considerations and Academic Integrity
This section examines ethical considerations and concerns about academic integrity among Mathematics and Statistics students at the University of Warwick regarding the use of Artificial Intelligence (AI) in their academic work. Drawing on focus group discussions and survey data, we explore students' views on whether using AI constitutes cheating, their worries about fairness, and how they believe AI might affect the value of their degrees.
Focus Groups: Two focus groups of six students each, conducted in June 2024, were divided as follows:
- Group 1 (AI Users): Students who use AI tools in their academic work. Responses from this group are referenced using letters (e.g., Student A, Student B).
- Group 2 (Non-AI Users): Students who do not use AI tools. Responses from this group are referenced using numbers (e.g., Student 1, Student 2).
This division ensured a balanced discussion and captured diverse perspectives while minimising perceived conflict between the two groups.
Group 1: AI Users
Views on Cheating
Internal Conflict
"I don't condone the use of it for assignments because... it rarely gives the correct answers anyway." — [Student D]
Supporting Survey Data:
- 41% of AI users agree or strongly agree that using AI for assignments is cheating, showing internal conflict about ethical use despite personal experience with AI tools. (Source)
Perception of AI's Limitations
"I don't really think I can consider it cheating per se because it just doesn't really give you answers." — [Student C]
Supporting Survey Data:
- 25% of AI users disagree or strongly disagree that using AI for assignments is cheating, indicating that a portion do not view AI use as unethical due to its limitations. (Source)
Copying vs. Learning
"If you're copying and pasting an answer from ChatGPT, that's clearly wrong... morally, that's cheating." — [Student A]
Supporting Survey Data:
- 48% of AI users agree or strongly agree that assignments should be regularly updated to prevent AI misuse, reflecting concerns about maintaining academic integrity. (Source)
Group 2: Non-AI Users
Concerns About Cheating
Strong Ethical Objections
"I dislike the use of AI. I just feel it's cheating." — [Student 2]
Supporting Survey Data:
- 76% of non-AI users agree or strongly agree that using AI for assignments is cheating, demonstrating strong ethical objections. (Source)
Fairness and Detection Issues
"Some people would just use it and get away with it... there's no way to catch someone with it." — [Student 2]
Supporting Survey Data:
- 59% of non-AI users agree or strongly agree that assignments should be regularly updated to prevent AI misuse, indicating concern over the inability to detect AI-assisted cheating. (Source)
Cheating Beyond AI
"People who are going to cheat, they're going to cheat... it's just another tool that's out there." — [Student 4]
Supporting Survey Data:
- 63% of non-AI users agree or strongly agree that assignments should stay as they are, suggesting a belief that academic integrity issues are not solely due to AI. (Source)
In-Depth Analysis of Ethical Considerations
1. Divergent Views on Cheating Among AI Users
Among AI users, there is internal conflict over whether using AI constitutes cheating. Some acknowledge that copying answers directly from AI tools is unethical, while others believe that, given AI's limitations in producing correct answers, its use does not amount to cheating. This divergence highlights the need for clearer guidelines on acceptable AI use.
2. Strong Ethical Stance Among Non-AI Users
Non-AI users predominantly view the use of AI in assignments as cheating, expressing concerns about fairness and the potential for undetected academic dishonesty. This group supports measures to prevent AI misuse, such as regularly updating assignments, to maintain academic integrity.
3. Concerns Over Fairness and Detection
Both AI users and non-users worry about the fairness implications of AI use. The difficulty of detecting AI-assisted work raises concerns that some students could gain an unfair advantage, which could in turn undermine the value of academic qualifications.
4. The Need for Clear Institutional Policies
The differing views on what constitutes cheating with AI tools suggest a lack of consensus and clarity. This underscores the importance of universities establishing clear policies and guidelines to ensure students understand the ethical boundaries regarding AI use in their academic work.
Conclusion and Recommendations
The ethical considerations surrounding AI use in academic settings are complex, with students expressing varied opinions on whether it constitutes cheating. While some AI users believe that responsible use does not infringe on academic integrity, non-users largely perceive any use of AI in assignments as unethical. Concerns about fairness and the potential devaluation of degrees due to undetected AI-assisted cheating are prevalent.
Recommendations:
- Establish Clear Guidelines:
  - Policy Development: Universities should develop and communicate explicit policies on acceptable AI use in academic work to eliminate ambiguity.
- Educate Students on Ethical Use:
  - Workshops and Seminars: Offer educational programmes to inform students about the ethical implications of AI use and how to utilise these tools responsibly.
- Update Assessment Methods:
  - Assignment Design: Regularly update assignments to reduce the likelihood of AI-generated answers being effective, promoting original thought.
- Promote Academic Integrity:
  - Honour Codes: Reinforce the importance of academic honesty through honour codes and integrity pledges.
- Enhance Detection Mechanisms:
  - Technology Solutions: Invest in tools and techniques to detect AI-generated content where feasible.
- Foster Open Dialogue:
  - Discussion Forums: Create spaces for students and faculty to discuss concerns and perspectives on AI use, promoting mutual understanding.
By implementing these recommendations, educational institutions can address ethical concerns, promote fairness, and uphold academic integrity in the era of AI-assisted learning.