2. Impact on Academic Standing and Degree Value
Overview
This section examines the concerns of Mathematics and Statistics students at Warwick University regarding AI's impact on the value and credibility of their degrees. Based on a survey of 145 students, with 59% reporting they have used AI for assignments, the data reveals distinct differences in perceptions between AI users and non-users. It highlights varying levels of concern about AI's potential to undermine degree value by making assignments easier to complete and posing challenges to traditional assessment methods. The analysis also explores broader apprehensions related to fairness, integrity, and trust in the context of AI's growing role in academic settings.
Is AI Devaluing Your Degree?
Key Findings:
- 83% of non-AI users believe AI use in assignments could undermine the value of their degree.
- 45% of AI users share this concern, although a significant minority (35%) disagree.
- 47% of non-AI users strongly agree with this sentiment, compared to just 12% of AI users, indicating more intense concern among non-AI users.
Note: Click on the graph labels (e.g., "AI Users", "Non-Users", or "Combined") to view each group's data separately.
The Impact of AI Accuracy on Degree Value Perception
Key Findings:
- There is broad agreement (63%) that if AI can always answer assignment questions correctly, it undermines the degree's value, with similar concern levels among both AI users (64%) and non-AI users (63%).
- 21% of AI users strongly agree compared to 36% of non-AI users, again suggesting non-AI users are more likely to perceive a significant threat to degree value.
- Disagreement overall remains relatively low at 22%, reflecting a shared concern across both groups about the risk of AI undermining degree value.
Should Traditional Assignments Remain Unchanged Amid AI Advances?
Key Findings:
- 63% of non-AI users agree or strongly agree that assignments should remain unchanged despite concerns around academic integrity.
- 48% of AI users also support maintaining current assessment practices, though they continue to show a more flexible stance towards integrating AI tools.
- This stance is demonstrated by the notable proportion of AI users (33%) who disagree with keeping assignments the same, reflecting a preference for adapting methods to include AI capabilities while maintaining academic standards.
Broader Analysis of AI's Impact on Academic Standing and Degree Value
1. Strong Concerns About the Devaluation of Degrees
The survey data reveals a pronounced concern among non-AI users about the potential devaluation of their degrees due to AI usage, with 83% expressing that AI in assignments could undermine degree value. This concern is intensified by the finding that nearly half (47%) of non-AI users strongly agree with this sentiment, compared to just 12% of AI users. The significant difference in levels of strong and overall agreement indicates that non-AI users view AI as a more substantial threat to the integrity of their academic achievements, possibly due to a lack of familiarity or trust in the technology's role in education.
2. Contrasting Reactions to AI’s Capability to Answer Correctly
Both AI and non-AI users express concern that an AI capable of answering assignment questions correctly would undermine the value of their degree, with 63% agreeing overall. Among AI users, concern rises from 45% to 64% when specifically considering AI's ability to solve assignments accurately. This suggests that AI users are particularly worried about AI making academic work too easy, potentially diminishing the perceived value of effort and achievement.
However, for non-AI users, degree devaluation concern decreases from 83% to 63% when it is assumed AI can solve maths and stats questions accurately. This decline indicates that while non-AI users are consistently more concerned about AI's general impact on degree value, they may be less worried about AI's ability to answer assignments correctly. This suggests that non-AI users are more broadly apprehensive about AI's overall influence on education, rather than its specific capabilities in providing accurate answers.
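As a quick consistency check, the combined figures quoted in this section can be reproduced as weighted averages of the two groups' rates, using the survey's own split of 145 respondents (59% AI users, 41% non-users). This is a minimal sketch with the percentages quoted above; the small gaps against the quoted combined figures (60% and 63%) arise from rounding the inputs.

```python
# Weighted-average check of the combined agreement figures, using the
# survey's own headline split: 145 respondents, 59% of whom used AI.
ai_share = 0.59
weights = {"ai": ai_share, "non_ai": 1 - ai_share}

def combined(rates):
    """Combine group-level agreement rates into an overall rate."""
    return sum(weights[group] * rate for group, rate in rates.items())

# "AI could undermine degree value": 45% of AI users, 83% of non-users agree.
general_concern = combined({"ai": 0.45, "non_ai": 0.83})

# "If AI always answers correctly, it undermines the degree": 64% vs 63%.
accuracy_concern = combined({"ai": 0.64, "non_ai": 0.63})

print(f"General concern, combined: {general_concern:.0%}")   # ~61%
print(f"Accuracy concern, combined: {accuracy_concern:.0%}") # ~64%
```

The check confirms the pattern described above: combining the groups narrows the apparent gap, because the sharp fall among non-users (83% to 63%) is offset by the rise among AI users (45% to 64%).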
3. Impact of AI on Assignment Methods
The survey question "Assignments should stay as they are. Students should know that using AI tools like ChatGPT devalues their university education" is deliberately framed to assume that using AI tools like ChatGPT diminishes the value of degrees. This framing was chosen to elicit strong reactions and to probe the depth of students' concerns about the future of traditional take-home assignments. Among non-AI users, 63% agree or strongly agree that assignments should remain unchanged. Disagreement within this group is relatively low, with only 17% opposing the preservation of current methods. This suggests that non-AI users see value in retaining traditional assignments, even in the face of challenges in upholding academic standards and potential devaluation caused by AI's influence.
In contrast, AI users present a more varied perspective. While 48% support keeping assignments unchanged, a significant 33% disagree, indicating a stronger inclination towards adapting assessment methods in response to AI advancements. This division among AI users reflects a recognition of both the potential challenges posed by AI and the opportunities it offers for innovation in educational practices.
4. Balancing Support for AI-Proofing Strategies and Traditional Methods
The data suggests a nuanced stance across both groups when it comes to balancing AI-proofing strategies with the maintenance of traditional assignments. Among non-AI users, 63% prefer to keep assignments unchanged, while a slightly smaller majority (59%) supports measures to prevent AI misuse. This balance indicates that while non-AI users generally lean towards preserving traditional assessment methods, they are also open to implementing safeguards that reinforce these methods against potential threats posed by AI.
AI users show support, though at lower levels, both for maintaining current methods (48%) and for AI-proofing strategies (48%). However, with 33% of AI users disagreeing with keeping assignments unchanged, there is a clear inclination towards flexibility and adaptation. This reflects a broader trend among AI users to balance innovation with caution, recognising both the opportunities presented by AI and the need for thoughtful adaptation in assessment practices.
Conclusion and Recommendations
The data shows significant concern among students about AI's potential to devalue their degrees, with 60% of respondents believing that AI tools could negatively impact the value of academic qualifications. This concern rises only slightly to 63% when considering AI's ability to consistently answer assignments correctly, suggesting that many students are already apprehensive about AI's impact, regardless of future technological improvements. Among non-AI users, 83% worry that AI could undermine their degree's value, compared to 45% of AI users, highlighting both immediate concerns and a deeper mistrust of AI's role in education, particularly among those not already using AI in their assignment work.
Despite these worries, 54% of respondents prefer to keep assignments unchanged, reflecting hesitation to make rapid changes to traditional assessment methods without compelling evidence. Meanwhile, 53% support exploring AI-proofing measures to protect degree integrity, suggesting a need for strategies that address these concerns without radically altering current practices.
To navigate these mixed views, institutions should foster open dialogue and conduct further research to better understand student concerns, particularly as evidence shows that familiarity with AI can alleviate some fears but not eliminate them. Developing data-driven trials and experiments to explore new approaches to assignments will be crucial to identifying the line where AI integration supports learning without compromising degree value. This balanced approach will help build confidence, enhance AI literacy, and adapt educational practices to the evolving capabilities of AI, while recognising the unique challenges faced in fields like Mathematics and Statistics.
Specific Recommendations
- Evaluate the Impact of AI Tools on Academic Assignments: Institutions should systematically assess how state-of-the-art (SOTA) Generative AI tools interact with current assignment formats. This involves:
  - Conducting controlled tests where AI tools are used to complete a sample of existing assignments, measuring the accuracy, relevance, and quality of AI-generated responses (see our analysis for Mathematics and Statistics here).
  - Comparing AI-generated answers to those produced by students and to the standards expected by lecturers, identifying any significant discrepancies in reasoning, methodology, or quality.
  - Using insights from these comparisons to revise assignment formats, ensuring that they require critical thinking, originality, and problem-solving skills that are less likely to be replicated by AI.
  - Regularly updating these assessments to keep pace with advancements in AI capabilities, continuously refining educational strategies to safeguard academic integrity.
- Develop and Implement Clear Guidelines on AI Use: Create and enforce specific policies for AI use in assignments. These guidelines should:
  - Define what constitutes acceptable and unacceptable use of AI tools in academic work.
  - Provide concrete examples of acceptable AI assistance versus misuse to clarify expectations for students.
  - Ensure these guidelines are communicated effectively to both students and faculty to maintain consistency and fairness.
- Encourage Open Dialogue: Promote ongoing discussions between students and educators about AI’s role in assignments. This should involve:
  - Regular meetings or forums where students can voice their concerns and questions about AI use.
  - Educational sessions for faculty on how to address AI-related issues and maintain academic standards.
  - Developing a shared understanding of AI's impact on academic integrity and aligning expectations across the institution.
- Evaluate and Adapt AI-Proofing Strategies: Continuously review and improve AI-proofing measures based on performance data. This includes:
  - Conducting periodic reviews of how AI tools perform on academic assignments to identify any gaps in existing strategies for detecting an overreliance on AI in academic work.
  - Adjusting AI-proofing measures, such as assignment design and assessment methods, based on these reviews.
  - Incorporating feedback from both students and faculty to refine strategies and ensure they remain effective against evolving AI capabilities.
- Promote Further Research: Invest in research to understand the long-term effects of AI on academic qualifications. This research should:
  - Examine how AI affects students' perceptions of academic value and integrity over time.
  - Inform the development of future policies and practices that balance AI integration with maintaining academic standards.
  - Guide the continuous adaptation of assessment methods and AI-proofing strategies to keep pace with technological advancements.
By implementing these recommendations, educational institutions can address concerns about the devaluation of degrees due to AI while maintaining traditional assignment structures as they build a clearer picture of AI's impact on education.