3. Student Attitudes and Acceptance of AI
Overview
This section examines the attitudes and acceptance of AI among Mathematics and Statistics students at Warwick University, exploring willingness to pay for AI tools, perceived utility of AI as a co-pilot or tutor, and overall acceptance of AI in the academic environment. The analysis is based on survey responses from 145 students, 59% of whom reported having used AI for assignments.
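The per-group percentages reported throughout this section are simply the share of each Likert response level within a group. As a minimal sketch of that tabulation (using made-up responses, not the actual survey data):

```python
from collections import Counter

# Hypothetical raw Likert responses (1 = strongly disagree ... 5 = strongly agree),
# keyed by whether the respondent reported using AI for assignments.
responses = {
    "AI users":  [1, 2, 2, 4, 5, 4, 3, 5, 2, 4],
    "Non-users": [1, 1, 2, 1, 3, 2, 1, 2, 1, 3],
}

def percent_breakdown(scores):
    """Return each Likert level's share as a rounded percentage of the group."""
    counts = Counter(scores)
    total = len(scores)
    return {level: round(100 * counts.get(level, 0) / total) for level in range(1, 6)}

for group, scores in responses.items():
    print(group, percent_breakdown(scores))
```

Statements such as "disagree or strongly disagree" then correspond to summing the shares for levels 1 and 2 within a group.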
Willingness to Pay for AI Tools
Key Findings:
- 81% of non-AI users disagree or strongly disagree with paying for AI tools, compared to 41% of AI users.
- 34% of AI users agree or strongly agree that they would pay for AI tools, while only 7% of non-AI users are willing to pay.
- 53% of non-AI users strongly disagree with paying for AI tools, compared to 12% of AI users.
[Interactive chart: Likert response breakdown, viewable separately for AI Users, Non-Users, and Combined.]
AI Integration in Future Assignments
Key Findings:
- 78% of non-AI users disagree or strongly disagree with integrating AI as a co-pilot in future assignments, compared to 44% of AI users.
- 38% of AI users agree or strongly agree that AI should be integrated as a co-pilot, compared to 15% of non-AI users.
- Only 4% of AI users and 3% of non-AI users strongly agree with integrating AI as a co-pilot in future assignments.
[Interactive chart: Likert response breakdown, viewable separately for AI Users, Non-Users, and Combined.]
Perceived Utility of AI for Academic Advice
Key Findings:
- 67% of AI users agree or strongly agree that AI is useful for academic advice, compared to 32% of non-AI users.
- 46% of non-AI users disagree or strongly disagree with the usefulness of AI for academic advice, while only 15% of AI users share this sentiment.
- 27% of non-AI users strongly disagree that AI is useful for academic advice, compared to only 2% of AI users.
- 25% of AI users strongly agree that AI is useful for academic advice, in contrast to 5% of non-AI users.
[Interactive chart: Likert response breakdown, viewable separately for AI Users, Non-Users, and Combined.]
AI as a Private Tutor
Key Findings:
- 64% of non-AI users disagree or strongly disagree that AI is useful as a private tutor, compared to 32% of AI users.
- 47% of AI users agree or strongly agree that AI is useful as a private tutor, while only 17% of non-AI users feel the same way.
- 42% of non-AI users strongly disagree that AI is useful as a private tutor, compared to 9% of AI users.
[Interactive chart: Likert response breakdown, viewable separately for AI Users, Non-Users, and Combined.]
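The charts in this section also offer a "Combined" view. Assuming (as a hypothetical reconstruction, not a documented method) that the combined figure is a respondent-weighted mix of the two groups using the survey's reported 59%/41% split of AI users to non-users, it could be computed like this:

```python
# Assumption: "Combined" = each group's percentage weighted by its share of the
# 145 respondents (59% reported using AI for assignments, 41% did not).
AI_USER_SHARE = 0.59

def combined_percent(ai_users_pct, non_users_pct, ai_share=AI_USER_SHARE):
    """Weight each group's percentage by its share of respondents."""
    return round(ai_share * ai_users_pct + (1 - ai_share) * non_users_pct)

# e.g. "disagree or strongly disagree with paying for AI tools":
# 41% of AI users vs 81% of non-users
print(combined_percent(41, 81))  # weighted overall figure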
Broader Analysis of Student Attitudes Towards AI
1. Significant Divide Between AI Users and Non-Users
The survey data highlights a significant divide in attitudes between AI users (those who have used AI in assignments) and non-users (those who have not). This division appears consistently across every question in this section, with non-users showing much higher levels of strong disagreement. For instance, strong disagreement among non-users peaks at 53% when asked about paying for AI tools, against just 12% among AI users. Even at its lowest, strong disagreement among non-users stands at 27%, for the statement that AI is useful for academic advice, well above the 2% recorded for AI users on the same question. These patterns suggest a broader resistance among non-users, which could stem from limited exposure to, or trust in, AI's role in academic work. However, it is important to note that the non-user category only guarantees the absence of AI use in assignments, not a complete lack of familiarity with AI tools in other contexts.
2. Strong Resistance to AI Integration in Academic Assignments
The data shows strong resistance to the integration of AI into academic assignments, particularly among non-users: 78% of non-users disagree or strongly disagree with the idea of using AI as a co-pilot in assignments, compared to 44% of AI users, indicating some hesitancy even among those who have used AI in their coursework. Notably, very few respondents in either group strongly agree with this idea (4% of AI users and 3% of non-users). This lack of strong agreement suggests a general reluctance to allow AI a more active role in academic tasks. The hesitancy could be rooted in concerns about fairness and academic integrity, particularly where AI's involvement might be seen as undermining the traditional learning process.
3. Contrasting Perceptions of AI's Utility for Academic Advice
There is a marked contrast in how AI users and non-users perceive AI’s utility for academic advice. A substantial 67% of AI users agree or strongly agree that AI is useful for academic advice, compared to only 32% of non-users. Conversely, 46% of non-users disagree or strongly disagree with AI’s usefulness in this context, compared to just 15% of AI users. This stark difference suggests that AI users, likely due to their experience using these tools in assignments, recognise potential benefits that non-users do not trust or perceive.
4. Diverging Opinions on AI as a Private Tutor
Attitudes towards AI's role as a private tutor also vary significantly between AI users and non-users. A large proportion of non-users (64%) disagree or strongly disagree that AI is useful as a private tutor, while 32% of AI users share this view. In contrast, 47% of AI users agree or strongly agree with AI's usefulness in this role, compared to only 17% of non-users. This suggests that AI users, who have already used these tools in their assignments, may be more open to exploring AI's potential in different educational roles, while non-users remain doubtful of AI's ability to provide personalised and effective tutoring.
5. Reluctance to Pay for AI Tools Despite Promised Accuracy
The survey reveals a pronounced reluctance among students to pay for AI tools, even when the tools are assumed to answer maths and stats questions accurately. A striking 81% of non-users disagree or strongly disagree with paying for AI tools, compared to 41% of AI users. Only 7% of non-users express willingness to pay, in contrast to 34% of AI users. This suggests that while experience of using AI tools in assignments increases their perceived value, overall willingness to pay for such tools remains low. The hesitancy, particularly among non-users, may reflect broader doubts about AI's reliability and added value in academic contexts, echoing concerns about academic integrity and the potential impact on learning outcomes. These findings imply that students' reluctance stems not just from cost considerations, but also from scepticism about AI's ability to meaningfully support their academic journey.
Conclusion and Recommendations
The data consistently demonstrates a deep divide in student attitudes towards AI, shaped significantly by their prior experience with the technology. Across all surveyed areas, non-users express marked scepticism and resistance to AI's role in their academic experience. For example, 81% of non-users are unwilling to pay for AI tools, even when these tools are assumed to provide accurate answers, and 78% oppose integrating AI as a co-pilot in future assignments. This pattern of reluctance extends to perceptions of AI’s utility for academic advice and as a private tutor, reflecting a broader distrust or discomfort with AI's potential impact on education.
In contrast, AI users tend to be more open to the potential benefits of AI but still demonstrate significant reservations about its role in critical academic areas. For instance, while 67% of AI users agree that AI is useful for academic advice and 47% find it valuable as a private tutor, a substantial 44% remain opposed to integrating AI as a co-pilot in future assignments, and 41% are unwilling to pay for AI tools. These mixed attitudes suggest that, even among AI users, there is considerable caution about fully embracing AI, particularly in roles that might significantly alter traditional academic practices. This aligns with earlier discussions on ethical considerations and academic integrity, where concerns about fairness, potential cheating, and the devaluation of degrees remain particularly significant.
These patterns suggest that the scepticism towards AI is not just a matter of cost or functionality but a reflection of deeper anxieties regarding its impact on learning quality, fairness, and academic standards. To address these concerns and bridge the gap between AI users and non-users, educational institutions need to foster greater trust, understanding, and clarity about AI's role in academia.
Specific Recommendations
The recommendations below are tailored to address the diverse attitudes towards AI usage across students, supporting open dialogue, transparency, and continuous evaluation:
- Enhance Education and Awareness Initiatives: Develop targeted programmes, including workshops and seminars, to increase familiarity with AI tools and clarify their ethical use in academic contexts. These initiatives should address both the concerns of AI users and non-users, promoting academic integrity while highlighting AI's benefits.
- Develop Flexible Integration Policies: Implement adaptable AI integration policies that respect both traditional and AI-supported assessment models. For example, offering a choice between AI-assisted and traditional assignments could help accommodate different comfort levels with AI, fostering inclusivity.
- Implement Regular Feedback and Evaluation Mechanisms: Establish continuous feedback loops with students and faculty to monitor perceptions and experiences regarding AI use. This feedback should inform policy adjustments to ensure fairness, transparency, and effectiveness.
- Facilitate Open Dialogue and Collaboration: Encourage ongoing dialogue among students, educators, and administrators about the role of AI in education. This could take the form of roundtable discussions, forums, or collaborative projects to bridge understanding gaps and build consensus on AI integration.
- Support Ongoing Research on AI's Long-Term Impact: Align institutional strategies with broader research efforts to explore AI's long-term effects on education. Understanding these implications will help anticipate future challenges and shape policies that safeguard academic standards and the value of degrees.