1. Experiences and Attitudes Towards AI
Experiences and Attitudes Towards AI Among Mathematics and Statistics Students
This section explores the diverse experiences and attitudes towards Artificial Intelligence (AI) among Mathematics and Statistics students at Warwick University. Drawing on focus group discussions and, where essential, supporting survey data, we examine how students perceive AI tools in their academic journey, highlighting both positive and negative sentiments from AI users and non-users. The insights provide a nuanced understanding of the role AI currently plays in education and how it is shaping student experiences.
Focus Groups: Two focus groups with six students each, conducted in June 2024, were divided into:
- Group 1 (AI Users): Students who use AI tools in their academic work. Responses from this group are referenced using letters (e.g., Student A, Student B).
- Group 2 (Non-AI Users): Students who do not use AI tools. Responses from this group are referenced using numbers (e.g., Student 1, Student 2).
This division ensures a balanced discussion and captures diverse perspectives without perceived conflict.
Group 1: AI Users
Positive Experiences with AI
Students who have used AI tools often express appreciation for the support these technologies offer in understanding complex concepts and enhancing their learning experience.
Enhancing Understanding and Learning Efficiency
"I think that AI or ChatGPT in specific tasks actually helped me enhance my learning experience. One thing that it has an edge over asking your professors is the ability to clarify things that you don't really understand in the moment. For example, when I've been reading through my lecture notes and noticed a contradiction, I can scrutinise ChatGPT's answer line by line and ask it again, like why is it doing this? So I think that ChatGPT is really good in the sense that I can get a fast, real-time response when I don't really understand something." — Student D
Student D values AI for providing immediate, interactive explanations that enhance understanding of complex material. The ability to engage in iterative questioning with AI tools facilitates deeper comprehension outside of classroom settings. This suggests that AI can effectively supplement traditional learning resources by offering accessible support when professors are not available.
Assisting with Coding and Study Planning
"It's useful for coding; if you get an error message, you can just copy and paste it into ChatGPT and it'll give you an idea of how to correct it. It's really good at analysing lecture notes. We got lecture notes for one module that's 200 pages long; it's quite useful at filtering out the information I need. This year I tried using ChatGPT to tell me what to do each day... it gave me a balance for the 11 exams that I had and helped me prepare for it. So I find that quite useful." — Student A
Student A utilises AI to enhance efficiency in coding and organising study routines. AI tools assist in debugging code by quickly providing suggestions to correct errors, which streamlines the coding process. Additionally, AI aids in processing extensive lecture materials by filtering out essential information, and helps in creating structured revision plans. This multifaceted use of AI indicates its potential to support students in managing complex academic workloads.
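The debugging workflow Student A describes can be made concrete with a small, hypothetical sketch in Python: capture the full error text a failing piece of code produces, so it can be pasted verbatim into a chatbot alongside the offending snippet. The helper name `run_and_capture` is illustrative, not part of any real tool.

```python
import traceback

def run_and_capture(fn):
    """Run fn; on failure, return the traceback text a student might paste into a chatbot."""
    try:
        fn()
        return None
    except Exception:
        # format_exc() returns the same text Python would print to the terminal
        return traceback.format_exc()

# A typical beginner mistake: indexing past the end of a list.
err = run_and_capture(lambda: [1, 2, 3][5])
print(err)  # traceback text ending with an IndexError message
```

In practice students simply copy the error from the terminal, but the point is the same: the more complete the pasted traceback, the more specific the model's suggested fix tends to be.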
Improving Problem-Solving Efficiency
"I think it's served as quite a useful tool to replace Googling things... ChatGPT maybe gives you a method that helps you find it a little bit faster... it'll give you that little tip you need in the question to get to the next part." — Student C
Student C finds AI tools beneficial in expediting the problem-solving process. Instead of spending extensive time searching for information online, AI provides quick access to methods or hints that can help overcome obstacles in assignments. This efficiency allows students to focus more on understanding and applying concepts rather than on the search process itself.
Negative Experiences with AI
Despite using AI tools, some students express dissatisfaction with their performance, particularly in mathematical contexts.
Limitations in Mathematical Problem-Solving
"ChatGPT, it's hilariously bad at maths. It's very rarely provided anything more useful than just guessing and checking." — Student F
"It's hopeless at answering any of my assignment questions... most stats assignments, it's pretty poor." — Student A
Both Student F and Student A highlight significant limitations of AI tools in handling advanced mathematical problems. The AI's inability to solve mathematical assignments accurately undermines its usefulness for students in mathematics-intensive courses. This frustration stems largely from models like GPT-3.5, which exhibit poor mathematical capabilities; even GPT-4-level models, such as the GPT-4o model available at the time, performed inconsistently in mathematics and statistics. This inconsistency erodes trust in AI's reliability for complex problem-solving, leading students to fall back on traditional methods or their own understanding instead.
Group 2: Non-AI Users
Negative Experiences and Attitudes Towards AI
Students who have not used AI tools often express strong reservations, citing personal experiences or general distrust in AI's capabilities.
Disillusionment After Initial Trials
"I experimented a little bit with AI last year to try and help me understand some of the stuff that was going on, but I quickly realised that I'm a lot better at maths than what ChatGPT's putting out. It gave me some real nonsense, and ever since then I've just been like, well, there's no point really." — Student 6
"I tried it a little bit, and it didn't really work. The only things I could ever get it to do correctly were like if it could write the proofs for very well-known or named theorems... It couldn't do anything that wasn't extremely well-known. Anything else, it just couldn't work." — Student 3
Both Student 6 and Student 3 found AI tools inadequate for their academic needs, particularly in solving complex or less common mathematical problems. Their initial experiences with AI resulted in outputs that were either nonsensical or limited to well-known theorems, which did not meet the demands of their coursework. This inefficacy discouraged further use and reinforced their perception that AI tools are not valuable for their studies.
Ethical Concerns and Dislike of AI
"I've never touched any of the AI stuff. I've always had a dislike of it... I feel it's cheating... there's no way to catch someone with it. So it feels like there's sort of no winning with it. Some people will just use it and get away with it." — Student 2
Student 2's aversion to AI is rooted in ethical concerns about academic integrity. The belief that using AI constitutes cheating, coupled with the challenge of detecting AI-assisted work, fuels distrust and rejection of AI tools. This perspective underscores the importance of clear guidelines and policies to address ethical issues surrounding AI use in academia.
Concerns About Equity and Accessibility
"If AI does in the future get loads better and is actually useful at solving our assignments, I don't think it would be free to access. I think it'd be behind some massive paywall, and then it would create a situation where only students from really high wealth backgrounds are then able to access that, and they'd have an extra leg up... I wouldn't want to see that disrupted by something like this." — Student 6
Student 6 expresses concern that advancements in AI could exacerbate socioeconomic disparities among students. If effective AI tools become costly, only those who can afford them will benefit, potentially widening the achievement gap. This concern is especially relevant in the current landscape: access to the most advanced models, such as OpenAI's o1-preview, is limited by strict rate caps even for paying users, so those with the financial means to maintain multiple paid accounts could gain a significant advantage. Notably, o1 rivals human experts on reasoning-heavy benchmarks, markedly outperforming earlier models such as GPT-4o on exams like the 2024 AIME, where it solved up to 93% of problems when advanced sampling techniques were used. If this level of performance were restricted to wealthier students, it could indeed tilt the academic playing field in favour of those with greater financial resources.
In-Depth Analysis of Experiences and Attitudes
Understanding the Divergent Perspectives
The focus group discussions reveal a clear divergence in experiences and attitudes between AI users and non-users, shaped by their interactions with AI tools and their perceptions of AI's capabilities and limitations.
Positive Engagement Among AI Users
AI users generally view AI tools as valuable supplements to their learning. They leverage AI for immediate assistance in areas such as coding, study planning, and understanding complex concepts. The interactive nature of AI tools allows them to engage with material in a personalised manner, enhancing their overall learning experience. For instance, students appreciate the ability to receive instant feedback and clarification on topics they find challenging, which may not always be possible through traditional academic support channels.
Challenges with Mathematical Problem-Solving
Despite the benefits, both AI users and non-users highlight significant shortcomings of AI tools in mathematical and statistical problem-solving. This area is particularly critical for Mathematics and Statistics students, and AI's limitations here greatly influence their overall perception of its usefulness. The widespread belief that AI tools are ineffective in these fields aligns with the documented performance issues of GPT-3.5, the most commonly used model during the focus groups. GPT-3.5 struggled with mathematical reasoning, lacked native code execution capabilities, and often failed to maintain logical consistency. Even with the introduction of GPT-4 models, which exhibited some improvements, performance remained inconsistent, leading to continued uncertainty about their reliability.
These inconsistencies result in frustration and distrust among students, as they cannot rely on AI tools to consistently deliver correct answers or methodologies for their assignments. The risk of obtaining incorrect solutions without the ability to verify them independently diminishes the appeal of using AI for mathematical tasks. This aligns with the experiences shared by students like Student F and Student A, who found AI tools inadequate for their coursework.
Impact of Initial Experiences on Non-AI Users
For non-AI users, initial unsuccessful attempts to use AI tools reinforced their belief that AI is inadequate for their academic needs. The inability of AI to solve advanced or non-standard problems pertinent to their coursework confirmed their scepticism and deterred further exploration of AI's potential benefits. Students like Student 6 and Student 3 felt that AI tools did not offer any significant advantage over their own problem-solving abilities, leading to disinterest in integrating AI into their academic practices.
Ethical Concerns and Academic Integrity
Ethical considerations are paramount among non-AI users, who fear that AI use constitutes cheating and undermines academic integrity. The difficulty in detecting AI-assisted work exacerbates these concerns, as they worry about unfair advantages and the potential devaluation of academic achievements. This perspective, as expressed by Student 2, highlights the need for clear institutional policies and guidelines regarding the acceptable use of AI in academic settings. Addressing ethical apprehensions is essential to fostering a transparent and fair educational environment where students feel confident in the integrity of their assessments.
Accessibility and Equity Issues
Students express apprehension about the future accessibility of advanced AI tools. If effective AI resources become expensive or restricted, there is a risk of widening the socioeconomic gap among students. Those who cannot afford access may be at a disadvantage, potentially impacting their academic performance and opportunities. Ensuring equitable access to AI tools is crucial to prevent exacerbating existing inequalities. Institutions may need to consider providing access to AI resources or supporting programmes that enable all students to benefit from technological advancements, as suggested by concerns from Student 6.
Importance of AI Literacy and Awareness
The varying experiences and attitudes suggest that AI literacy plays a significant role in shaping perceptions. Importantly, the effectiveness of AI tools depends heavily on the specific model being used, a nuance that few students fully appreciated or discussed. Students who understand not only the general capabilities and limitations of AI but also the distinct strengths and weaknesses of different models are better equipped to use them effectively and to set realistic expectations. Enhancing AI literacy through education and training can empower students to make informed decisions about integrating AI into their academic work. For example, a student who recognises that a given model is reliable for tasks such as summarising lecture notes but may falter in complex mathematical problem-solving can adjust their use accordingly.
Conclusion and Recommendations
The focus group discussions reveal a complex landscape of experiences and attitudes towards AI among Mathematics and Statistics students. While AI users appreciate the benefits of AI tools in enhancing learning efficiency and providing immediate assistance, limitations in mathematical problem-solving and ethical concerns hinder broader acceptance. Non-AI users are particularly influenced by initial negative experiences and apprehensions about academic integrity and equity.
Recommendations:
- Develop Clear Ethical Guidelines:
  - Establish university-wide policies on acceptable AI use, explicitly addressing concerns about cheating and academic integrity.
  - Provide guidance on how AI tools can be used responsibly in academic work, clarifying what constitutes appropriate assistance versus misconduct.
- Promote AI Literacy and Critical Evaluation Skills:
  - Offer workshops and resources to educate students about AI capabilities, limitations, and best practices for use.
  - Teach students how to critically assess AI-generated content, verify solutions independently, and understand when AI assistance is beneficial or potentially misleading.
- Ensure Equitable Access to AI Tools:
  - Provide access to advanced AI tools through institutional subscriptions or support programmes to prevent socioeconomic disparities.
  - Consider subsidising access for students who may face financial barriers, ensuring all students have the opportunity to utilise AI resources.
- Foster Open Dialogue and Engagement:
  - Create forums for students to share experiences, discuss concerns, and exchange best practices related to AI use.
  - Engage both AI users and non-users to bridge understanding, address misconceptions, and collaboratively explore the role of AI in education.
- Monitor and Evaluate AI Integration in Education:
  - Regularly assess the impact of AI on learning outcomes and adapt teaching methods accordingly.
  - Incorporate AI in ways that enhance learning without undermining critical thinking and problem-solving skills, ensuring that AI serves as a tool rather than a crutch.
By addressing these recommendations, the university can support students in navigating the evolving role of AI in education, maximising its benefits while mitigating challenges. This approach aims to foster an inclusive academic environment where AI serves as a tool to enhance learning for all students.