
How trust networks shape students' opinions about the proficiency of artificially intelligent assistants

Project Overview

The document explores the transformative role of generative AI in education, focusing on how students form opinions about AI tools and decide whether to adopt them. It finds that trust networks, shaped by peer relationships and peer pressure, strongly influence how students judge the proficiency of AI assistants, producing learning outcomes that range from effective engagement to disruptive experiences. The paper outlines key applications of generative AI, such as lesson planning, grading, and fostering critical thinking, showing its potential to streamline educational processes and improve learning outcomes. It also addresses challenges, including ethical considerations and the need for teacher training to integrate AI tools effectively. Overall, the findings suggest that while generative AI holds promise for advancing education, trust dynamics and ethical implications must be navigated carefully to maximize its benefits and minimize disruption in the learning environment.

Key Applications

AI-assisted Grading and Feedback Tools

Context: Utilized in classroom settings for high school and university students, as well as in online higher education discussions, to automate grading and provide feedback on assessments and student contributions.

Implementation: Teachers and institutions deploy AI tools to assist in grading and providing feedback on assessments, including automating the grading of online discussions. AI algorithms are developed and applied to evaluate student contributions and provide consistent feedback.

Outcomes: Increased grading efficiency and consistency in assessment, enabling teachers to save time while providing feedback. However, there may be variations in students' perceptions of AI proficiency, and concerns about the accuracy and fairness of AI grading.

Challenges: Students may overestimate or underestimate the proficiency of AI tools based on peer opinions. Additionally, there are concerns about the integrity of assessments and the potential for misuse of AI tools.

Generative AI Tools for Research, Assignment Completion, and Lesson Planning

Context: Used across various educational levels for research, generating assignment responses, and enhancing lesson planning in teacher education programs.

Implementation: Students and teachers use AI tools to assist in research or generate answers for assignments and to enhance lesson planning. Generative AI is integrated into teacher education curricula to improve lesson planning efficiency and critical thinking.

Outcomes: AI tools can enhance productivity among students and improve lesson planning for teachers, leading to better critical thinking in lesson design. However, over-reliance on these tools may raise issues of academic integrity.

Challenges: Concerns about the integrity of assessments, potential misuse of AI for minimal effort, and the need for training teachers to effectively use AI tools.

Probabilistic Opinion Dynamics Models for Trust in AI

Context: Used to simulate how students form beliefs about AI tools within trust networks, impacting learning outcomes.

Implementation: Monte Carlo simulations are employed to analyze how trust relationships among students affect their learning outcomes regarding AI proficiency.

Outcomes: Insights into how trust dynamics determine whether students converge quickly or slowly on accurate opinions about AI proficiency.

Challenges: Complex dynamics with potential for misinformed opinions to spread rapidly in low-trust environments.
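The paper's probabilistic opinion dynamics model is not reproduced here, but the qualitative effect it studies — trust in peers shaping how student opinions about AI proficiency spread and settle — can be sketched with a simplified, DeGroot-style averaging update run over Monte Carlo trials. Everything below (the update rule, the parameter names, and their values) is an illustrative assumption, not the authors' model:

```python
import random

def run_trial(n=30, trust=0.8, true_skill=0.7, noise=0.25, steps=40, seed=0):
    """One trial: each student repeatedly makes a noisy private observation
    of the AI's true proficiency and blends it with the class-average
    opinion, weighted by how much they trust their peers.
    NOTE: a minimal DeGroot-style sketch; all parameters are illustrative."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]  # random initial beliefs in [0, 1]
    for _ in range(steps):
        peer_mean = sum(opinions) / n
        opinions = [
            (1 - trust) * max(0.0, min(1.0, true_skill + rng.gauss(0, noise)))
            + trust * peer_mean
            for _ in range(n)
        ]
    return opinions

def dispersion(opinions):
    """Variance of opinions: how much students disagree at the end."""
    m = sum(opinions) / len(opinions)
    return sum((o - m) ** 2 for o in opinions) / len(opinions)

def monte_carlo(trust, trials=200):
    """Average final disagreement across many random trials."""
    return sum(dispersion(run_trial(trust=trust, seed=s))
               for s in range(trials)) / trials
```

Under these assumptions, high-trust classes converge on a shared opinion (low dispersion) while low-trust classes remain scattered around their own noisy observations — and because a shared opinion is anchored to noisy signals, a high-trust network can also lock in an inaccurate consensus, echoing the paper's point that misinformed opinions can propagate through trust relationships.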

Chatbots for Student Support

Context: Utilized across various educational settings to assist students with inquiries and learning resources.

Implementation: Chatbot systems are deployed to provide immediate support for students, answering queries and directing them to learning resources.

Outcomes: Enhanced student engagement and immediate availability of support.

Challenges: Ensuring the chatbot understands and responds correctly to diverse student queries.

Implementation Barriers

Perception and Trust Barrier

Students may have differing perceptions of AI proficiency due to peer influence, and low trust networks can lead to erroneous conclusions about AI tools.

Proposed Solutions: Encouraging open discussions about AI tool efficacy, building trust through collaborative projects, and providing unbiased assessments in classroom settings.

Equity Barrier

Access to AI tools can deepen educational inequalities.

Proposed Solutions: Facilitating equitable access to AI tools and promoting collaborative learning environments.

Technological Barrier

Limited access to technology and AI tools in some educational institutions.

Proposed Solutions: Investment in infrastructure and resources to ensure equitable access.

Ethical Barrier

Concerns around data privacy, bias, and fairness in AI applications.

Proposed Solutions: Establishing clear guidelines and ethical frameworks for AI use in education.

Training Barrier

Teachers may lack the necessary training to effectively utilize AI tools.

Proposed Solutions: Providing professional development and training programs focused on AI integration.

Project Team

Yutong Bu

Researcher

Andrew Melatos

Researcher

Robin Evans

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Yutong Bu, Andrew Melatos, Robin Evans

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
