What Explains Teachers' Trust of AI in Education across Six Countries?
Project Overview
This project examines the role of artificial intelligence-based educational technology (AI-EdTech) in K-12 education, focusing on teachers' trust in these technologies as a critical factor in their adoption. Trust is shaped significantly by teachers' self-efficacy, their understanding of AI, and contextual factors such as cultural values and geographic location. Key applications of AI-EdTech include enhancing learning outcomes and improving teaching practice, yet concerns about reliability and about integrating AI into current educational systems remain prevalent. The findings underscore the need for professional development programs that build educators' understanding of and trust in AI-EdTech, and for culturally sensitive approaches to implementation that support effective adoption across diverse educational environments.
Key Applications
AI-EdTech
Context: K-12 education across six countries (Brazil, Israel, Japan, Norway, Sweden, USA)
Implementation: Survey of 508 K-12 teachers assessing their trust in AI-EdTech and factors influencing it.
Outcomes: Teachers with higher self-efficacy and a better understanding of AI perceived greater benefits of AI-EdTech and reported fewer concerns about it, which was associated with higher trust.
Challenges: Concerns about AI reliability, lack of widespread adoption, and varying cultural perceptions of AI.
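The correlational finding above can be illustrated with a minimal sketch. Note that the data, variable names, and scoring below are hypothetical, not the study's actual dataset or analysis code; this only shows how an association between survey measures such as self-efficacy and trust might be quantified.

```python
# Hypothetical illustration: computing a Pearson correlation between
# teachers' AI self-efficacy scores and their trust in AI-EdTech.
# The survey responses below are invented for demonstration only.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical 1-5 Likert-scale responses for five teachers.
self_efficacy = [2, 3, 3, 4, 5]
trust = [1, 3, 2, 4, 5]

r = pearson_r(self_efficacy, trust)
print(round(r, 3))  # a value near +1 indicates a strong positive association
```

In practice, a study of this kind would also report significance tests and control for covariates (e.g., country, teaching experience), but the core association is a simple bivariate correlation like the one computed here.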
Implementation Barriers
Cultural Barrier
Cultural differences influence teachers' attitudes toward AI-EdTech, affecting trust and adoption.
Proposed Solutions: Implement culturally sensitive professional development programs to address specific concerns and enhance understanding.
Knowledge Barrier
Insufficient understanding of AI and its applications among teachers leads to mistrust.
Proposed Solutions: Provide targeted professional development to improve AI literacy and self-efficacy.
Technological Barrier
Concerns regarding the reliability of AI-EdTech and its integration into existing educational practices.
Proposed Solutions: Ensure transparency and explainability of AI algorithms to foster trust.
Project Team
Olga Viberg, Researcher
Mutlu Cukurova, Researcher
Yael Feldman-Maggor, Researcher
Giora Alexandron, Researcher
Shizuka Shirai, Researcher
Susumu Kanemune, Researcher
Barbara Wasson, Researcher
Cathrine Tømte, Researcher
Daniel Spikol, Researcher
Marcelo Milrad, Researcher
Raquel Coelho, Researcher
René F. Kizilcec, Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Olga Viberg, Mutlu Cukurova, Yael Feldman-Maggor, Giora Alexandron, Shizuka Shirai, Susumu Kanemune, Barbara Wasson, Cathrine Tømte, Daniel Spikol, Marcelo Milrad, Raquel Coelho, René F. Kizilcec
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI