"I Would Never Trust Anything Western": Kumu (Educator) Perspectives on Use of LLMs for Culturally Revitalizing CS Education in Hawaiian Schools
Project Overview
This document examines the use of generative AI, specifically large language models (LLMs), in Hawaiian public education, particularly in Kaiapuni programs dedicated to Hawaiian culture and language. It outlines promising applications of LLMs for curriculum development and educational content, alongside significant challenges, including cultural misalignment and the reliability of AI-generated information. The research underscores the need for culturally responsive AI tools that reflect Hawaiian values and pedagogical approaches, and for addressing educators' skepticism about AI's capacity to honor Indigenous knowledge systems. Ultimately, the findings stress that successful integration of AI in education requires a nuanced understanding of cultural context and a commitment to accuracy and respect for local traditions.
Key Applications
Large Language Models (LLMs) for curriculum development
Context: Hawaiian public schools with Kaiapuni programs, targeting educators in K-12 settings
Implementation: Surveys and interviews with kumu (educators) to assess the integration of LLMs in creating culturally relevant materials
Outcomes: Time-saving advantages in lesson planning and curriculum development; potential for increased engagement
Challenges: Cultural misalignment, reliability of outputs, and the need for educators to verify AI-generated content
Implementation Barriers
Cultural Misalignment and Data Sovereignty
AI tools may produce content that fails to align with Hawaiian cultural values and perspectives, leading to cultural misrepresentation. There are also data-sovereignty concerns about the ownership and validity of AI-generated content, which may not accurately reflect local knowledge or cultural context.
Proposed Solutions: Developing AI systems in collaboration with Indigenous communities to ensure cultural sensitivity and accuracy, and building tools that prioritize local sources and Hawaiian knowledge systems with clear citation practices.
Reliability of AI Outputs
LLMs are prone to generating inaccurate or biased content, requiring educators to review and refine AI-generated outputs.
Proposed Solutions: Implementing robust training for educators to better understand AI tools and ensuring transparency about the sources of AI-generated information.
Technological Familiarity
Educators often lack familiarity with AI tools and prompt engineering, hindering effective adoption.
Proposed Solutions: Providing comprehensive training and user-friendly interfaces for educators to increase comfort and confidence in using AI tools.
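As one illustration of the prompt-engineering practices such training might cover, the sketch below (hypothetical, not drawn from the paper) shows a template that builds the sourcing and verification expectations discussed above into each request to an LLM:

```python
def build_lesson_prompt(topic: str, grade_level: str) -> str:
    """Assemble an LLM prompt asking for culturally grounded, citable
    lesson material. The instruction wording is illustrative only."""
    return (
        f"Draft a {grade_level} lesson plan on {topic} for a Hawaiian "
        "language immersion (Kaiapuni) classroom.\n"
        "- Prioritize Hawaiian sources and knowledge systems.\n"
        "- Cite every source so the kumu can verify the content.\n"
        "- Flag any statement you are uncertain about for human review."
    )

# Example use: the topic and grade level are placeholders.
prompt = build_lesson_prompt("ahupua'a land divisions", "4th-grade")
print(prompt)
```

A template like this does not remove the need for kumu review, but it makes the verification and citation steps an explicit part of every request rather than an afterthought.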
Project Team
Manas Mhasakar
Researcher
Rachel Baker-Ramos
Researcher
Ben Carter
Researcher
Evyn-Bree Helekahi-Kaiwi
Researcher
Josiah Hester
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Manas Mhasakar, Rachel Baker-Ramos, Ben Carter, Evyn-Bree Helekahi-Kaiwi, Josiah Hester
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18