
Mathemyths: Leveraging Large Language Models to Teach Mathematical Language through Child-AI Co-Creative Storytelling

Project Overview

This document explores the use of generative AI in education through Mathemyths, an AI-driven storytelling tool that teaches mathematical language to children aged 4-8. Built on large language models (LLMs), Mathemyths engages children in co-creative storytelling with an AI partner, an interaction the study found comparable to storytelling with a human partner. The findings indicate that, with careful prompt engineering, AI can meaningfully enhance children's engagement and learning outcomes. The document also surveys broader applications of generative AI in education, including collaborative storytelling, personalized feedback, and the integration of mathematical concepts into engaging activities. It acknowledges the distinctive challenges of bringing AI into educational settings, such as the need for sound evaluation methods to assess its impact. Overall, it underscores the potential of generative AI to create enriched, interactive learning environments that develop critical skills in young learners.

Key Applications

Mathemyths: An AI storytelling and math learning tool

Context: Designed for early childhood education, targeting children aged 4-8 in preschools and early elementary classrooms, to enhance their understanding of mathematical language and storytelling abilities through interactive engagement.

Implementation: The system operates as a dialogue-based storytelling agent where children co-create narratives with the AI. It incorporates mathematical terminology into the storytelling process and provides scaffolding for responses, engaging children in a creative and educational dialogue.
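In a system of this kind, each AI turn typically weaves one target mathematical term into the story and ends with a question inviting the child to continue. A minimal sketch of such a turn-prompt builder (the term list, template wording, and function name are illustrative assumptions, not the authors' actual prompts):

```python
# Illustrative sketch of a co-creative storytelling turn prompt.
# The term list and template are assumptions for demonstration only.

MATH_TERMS = ["more than", "fewer than", "half", "equal", "altogether"]

def build_turn_prompt(story_so_far: str, turn_index: int) -> str:
    """Compose an LLM prompt that continues the story with one target
    mathematical term and ends by asking the child a question."""
    term = MATH_TERMS[turn_index % len(MATH_TERMS)]
    return (
        "You are co-telling a story with a child aged 4-8.\n"
        f"Story so far: {story_so_far}\n"
        f"Continue with 2-3 short sentences that use the term '{term}' "
        "in a meaningful way, then ask the child one open-ended question "
        "about what happens next."
    )

prompt = build_turn_prompt("A dragon found three shiny eggs.", 0)
```

Rotating through the term list across turns ensures every target term appears in the session without repeating the same one back to back.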

Outcomes: Children showed improved engagement, understanding of mathematical concepts, and language gains comparable to those from interactions with human partners, with notable progress in defining and recalling mathematical terms.

Challenges: Engagement varies with children's age; AI-generated prompts are not uniformly effective; educational content must be incorporated accurately; the system must adapt to diverse learning paces; and imaginative story elements can introduce misunderstandings.

Implementation Barriers

Technical Limitations

Challenges include the unpredictability of LLM outputs, the difficulty of maintaining context over extended interactions, and accurately interpreting children's inputs.

Proposed Solutions: Apply iterative prompt engineering and validation feedback loops to improve output consistency and adherence to the desired narrative structure, and continue training the AI model on diverse datasets.
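One way to realize such a feedback loop is to validate each generation against simple structural constraints and re-prompt on failure. A hedged sketch, in which the constraint checks and the stubbed `generate` callable are assumptions rather than the system's actual pipeline:

```python
from typing import Callable

def generate_with_feedback(generate: Callable[[str], str],
                           prompt: str,
                           required_term: str,
                           max_words: int = 60,
                           max_retries: int = 3) -> str:
    """Call the model, check the output against simple constraints,
    and re-prompt with corrective instructions until it passes."""
    text = generate(prompt)
    for _ in range(max_retries):
        problems = []
        if required_term not in text.lower():
            problems.append(f"use the term '{required_term}'")
        if len(text.split()) > max_words:
            problems.append(f"keep it under {max_words} words")
        if not problems:
            return text
        text = generate(prompt + "\nPlease revise: " + "; ".join(problems))
    return text

# Usage with a stand-in generator (a real system would call an LLM here):
fake = lambda p: "The dragon had two more than before. What happens next?"
story = generate_with_feedback(fake, "Continue the story.", "more than")
```

Capping retries keeps latency bounded; outputs that never pass can be escalated to a fallback continuation.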

Engagement Variation

Differences in engagement levels between younger and older children, with older children showing more responsiveness to human interactions.

Proposed Solutions: Design adaptive algorithms that can assess children's cognitive load and modify question complexity accordingly.
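A simple version of such an adaptive mechanism is a controller that tracks a window of recent response quality and steps question complexity up or down. The levels, window size, and success criterion here are illustrative assumptions:

```python
class ComplexityAdapter:
    """Adjust question difficulty (1 = easiest, 5 = hardest) based on
    whether the child's last few answers succeeded."""

    def __init__(self, window: int = 3):
        self.level = 1
        self.window = window
        self.recent: list[bool] = []

    def record(self, answered_well: bool) -> int:
        """Log one answer and return the (possibly updated) level."""
        self.recent.append(answered_well)
        self.recent = self.recent[-self.window:]
        if len(self.recent) == self.window:
            if all(self.recent):        # consistent success: go harder
                self.level = min(5, self.level + 1)
                self.recent.clear()
            elif not any(self.recent):  # consistent struggle: go easier
                self.level = max(1, self.level - 1)
                self.recent.clear()
        return self.level
```

Judging "answered well" could itself use the LLM (e.g., scoring whether the child used the target term correctly), which this sketch leaves as an external signal.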

Content Hallucinations

LLMs may generate plausible-sounding but nonsensical content, which can confuse children.

Proposed Solutions: Incorporate post-processing techniques that monitor and correct AI-generated content before it reaches the child, ensuring it aligns with educational goals.
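Such a post-processing pass can be sketched as a lightweight checker that flags segments missing the target vocabulary or exceeding a child-friendly sentence length; the thresholds and rules are illustrative assumptions, not the deployed safeguards:

```python
def check_story_segment(text: str, target_term: str,
                        max_sentence_words: int = 15) -> list:
    """Return a list of issues found in an AI-generated story segment;
    an empty list means the segment passes the checks."""
    issues = []
    if target_term not in text.lower():
        issues.append(f"missing target term '{target_term}'")
    # Naive sentence split; a real system would use a proper tokenizer.
    for sentence in text.replace("!", ".").replace("?", ".").split("."):
        if len(sentence.split()) > max_sentence_words:
            issues.append("sentence too long for young readers")
            break
    return issues

# Flagged segments can be regenerated or corrected before display.
```

Pairing this checker with a regeneration loop gives a basic guard against plausible-sounding but off-target content, though it cannot catch factual hallucinations on its own.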

User Acceptance Barrier

Teachers and parents may be hesitant to adopt AI technologies in educational contexts.

Proposed Solutions: Provide training and demonstrations of the tool's effectiveness in improving learning outcomes.

Project Team

Chao Zhang

Researcher

Xuechen Liu

Researcher

Katherine Ziska

Researcher

Soobin Jeon

Researcher

Chi-Lin Yu

Researcher

Ying Xu

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Chao Zhang, Xuechen Liu, Katherine Ziska, Soobin Jeon, Chi-Lin Yu, Ying Xu

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
