
Simulating LLM-to-LLM Tutoring for Multilingual Math Feedback

Project Overview

This project explores the application of generative AI, specifically large language models (LLMs), to education, with a focus on improving mathematical learning through multilingual feedback. It describes a study that simulated tutor-student interactions using LLMs, finding that learning outcomes improve significantly when students receive hints and feedback in their native language, particularly for low-resource languages. The findings underscore the effectiveness of feedback strategies tailored to individual student needs and highlight the crucial role of model selection in educational performance. Overall, generative AI shows substantial potential for improving student engagement and understanding in diverse linguistic contexts, contributing to more equitable educational opportunities.

Key Applications

Simulating LLM-to-LLM tutoring for multilingual math feedback

Context: Educational context focused on mathematics for multilingual students

Implementation: Simulated interactions in which a stronger LLM generates hints for a weaker LLM that role-plays a student, across a range of languages

Outcomes: Significant learning gains when feedback is aligned with the student’s native language, especially in low-resource languages

Challenges: Variability in performance across different languages and the need for effective hint generation strategies
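The tutor-student setup described above can be sketched as a simple interaction loop. The sketch below is a minimal illustration, not the authors' actual implementation: the `call_tutor` and `call_student` functions are hypothetical stubs standing in for real API calls to a stronger tutor model and a weaker student model, and the stopping condition (correct answer or a turn limit) is an assumption about how such a simulation is typically run.

```python
from dataclasses import dataclass, field

def call_tutor(problem: str, attempt: str, language: str) -> str:
    """Stub: a stronger LLM would generate a hint in the target language here."""
    return f"[{language}] Hint: re-check your last step on '{problem}'."

def call_student(problem: str, hints: list[str]) -> str:
    """Stub: a weaker LLM would attempt the problem given the hints so far."""
    # Toy behavior: the simulated student succeeds once it has two hints.
    return "42" if len(hints) >= 2 else "wrong answer"

@dataclass
class Dialogue:
    """Record of one simulated tutoring session."""
    problem: str
    language: str
    hints: list[str] = field(default_factory=list)
    turns: int = 0
    solved: bool = False

def simulate(problem: str, gold: str, language: str, max_turns: int = 5) -> Dialogue:
    """Run tutor-student turns until the student answers correctly or turns run out."""
    d = Dialogue(problem=problem, language=language)
    attempt = call_student(problem, d.hints)
    while d.turns < max_turns and attempt != gold:
        d.hints.append(call_tutor(problem, attempt, language))
        attempt = call_student(problem, d.hints)
        d.turns += 1
    d.solved = attempt == gold
    return d

result = simulate("What is 6 * 7?", gold="42", language="Swahili")
print(result.solved, result.turns)  # the toy student solves it after two hints
```

In a real experiment the stubs would be replaced by chat-completion calls to two different models, with the `language` parameter controlling the language of the tutor's hints so that feedback alignment with the student's native language can be measured.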

Implementation Barriers

Technical Barrier

The performance of LLMs significantly varies across different languages, particularly low-resource languages.

Proposed Solutions: Improving multilingual training datasets and fine-tuning models specifically for low-resource languages.

Educational Barrier

The complexity of creating effective instructional hints that adapt to diverse student needs and language proficiencies.

Proposed Solutions: Utilizing feedback from educational research and expert reviews to refine hint quality.

Project Team

Junior Cedric Tonga

Researcher

KV Aditya Srivatsa

Researcher

Kaushal Kumar Maurya

Researcher

Fajri Koto

Researcher

Ekaterina Kochmar

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Junior Cedric Tonga, KV Aditya Srivatsa, Kaushal Kumar Maurya, Fajri Koto, Ekaterina Kochmar

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
