
Content Knowledge Identification with Multi-Agent Large Language Models (LLMs)

Project Overview

The document explores the use of generative AI in education through LLMAgent-CK, a Multi-Agent Large Language Model (LLM) framework designed to assess teachers' mathematical content knowledge (CK) within professional development (PD) programs. By automating the evaluation of user responses, the system supports asynchronous PD and removes the reliance on extensive human annotation. The collaboration of multiple LLM agents improves the accuracy of CK identification and supplies explanatory feedback alongside each result, yielding high precision in the assessment process. Overall, the findings suggest that generative AI can streamline teacher training and development, supporting improved educational outcomes and professional growth in mathematics education.

Key Applications

LLMAgent-CK

Context: Asynchronous professional development for mathematics teachers, particularly those in rural areas.

Implementation: Implemented as a framework utilizing multiple LLM agents to assess teacher responses to mathematical content knowledge questions.

Outcomes: Achieved precision of up to 95.83% in identifying learning goals, demonstrating human-like correction capabilities.

Challenges: Reliance on diverse user responses, scarcity of high-quality annotated data, and low interpretability of predictions.
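The multi-agent idea described above can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the authors' implementation: each "agent" (here a toy rule-based stand-in for an LLM call) maps a teacher's response to the learning goals it covers, and a coordinator keeps only goals on which enough agents agree.

```python
# Hypothetical sketch of a multi-agent CK-identification pipeline. Agent roles,
# function names, and the example learning goals are illustrative assumptions;
# in the described framework each agent would be an LLM call, not a keyword rule.
from collections import Counter

LEARNING_GOALS = [
    "fractions represent parts of a whole",
    "equivalent fractions name the same quantity",
]

def keyword_agent(response: str) -> set[str]:
    """Toy agent: flags a goal when a characteristic keyword appears."""
    hits = set()
    if "part" in response.lower():
        hits.add(LEARNING_GOALS[0])
    if "equivalent" in response.lower():
        hits.add(LEARNING_GOALS[1])
    return hits

def phrase_agent(response: str) -> set[str]:
    """Toy agent: uses different surface cues for the same goals."""
    hits = set()
    if "whole" in response.lower():
        hits.add(LEARNING_GOALS[0])
    if "same quantity" in response.lower() or "equal" in response.lower():
        hits.add(LEARNING_GOALS[1])
    return hits

def identify_ck(response: str, agents, threshold: int = 2) -> list[str]:
    """Coordinator: keep a goal only if at least `threshold` agents vote for it."""
    votes = Counter()
    for agent in agents:
        for goal in agent(response):
            votes[goal] += 1
    return [g for g in LEARNING_GOALS if votes[g] >= threshold]

response = "A fraction shows how many parts of a whole we have."
print(identify_ck(response, [keyword_agent, phrase_agent]))
# → ['fractions represent parts of a whole']
```

The majority-vote coordinator is one plausible way multiple agents could "collaborate"; the paper's actual aggregation strategy may differ.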

Implementation Barriers

Technical Barrier

Current automatic CK identification methods face challenges such as the diversity of user responses and the need for high-quality, annotated data.

Proposed Solutions: The LLMAgent-CK framework addresses these by using LLMs capable of generalization without requiring labeled data.
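Generalization without labeled data typically means zero-shot prompting: the LLM judges a response against a goal description directly, with no annotated training examples. The template below is an assumption for illustration, not the authors' actual prompt.

```python
# Illustrative zero-shot prompt for CK identification. The wording is a
# hypothetical example; no labeled data is required because the model compares
# the response to the goal description directly.
PROMPT_TEMPLATE = """You are evaluating a mathematics teacher's response.
Learning goal: {goal}
Teacher response: {response}
Does the response demonstrate understanding of the learning goal?
Answer YES or NO, then give a one-sentence reason."""

def build_prompt(goal: str, response: str) -> str:
    """Fill the template for a single (goal, response) pair."""
    return PROMPT_TEMPLATE.format(goal=goal, response=response)

prompt = build_prompt(
    "equivalent fractions name the same quantity",
    "1/2 and 2/4 are equal because they cover the same amount.",
)
print(prompt)
```

The filled prompt would then be sent to an LLM (the page lists gpt-4o-mini-2024-07-18 as the model used for analysis); the call itself is omitted here.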

Interpretability Barrier

Deep learning models often suffer from poor interpretability, which limits their usage in educational scenarios.

Proposed Solutions: LLMAgent-CK provides generated reasons alongside identified results to enhance understanding and confidence in the outputs.
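Pairing each identified result with a generated reason can be represented as a small structured record. This is a sketch under assumed field names; the rule-based "explainer" is a stand-in for the LLM-generated rationale the framework is described as producing.

```python
# Hypothetical structured output for interpretable CK identification.
# Field names and the canned reason are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CKFinding:
    goal: str      # the learning goal detected in the response
    evidence: str  # the response text supporting the decision
    reason: str    # generated explanation, shown to the user for confidence

def explain_finding(goal: str, response: str) -> CKFinding:
    """Attach a human-readable rationale to an identified goal."""
    reason = (
        f"The response addresses '{goal}' because it restates "
        "the goal's key idea in the teacher's own words."
    )
    return CKFinding(goal=goal, evidence=response, reason=reason)

finding = explain_finding(
    "fractions represent parts of a whole",
    "A fraction shows how many parts of a whole we have.",
)
print(finding.reason)
```

Surfacing the reason next to the prediction, rather than a bare label, is what distinguishes this approach from the opaque deep-learning classifiers criticized above.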

Project Team

Kaiqi Yang

Researcher

Yucheng Chu

Researcher

Taylor Darwin

Researcher

Ahreum Han

Researcher

Hang Li

Researcher

Hongzhi Wen

Researcher

Yasemin Copur-Gencturk

Researcher

Jiliang Tang

Researcher

Hui Liu

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Kaiqi Yang, Yucheng Chu, Taylor Darwin, Ahreum Han, Hang Li, Hongzhi Wen, Yasemin Copur-Gencturk, Jiliang Tang, Hui Liu

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
