
Computational Thinking Reasoning in Large Language Models

Project Overview

This project explores the application of generative AI in education, focusing on the integration of computational thinking into large language models (LLMs) via the Computational Thinking Model (CTM). The framework embeds structured methodologies such as decomposition and abstraction into the models' reasoning process, strengthening their capacity to tackle complex problems. By interleaving interactive code execution with natural language reasoning, the CTM outperforms traditional models on coding and mathematical tasks. The findings indicate that this approach improves both the accuracy and efficiency of problem solving and enriches the learning experience for students, pointing to more effective AI-based learning tools and strategies.
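
To make the interleaving concrete, the sketch below runs a minimal reason-then-execute loop: a stubbed model emits a reasoning step plus a code snippet, the snippet is executed, and the observation is appended to the context. The function names, the <code> delimiters, and the loop shape are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of interleaved reasoning and code execution.
# query_model is a stub standing in for an LLM call; a real system
# would query the trained CTM and sandbox the execution step.
import contextlib
import io
import re

def query_model(context: str) -> str:
    """Stubbed LLM call: returns one reasoning step plus executable code."""
    return (
        "Step: compute the sum of squares of the first 10 integers.\n"
        "<code>print(sum(i * i for i in range(1, 11)))</code>"
    )

def execute(code: str) -> str:
    """Run a code snippet and capture its stdout as an observation."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})  # a real system needs a sandbox, not bare exec
    return buffer.getvalue().strip()

def solve(problem: str, max_turns: int = 4) -> str:
    context = f"Problem: {problem}\n"
    for _ in range(max_turns):
        response = query_model(context)
        context += response + "\n"
        match = re.search(r"<code>(.*?)</code>", response, re.DOTALL)
        if match:
            # Feed the execution result back so the next reasoning
            # turn can condition on a concrete observation.
            context += f"Observation: {execute(match.group(1))}\n"
            break
    return context

print(solve("Sum of squares from 1 to 10"))  # Observation: 385
```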

Key Applications

Computational Thinking Model (CTM)

Context: Educational settings where students learn computational thinking and problem-solving techniques.

Implementation: Integrated computational thinking principles into large language models through a two-phase training strategy combining supervised fine-tuning with reinforcement learning (sketched below).

Outcomes: CTM outperformed conventional reasoning models and tool-augmented baselines in accuracy, interpretability, and generalizability across code generation and mathematical benchmarks.

Challenges: Current LLMs struggle with self-correction during reasoning, leading to errors in complex problem-solving.
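
To make the two-phase training strategy above concrete, the toy below first fits a small softmax policy to demonstrations (the supervised phase), then refines it with a REINFORCE-style policy gradient against a scalar reward, such as whether generated code passes its tests. This is a minimal numpy analogue of the pipeline's structure only; the model, data, reward, and hyperparameters are all assumptions, not the paper's.

```python
# Toy two-phase training: supervised fine-tuning, then reinforcement
# learning. A linear softmax policy stands in for the LLM.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 4, 3
W = rng.normal(scale=0.1, size=(n_features, n_actions))  # policy weights

def policy(x):
    logits = x @ W
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Phase 1: supervised fine-tuning on (state, expert action) demonstrations.
demos = [(rng.normal(size=n_features), int(rng.integers(n_actions)))
         for _ in range(200)]
for x, a in demos:
    p = policy(x)
    # Cross-entropy gradient for a softmax policy: x outer (p - onehot).
    W -= 0.1 * np.outer(x, p - np.eye(n_actions)[a])

# Phase 2: REINFORCE against a scalar reward (e.g. "tests passed").
def reward(x, a):
    return 1.0 if a == int(abs(x[0] * 10)) % n_actions else 0.0  # toy signal

for _ in range(500):
    x = rng.normal(size=n_features)
    p = policy(x)
    a = int(rng.choice(n_actions, p=p))
    # Policy-gradient ascent: reward * grad log pi(a | x).
    W += 0.05 * reward(x, a) * np.outer(x, np.eye(n_actions)[a] - p)

# Quick check: average reward of the trained policy on fresh states.
hits = sum(reward(x, int(np.argmax(policy(x))))
           for x in rng.normal(size=(200, n_features)))
print(f"average reward after both phases: {hits / 200:.2f}")
```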

Implementation Barriers

Technical Barrier

Current LLMs lack robust mechanisms for self-correcting and verifying their reasoning outputs, so errors propagate unchecked through the reasoning process.

Proposed Solutions: Implementing feedback loops that enable dynamic self-editing and validation of intermediate steps, as demonstrated in the CTM.
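
A minimal sketch of such a feedback loop follows, assuming that "validation" means executing the intermediate step and catching failures. The propose_step stub, the deliberately buggy first attempt, and the retry budget are illustrative, not the CTM's actual mechanism.

```python
# Feedback loop: validate each intermediate step; on failure, feed the
# error back so the next attempt can self-edit instead of propagating it.
import traceback
from typing import Optional

def propose_step(task: str, feedback: Optional[str]) -> str:
    """Stubbed LLM call; a real loop would resend `feedback` as context."""
    if feedback:  # revised attempt after a failed validation
        return "result = sum(range(1, 101))\nassert result == 5050"
    return "result = sum(range(100))\nassert result == 5050"  # off-by-one

def validate(step_code: str) -> Optional[str]:
    """Execute the step; return an error message on failure, None if valid."""
    try:
        exec(step_code, {})
        return None
    except Exception:
        return traceback.format_exc(limit=1)

def solve_with_feedback(task: str, max_attempts: int = 3) -> bool:
    feedback = None
    for attempt in range(1, max_attempts + 1):
        feedback = validate(propose_step(task, feedback))
        if feedback is None:
            print(f"step validated on attempt {attempt}")
            return True
        # The error goes back into the prompt rather than into the answer.
    return False

solve_with_feedback("sum the integers from 1 to 100")  # validates on attempt 2
```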

Project Team

Kechi Zhang

Researcher

Ge Li

Researcher

Jia Li

Researcher

Huangzhao Zhang

Researcher

Jingjing Xu

Researcher

Hao Zhu

Researcher

Lecheng Wang

Researcher

Jia Li

Researcher

Yihong Dong

Researcher

Jing Mai

Researcher

Bin Gu

Researcher

Zhi Jin

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Kechi Zhang, Ge Li, Jia Li, Huangzhao Zhang, Jingjing Xu, Hao Zhu, Lecheng Wang, Jia Li, Yihong Dong, Jing Mai, Bin Gu, Zhi Jin

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
