
Augmenting deep neural networks with symbolic knowledge: Towards trustworthy and interpretable AI for education

Project Overview

This project explores the integration of symbolic knowledge into artificial neural networks (ANNs) to improve their effectiveness in education, addressing challenges such as incorporating educational knowledge, managing bias, and making systems interpretable. It introduces a neural-symbolic AI (NSAI) approach that injects educational insights into deep neural networks, thereby improving the modeling of learners' computational thinking. The NSAI method outperforms traditional ANN baselines, particularly in generalizability and interpretability. The findings indicate that such an approach not only broadens the educational applicability of AI but also fosters the development of trustworthy AI applications in educational settings, ultimately aiming to create more effective and reliable learning tools.

Key Applications

Neural-Symbolic AI (NSAI) approach

Context: The educational game AutoThinking, which aims to improve learners' computational thinking skills.

Implementation: Incorporates educational knowledge into deep neural networks during training to model learners' computational thinking.

Outcomes: NSAI shows better generalizability and interpretability compared to traditional deep neural networks.

Challenges: Neural-symbolic AI has so far seen limited application in education, and traditional ANNs tend to rely on spurious correlations.
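The summary does not describe the paper's exact injection mechanism. One common way to incorporate symbolic knowledge during training is to add a penalty term that punishes predictions violating a stated rule. Below is a minimal sketch of that idea using a plain logistic-regression learner in NumPy; the features, the rule ("if feature 0 is high, the learner should be classified as high in computational thinking"), and all thresholds are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy learner data: 4 behavioral features, binary label
# (1 = high computational thinking). Purely synthetic.
X = rng.random((200, 4))
y = (X[:, 0] + 0.3 * X[:, 1] > 0.8).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical symbolic rule (an assumption for illustration):
# "if feature 0 > 0.8, the learner should be predicted as high (y = 1)".
fires = (X[:, 0] > 0.8).astype(float)

w = np.zeros(4)
b = 0.0
lam = 0.5   # weight of the knowledge-based penalty term
lr = 0.5

for _ in range(500):
    p = sigmoid(X @ w + b)
    # Gradient of the binary cross-entropy data term.
    g_data = p - y
    # Gradient of the rule penalty  lam * mean(fires * (1 - p)),
    # which pushes p towards 1 wherever the rule applies.
    g_rule = -lam * fires * p * (1 - p)
    grad = g_data + g_rule
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

# Where the rule fires, predictions should now lean towards class 1.
p = sigmoid(X @ w + b)
rule_accuracy = float((p[fires == 1] > 0.5).mean())
```

The key design point is that the symbolic knowledge enters the loss, not the architecture: the data term and the rule term are optimized jointly, so the network is steered away from spurious correlations that contradict the stated educational knowledge.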

Implementation Barriers

Technical

Difficulty in incorporating symbolic educational knowledge into ANNs.

Proposed Solutions: Utilize neural-symbolic AI frameworks to inject and extract educational knowledge.

Bias and Fairness

Deep neural networks may learn spurious correlations, leading to biases in predictions.

Proposed Solutions: Implement mechanisms to control biases, such as integrating explicit educational knowledge.

Interpretability

Lack of interpretability in traditional ANNs hampers trust and understanding among educators and students.

Proposed Solutions: Extract rules from trained networks to enhance interpretability and reasoning.
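The summary does not specify the paper's rule-extraction procedure. A common approach is to query the trained network as a black box and fit a simple surrogate (here, a single-feature threshold rule) to its predictions, then report the surrogate's fidelity. The sketch below assumes a stand-in predictor; the feature semantics and thresholds are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained network: any black-box predictor mapping learner
# features to class labels (a fixed function here, purely illustrative).
def black_box(X):
    return (X[:, 0] + 0.3 * X[:, 1] > 0.8).astype(int)

X = rng.random((500, 4))
labels = black_box(X)   # query the model, not the ground truth

def best_stump(X, labels):
    """Find the single-feature threshold rule that best mimics the model."""
    best = (0.0, 0, 0.5)   # (fidelity, feature index, threshold)
    for f in range(X.shape[1]):
        for t in np.linspace(0.05, 0.95, 19):
            fidelity = ((X[:, f] > t).astype(int) == labels).mean()
            if fidelity > best[0]:
                best = (fidelity, f, t)
    return best

fidelity, feature, threshold = best_stump(X, labels)
print(f"IF feature_{feature} > {threshold:.2f} THEN high "
      f"(fidelity {fidelity:.2f})")
```

The extracted rule is a human-readable approximation of the network's behavior; its fidelity score tells educators how faithfully the rule summarizes the model, which is the basis for the trust and reasoning benefits claimed above.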

Project Team

Danial Hooshyar

Researcher

Roger Azevedo

Researcher

Yeongwook Yang

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Danial Hooshyar, Roger Azevedo, Yeongwook Yang

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
