WIP: Large Language Model-Enhanced Smart Tutor for Undergraduate Circuit Analysis
Project Overview
This project explores the integration of generative AI in education through a smart tutor designed for undergraduate circuit analysis courses. Built on large language models (LLMs), the tutor provides personalized homework assessment, feedback, and context-aware support. Student feedback indicates high satisfaction with the tutor's capabilities, suggesting it helps comprehension of and engagement with difficult course material. The findings point to promising uses of generative AI in educational settings, and the approach could extend to other engineering disciplines in higher education.
Key Applications
AI-enabled smart tutor for undergraduate circuit analysis
Context: Undergraduate circuit analysis course at a public research university in the Southeastern USA
Implementation: Deployed on Microsoft Azure, the tutor provides real-time homework assistance and feedback through an LLM, grounding its responses in a context-specific course database to improve accuracy (a minimal sketch of this kind of grounding follows the list below).
Outcomes: 90.9% of students reported satisfaction with the tutor, and instructors gained better insight into student difficulties and common questions.
Challenges: LLMs struggle with diagram recognition and mathematical computation, and they can hallucinate in their responses.
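The summary does not show the actual Azure deployment or database structure, but the retrieval-grounded setup it describes might look roughly like the sketch below. The in-memory passage list, the keyword-overlap retrieval, and the prompt wording are illustrative assumptions; only the model version is taken from the project details listed at the end of this page.

```python
# Minimal sketch of retrieval-grounded tutoring, assuming an OpenAI-style chat
# API and a small in-memory stand-in for the context-specific course database.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Stand-in for the course database (illustrative content only).
COURSE_PASSAGES = [
    "Kirchhoff's current law: the currents entering a node sum to zero.",
    "Kirchhoff's voltage law: voltages around any closed loop sum to zero.",
    "Thevenin equivalent: any linear two-terminal network reduces to a "
    "voltage source in series with a resistance.",
]

def retrieve_context(question: str, top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval over the course passages."""
    q_words = set(question.lower().split())
    ranked = sorted(COURSE_PASSAGES,
                    key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:top_k]

def tutor_reply(question: str) -> str:
    context = "\n---\n".join(retrieve_context(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini-2024-07-18",  # model version listed for this project
        messages=[
            {"role": "system",
             "content": "You are a tutor for undergraduate circuit analysis. "
                        "Answer only from the provided course context; if it "
                        "is insufficient, say so rather than guessing."},
            {"role": "user",
             "content": f"Course context:\n{context}\n\n"
                        f"Student question: {question}"},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("How do I apply Kirchhoff's current law at a node?"))
```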
Implementation Barriers
Technical Limitations
LLMs have difficulty recognizing and interpreting scientific and engineering diagrams, and they have limited mathematical capabilities.
Proposed Solutions: Develop improved diagram recognition methods and enhance LLM capabilities for better mathematical reasoning.
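One common way to work around limited LLM arithmetic, offered here only as an illustrative sketch and not as the authors' method, is to have the tutor translate a question into equations and delegate the algebra to a symbolic solver. The circuit values below are made up for the example.

```python
# Sketch: offload circuit arithmetic to SymPy instead of the LLM.
import sympy as sp

V = sp.symbols("V")            # unknown node voltage
Vs, R1, R2 = 10, 1000, 2000    # example source (V) and resistors (ohms)

# KCL at the node between R1 and R2: current in from the source
# equals current out through R2.
node_eq = sp.Eq((Vs - V) / R1, V / R2)

solution = sp.solve(node_eq, V)[0]
print(f"Node voltage: {solution} V")   # -> 20/3 V, about 6.67 V
```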
Reliability Issues
LLMs may provide incorrect responses, which can mislead students and negatively impact learning.
Proposed Solutions: Implement strategies to reduce hallucinations and enhance response reliability.
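The paper's specific reliability strategies are not detailed in this summary; one possible guard, sketched here as an assumption, is to cross-check numeric claims in the LLM's reply against an independently computed reference before showing the feedback to a student.

```python
# Sketch: flag replies whose numbers disagree with a solver-computed reference.
import re

def extract_numbers(text: str) -> list[float]:
    """Pull numeric values out of an LLM reply."""
    return [float(m) for m in re.findall(r"-?\d+(?:\.\d+)?", text)]

def numeric_claims_consistent(reply: str, reference: float,
                              rel_tol: float = 0.02) -> bool:
    """True if at least one number in the reply matches the reference
    within a relative tolerance."""
    return any(abs(x - reference) <= rel_tol * abs(reference)
               for x in extract_numbers(reply))

reply = "The output of the divider is about 6.67 V."
reference = 10 * 2000 / (1000 + 2000)   # independently computed ground truth
if numeric_claims_consistent(reply, reference):
    print("Numeric answer agrees with the reference solution.")
else:
    print("Potential hallucination: escalate to instructor review.")
```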
Labor Intensity
Preparing a structured database for context-specific support can be labor-intensive.
Proposed Solutions: Explore efficient database management methods to streamline data preparation.
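As one possible way to streamline data preparation (an assumption, not the authors' pipeline), course materials could be chunked automatically and loaded into a lightweight store for later retrieval. The folder name, file format, and schema below are illustrative.

```python
# Sketch: chunk text files of course materials into a SQLite table of passages.
import sqlite3
from pathlib import Path

def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]

def build_database(source_dir: str, db_path: str = "course_context.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS passages "
        "(id INTEGER PRIMARY KEY, source TEXT, chunk TEXT)"
    )
    for path in Path(source_dir).glob("*.txt"):   # e.g. lecture notes, solutions
        for chunk in chunk_text(path.read_text(encoding="utf-8")):
            conn.execute(
                "INSERT INTO passages (source, chunk) VALUES (?, ?)",
                (path.name, chunk),
            )
    conn.commit()
    conn.close()

build_database("course_materials")  # hypothetical folder of .txt course files
```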
Project Team
Liangliang Chen
Researcher
Huiru Xie
Researcher
Jacqueline Rohde
Researcher
Ying Zhang
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Liangliang Chen, Huiru Xie, Jacqueline Rohde, Ying Zhang
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI