
Effects of Human vs. Automatic Feedback on Students' Understanding of AI Concepts and Programming Style

Project Overview

This project examines the role of generative AI in education through a study comparing an automatic grading tool with human feedback in a 300-level undergraduate AI programming course. Students who received human feedback showed stronger understanding of algorithmic concepts and programming style, and earned better exam scores and overall course grades, with the largest gains among students in the middle aptitude quartiles. Automatic feedback streamlined grading but did not deliver the same depth of understanding that human evaluators provided. The results highlight a trade-off between efficiency and educational effectiveness: generative AI can assist with grading, but nuanced feedback from human instructors remains essential for fostering deeper learning. The findings argue for thoughtful integration of AI tools into educational contexts to maximize student achievement and understanding.

Key Applications

In-house grading tool for programming assignments

Context: 300-level AI course with programming assignments for undergraduate students

Implementation: Students were divided into two groups: one received computer-generated feedback while the other received human feedback. Each group's performance was analyzed through various assessments.

Outcomes: Students receiving human feedback performed better on understanding algorithmic concepts and overall course grades, especially in the middle quartiles.
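A per-quartile comparison like the one described above can be sketched as a small analysis script. The data, group labels, and grade values below are purely illustrative assumptions, not results from the study.

```python
# Hypothetical sketch of a per-quartile comparison between feedback groups.
# All records are made-up values for illustration, not data from the study.
from statistics import mean

def quartile_gap(records):
    """Return mean(human) - mean(auto) final grade for each aptitude quartile."""
    by_quartile = {}
    for quartile, group, grade in records:
        by_quartile.setdefault(quartile, {"human": [], "auto": []})[group].append(grade)
    return {
        q: mean(g["human"]) - mean(g["auto"])
        for q, g in sorted(by_quartile.items())
        if g["human"] and g["auto"]  # skip quartiles missing either group
    }

records = [
    # (aptitude quartile, feedback group, final grade) -- illustrative only
    (2, "human", 82), (2, "auto", 75), (2, "human", 85), (2, "auto", 78),
    (3, "human", 88), (3, "auto", 84),
]
gaps = quartile_gap(records)  # e.g. {2: 7.0, 3: 4.0} for the sample data
```

A positive gap in a quartile would correspond to the human-feedback group outperforming the automatic-feedback group there.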

Challenges: Labor-intensive grading process, potential for biases in grading, and black-box usage of the grading tool by students.

Implementation Barriers

Operational

The labor-intensive nature of providing detailed human feedback on assignments delayed grading and affected timely feedback delivery.

Proposed Solutions: Offering human feedback on request after students receive initial automated grading results.

Technical

Students used the automatic grading tool as a black-box debugger rather than as an aid to genuine learning.

Proposed Solutions: Replacing score estimations with compilation checks and encouraging more submissions for feedback.
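The proposed change above, reporting only whether a submission compiles rather than an estimated score, could look roughly like the sketch below. The function name, feedback messages, and use of Python's `py_compile` are illustrative assumptions, not the course's actual tool.

```python
# Illustrative sketch: report only a pass/fail compilation check instead of a
# score estimate, to discourage black-box debugging against the grader.
# Function and message wording are hypothetical, not the course's actual tool.
import os
import py_compile
import tempfile

def compilation_feedback(source_code: str) -> str:
    """Return compilation feedback for a submission, with no score estimate."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source_code)
        path = f.name
    try:
        py_compile.compile(path, doraise=True)
        return "Submission compiles. Request human feedback for detailed comments."
    except py_compile.PyCompileError as err:
        return f"Submission does not compile:\n{err.msg}"
    finally:
        os.unlink(path)  # clean up the temporary source file
```

Because the tool no longer reveals an estimated score, repeated submissions only tell students whether their code builds, nudging them toward requesting substantive human feedback instead.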

Project Team

Abe Leite

Researcher

Saúl A. Blanco

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Abe Leite, Saúl A. Blanco

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
