
Analysis of Student-LLM Interaction in a Software Engineering Project

Project Overview

This document examines how undergraduate students engage with Large Language Models (LLMs) such as ChatGPT and GitHub Copilot in a project-based software engineering course, where the tools support code generation, summarization, and debugging. The findings show that students rely heavily on these AI tools at the start of the course but that this reliance declines over time, suggesting improved prompting skills and a clearer understanding of how to integrate AI into their coding practice. This shift points to a positive educational outcome: students become more adept at using AI tools in ways that complement their development as programmers. Overall, generative AI in education aids immediate task completion while also supporting long-term learning and skill acquisition.

Key Applications

Integration of ChatGPT and GitHub Copilot in software engineering projects

Context: Undergraduate software engineering course involving a 13-week project to develop a Static Program Analyzer (SPA)

Implementation: Students completed project tasks using AI tools while data on their interactions was collected and shared back with them; code generation and integration were analyzed across multiple milestones.

Outcomes: Improved code quality through conversational interactions, enhanced student engagement with LLMs, and better prompt engineering leading to more effective code generation.

Challenges: Heavy early reliance on LLMs risks long-term dependency; some AI-generated code was more complex than necessary and required refinement.
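The Static Program Analyzer at the centre of the 13-week project parses source code and answers queries about it. As an illustration of the kind of analysis such a tool performs (not the course's actual specification, which is not given here), a minimal sketch using Python's standard `ast` module that maps each function to the calls it makes:

```python
import ast

def collect_calls(source: str) -> dict[str, list[str]]:
    """Map each function definition to the names of functions it calls."""
    tree = ast.parse(source)
    calls: dict[str, list[str]] = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Record every simple-name call (e.g. f(x), not obj.m(x))
            # appearing anywhere inside this function's body.
            calls[node.name] = [
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            ]
    return calls

program = """
def helper(x):
    return x + 1

def main():
    print(helper(41))
"""
print(collect_calls(program))  # {'helper': [], 'main': ['print', 'helper']}
```

A real analyzer would build on this with relations such as uses, modifies, and transitive calls, but the parse-then-query structure is the same.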

Implementation Barriers

Dependency

Students may become overly reliant on AI tools for coding tasks, potentially undermining their learning and problem-solving skills.

Proposed Solutions: Educators should teach effective prompting techniques and encourage critical assessment of AI-generated code.

Complexity of Outputs

AI-generated code, particularly from Copilot, can be overly complex and may require significant human intervention to refine.

Proposed Solutions: Train students to use LLMs effectively and promote iterative code refinement through conversational interaction.
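The iterative refinement the solution describes can be pictured as a loop: ask the model for code, check the result, and feed any problem back as context for the next request. A minimal sketch under stated assumptions: `ask_llm`, `stub_llm`, and `stub_check` are hypothetical stand-ins invented here for illustration, not part of any real API; in practice `ask_llm` would call an actual model service and `check` would run the project's tests.

```python
from typing import Callable, Optional

def refine_code(ask_llm: Callable[[str], str],
                check: Callable[[str], Optional[str]],
                task: str, max_rounds: int = 3) -> str:
    """Conversational refinement: feed each detected issue back to the
    model as extra context until the code passes or rounds run out."""
    code = ask_llm(task)
    for _ in range(max_rounds):
        problem = check(code)  # None means the code passed the check
        if problem is None:
            return code
        prompt = (f"{task}\nPrevious attempt:\n{code}\n"
                  f"Issue found: {problem}\nPlease fix it.")
        code = ask_llm(prompt)
    return code

# Stub model (illustrative only): first answer has a bug,
# the follow-up that includes the issue report fixes it.
def stub_llm(prompt: str) -> str:
    if "Issue found" in prompt:
        return "def add(a, b): return a + b"
    return "def add(a, b): return a - b"

def stub_check(code: str) -> Optional[str]:
    ns: dict = {}
    exec(code, ns)
    return None if ns["add"](2, 3) == 5 else "add(2, 3) should return 5"

print(refine_code(stub_llm, stub_check, "Write add(a, b)."))
# def add(a, b): return a + b
```

The key design point is that the failing attempt and the concrete issue travel back into the prompt, which is what distinguishes conversational refinement from repeatedly asking the same one-shot question.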

Project Team

Agrawal Naman (Researcher)

Ridwan Shariffdeen (Researcher)

Guanlin Wang (Researcher)

Sanka Rasnayaka (Researcher)

Ganesh Neelakanta Iyer (Researcher)

Contact Information

For information about the paper, please contact the authors.

Authors: Agrawal Naman, Ridwan Shariffdeen, Guanlin Wang, Sanka Rasnayaka, Ganesh Neelakanta Iyer

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
