
Generative AI Assistants in Software Development Education: A vision for integrating Generative AI into educational practice, not instinctively defending against it

Project Overview

This document examines the evolving role of generative AI (GAI), specifically tools like GitHub Copilot and ChatGPT, within software development education. It investigates how GAI can be effectively and responsibly integrated into teaching practices. Research, including interviews with industry professionals, reveals current usage patterns and challenges associated with GAI. The document offers pedagogical recommendations such as scaffolding techniques, revisions to assessment methods, and a phased transition for students to leverage GAI tools. Furthermore, it highlights potential obstacles, including copyright issues, concerns about code quality and sustainability, and the potential for bias in AI-generated content. The overall aim is to provide guidance for educators to navigate the opportunities and challenges presented by GAI in shaping the future of software development education.

Key Applications

AI-Powered Code Assistance

Context: Introductory programming assignments (CS1) and software development education, including brainstorming, problem-solving, and code comprehension.

Implementation: Students use AI tools (e.g., GitHub Copilot, ChatGPT) to assist with coding tasks, generate code, explain code, answer questions about code, and aid in system design and test case creation.

Outcomes: GAI tools can successfully complete introductory programming assignments and improve productivity and capacity. They can support system design and the writing of test cases and automation tests, and can help students understand code.

Challenges: Potential for cheating, over-reliance, negative impact on learners’ critical evaluation and problem-solving skills, code quality issues, and the risk of incorrect answers or suboptimal solutions, particularly for complex problems.
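The "suboptimal solutions" risk noted above can be concrete even on simple tasks. As a hypothetical illustration (not taken from the paper), the sketch below contrasts a plausible AI-generated Fibonacci function that is correct but exponential-time with the linear-time version a critical reviewer would prefer; function names and the scenario are invented for this example.

```python
# Hypothetical illustration: an AI assistant may produce code that is
# correct yet far less efficient than a reviewed alternative.

def fib_naive(n: int) -> int:
    """Plausible generated solution: correct, but O(2^n) time."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_iterative(n: int) -> int:
    """Reviewed alternative: same results in O(n) time, O(1) space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Both versions agree on small inputs; only the iterative one scales.
assert all(fib_naive(i) == fib_iterative(i) for i in range(15))
```

A learner who accepts the first version without evaluating it misses exactly the critical-evaluation skill the challenges above describe.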

Implementation Barriers

Copyright

GitHub Copilot uses OpenAI's Codex, which was trained on open-source software. This raises concerns that generated code may ignore the licenses of the projects it was trained on.

Proposed Solutions: Source training data only from permissively licensed code.

Code Quality & Convincing but Incorrect Code

Code generated by GAI may contain bugs inherited from buggy training sources. Generated code can also be functionally correct and compile, yet still fail to produce the desired or requested outcome.

Proposed Solutions: Built-in quality assurance, human review of generated code, and fine-tuning of models on exemplary code and coding patterns by educational institutions.
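Human review can be made routine with small verification tests. The sketch below is a hypothetical example (the function and its flaw are invented, not from the paper) of "convincing but incorrect" output: a median function that compiles and works on sorted input but silently assumes the input is already sorted, and a reviewed fix caught by a one-line test.

```python
# Hypothetical "convincing but incorrect" generated code: looks
# reasonable and passes a casual check, but assumes sorted input.

def median_generated(values):
    n = len(values)
    mid = n // 2
    if n % 2:
        return values[mid]
    return (values[mid - 1] + values[mid]) / 2

def median_reviewed(values):
    """Human-reviewed fix: sort a copy before indexing."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# A casual check on sorted input passes for both versions...
assert median_generated([1, 2, 3]) == 2
assert median_reviewed([1, 2, 3]) == 2
# ...but a review test with unsorted input exposes the flaw.
assert median_reviewed([3, 1, 2]) == 2
assert median_generated([3, 1, 2]) != 2
```

Asking students to write such review tests before accepting AI output operationalizes both the "built-in QA" and "human review" recommendations.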

Sustainability

Training and using GAI tools is computationally expensive, with high energy requirements; AI generally "may have profound implications for the carbon footprint of the ICT sector".

Proposed Solutions: Not mentioned.

Bias

AI systems can exhibit or perpetuate numerous biases. Biases need to be better understood, mitigated against (in the training data, system designs, and more), and better accounted for.

Proposed Solutions: Better understanding of biases, mitigation in training data and system designs.

Project Team

Christopher Bull

Researcher

Ahmed Kharrufa

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Christopher Bull, Ahmed Kharrufa

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang
