With ChatGPT, Do We Have to Rewrite Our Learning Objectives? -- A Case Study in Cybersecurity
Project Overview
The document examines the role of generative AI, particularly ChatGPT, in cybersecurity education and argues that learning objectives should be reevaluated in response to the capabilities of automation tools. It proposes using Bloom's Taxonomy to refine and elevate those objectives, positing that AI can help students develop higher-order thinking skills more effectively and efficiently. Through case studies and examples, it illustrates practical applications of ChatGPT for teaching complex cybersecurity concepts and highlights its potential to transform educational methodologies. Overall, the findings indicate that integrating generative AI not only enriches the learning experience but also prepares students for the evolving demands of the cybersecurity field.
Key Applications
AI-assisted coding and learning tools
Context: Undergraduate education in computer science, specifically in cybersecurity and programming courses at institutions like Miami University. The integration of AI tools enhances learning experiences across various contexts where coding assignments and cybersecurity concepts are taught.
Implementation: Incorporating generative AI tools like ChatGPT and GitHub Copilot into curricula to assist students with coding assignments, projects, and understanding complex cybersecurity concepts. This includes using AI for content generation, coding assistance, and facilitating discussions around programming and security topics.
Outcomes: Improved achievement of higher-order learning outcomes, deeper understanding of complex concepts, greater coding efficiency, and more time to focus on design and problem-solving rather than coding mechanics.
Challenges: Students must learn to prompt AI tools effectively and to verify their outputs. Over-reliance on AI for basic tasks risks hindering the development of fundamental coding skills and leaving students with an inadequate understanding of the generated code.
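The verification challenge above can be made concrete in class: students treat AI-generated code as untrusted and run it against instructor-provided checks before accepting it. A minimal sketch, assuming a cipher exercise; the function and test cases are illustrative, not taken from the paper:

```python
# A Caesar-cipher helper as a student might receive it from ChatGPT.
# Function name and checks are hypothetical examples.
def caesar_encrypt(plaintext: str, shift: int) -> str:
    """Shift alphabetic characters by `shift`, leaving other characters unchanged."""
    result = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

# Instructor-provided checks the student runs before accepting the AI's code.
assert caesar_encrypt("abc", 3) == "def"
assert caesar_encrypt("XYZ", 3) == "ABC"                      # wrap-around case
assert caesar_encrypt("Hello, World!", 0) == "Hello, World!"  # punctuation untouched
assert caesar_encrypt("abc", 29) == caesar_encrypt("abc", 3)  # shifts reduce mod 26
```

Exercises like this shift the learning objective from writing the loop to specifying and validating its behavior, which matches the higher-order outcomes the paper targets.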
Implementation Barriers
Educational
Students may struggle to use generative AI tools effectively without proper training, and they may lack the skills to evaluate AI outputs.
Proposed Solutions: Incorporate training sessions on how to use AI tools, focusing on prompt engineering and evaluating AI outputs.
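A training session on prompt engineering might have students compare a vague request with a structured one that states the role, task, constraints, and expected output format. A small sketch of such an exercise; the helper function and prompt wording are hypothetical, not from the paper:

```python
# Illustrative prompt-engineering exercise: build a structured prompt and
# contrast it with a vague one. All names here are hypothetical examples.
def build_prompt(task: str, language: str, constraints: list[str]) -> str:
    """Assemble a structured coding-assistance prompt from its parts."""
    lines = [
        f"You are a tutor for an undergraduate {language} course.",
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    lines.append("Explain each step before showing code.")
    return "\n".join(lines)

vague = "write code for a port scanner"
structured = build_prompt(
    task="Write a TCP port scanner for localhost ports 1-1024.",
    language="Python",
    constraints=[
        "standard library only",
        "add a timeout per connection",
        "comment every function",
    ],
)
# Students submit both prompts to the model, then grade each answer
# against the explicit constraints listed in the structured prompt.
```

Evaluating the two responses against the stated constraints doubles as practice in judging AI output, addressing both halves of the educational barrier.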
Curricular
Existing curricula may not be designed to integrate AI tools effectively, lacking inclusion of AI tool usage in learning objectives and assessment strategies.
Proposed Solutions: Revise curricula to include AI tool usage as part of learning objectives and assessment strategies.
Ethical
Concerns about academic integrity, the authenticity of student work, and acceptable use of AI in assignments and assessments.
Proposed Solutions: Establish guidelines for acceptable AI use in assignments and assessments to ensure academic honesty.
Project Team
Peter Jamieson
Researcher
Suman Bhunia
Researcher
Dhananjai M. Rao
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Peter Jamieson, Suman Bhunia, Dhananjai M. Rao
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI