Skip to main content Skip to navigation

A Generative Security Application Engineering Curriculum

Project Overview

The document presents a curriculum aimed at incorporating generative AI and large language models (LLMs) into cybersecurity education, addressing the need to equip students with skills relevant to the rapidly changing cybersecurity landscape. It highlights the potential of generative AI to automate security tasks, detailing diverse applications such as threat detection, incident response, and vulnerability assessment, while also acknowledging the associated challenges, including ethical considerations and the risks of misuse. The curriculum provides hands-on experiences, encouraging students to engage in practical exercises that involve building LLM applications, understanding security vulnerabilities, and navigating the ethical implications of deploying AI technologies in security contexts. Overall, the findings suggest that integrating generative AI into education not only enhances students' technical competencies but also fosters critical thinking about the responsible use of AI in cybersecurity.

Key Applications

Generative Security Application Engineering Curriculum

Context: Cybersecurity education for university students

Implementation: Students learn to develop LLM applications using frameworks like LangChain, focusing on security tasks and vulnerabilities.

Outcomes: Students gain practical skills in utilizing generative AI for security tasks such as incident response and vulnerability discovery.

Challenges: Students face issues with model accuracy, adversarial attacks, and the need for robust security in AI applications.

Implementation Barriers

Technical Barrier

Challenges related to ensuring the security of LLM applications against adversarial attacks and vulnerabilities.

Proposed Solutions: Educating students on security vulnerabilities and implementing secure coding practices.

Cost Barrier

High costs associated with running LLMs on cloud infrastructure.

Proposed Solutions: Teaching students to evaluate the cost-effectiveness of different models and access options.

Ethical Barrier

The potential for misuse of AI in generating social engineering attacks.

Proposed Solutions: Incorporating discussions on ethics and responsible AI use into the curriculum.

Project Team

Wu-chang Feng

Researcher

David Baker-Robinson

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Wu-chang Feng, David Baker-Robinson

Source Publication: View Original PaperLink opens in a new window

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: Openai

Let us know you agree to cookies