Chaudhary, Arunima ORCID: https://orcid.org/0009-0008-5265-8679, Colombo, Gualtiero, Javed, Amir ORCID: https://orcid.org/0000-0001-9761-0945, Haseeb, Junaid, Kumar, Vimal and Larsen, Richard
2026.
CodeWars: Using LLMs for vulnerability analysis in cybersecurity education.
Presented at: 29th Colloquium: Cybersecurity Education in the Age of AI and Automation and Ambiguity,
Seattle, WA, USA,
12-14 November 2025.
Abstract
Large Language Models (LLMs) are increasingly explored as tools for software development and could also serve as a supplementary source of varied examples for pedagogical use. While they can improve productivity, their ability to produce code that is both secure and compliant with Secure Software Development (SSD) practices remains uncertain, raising concerns about their role in cybersecurity education. If LLMs are to be integrated effectively, students must be trained to critically evaluate generated code for correctness and vulnerabilities, raising an important question: how can LLM-generated code be effectively and securely incorporated into cybersecurity education for teaching vulnerability analysis? This paper introduces CodeWars, a novel teaching methodology that combines LLM-generated and human-written code to examine how students engage with vulnerability detection tasks. CodeWars was implemented as a pilot study with 32 students at Cardiff University and the University of Waikato, where students analyzed flawed, secure, and mixed-origin code samples. By comparing student approaches, analyses, and perceptions, the study provides insights into how vulnerabilities are detected, how code origins are distinguished, and how SSD practices are applied. Our analysis of student feedback and interviews indicates that CodeWars produced structured and accessible code, simplifying vulnerability identification and offering educators a means to efficiently develop varied SSD teaching materials. These findings illuminate both the advantages and constraints of employing LLMs in secure coding and position this study as a foundational step toward the responsible adoption of AI in cybersecurity education.
| Item Type: | Conference or Workshop Item - published (Paper) |
|---|---|
| Status: | In Press |
| Schools: | Schools > Computer Science & Informatics |
| Date of First Compliant Deposit: | 13 February 2026 |
| Last Modified: | 18 Feb 2026 16:00 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/184816 |