Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

Hardening machine learning Denial of Service (DoS) defences against adversarial attacks in IoT smart home networks

Anthi, Eirini, Williams, Lowri, Javed, Amir and Burnap, Peter 2021. Hardening machine learning Denial of Service (DoS) defences against adversarial attacks in IoT smart home networks. Computers & Security 108, 102352. 10.1016/j.cose.2021.102352

PDF - Published Version
Available under License Creative Commons Attribution.



Machine learning based Intrusion Detection Systems (IDS) allow flexible and efficient automated detection of cyberattacks in Internet of Things (IoT) networks. However, this has also created an additional attack vector: the machine learning models which support the IDS's decisions may themselves be subject to cyberattacks known as Adversarial Machine Learning (AML). In the context of IoT, AML can be used to manipulate the data and network traffic that traverse such devices. These perturbations increase confusion in the decision boundaries of the machine learning classifier, so that malicious network packets are often misclassified as benign. Consequently, such packets bypass machine learning based detectors, which increases the potential of significantly delaying attack detection and further consequences such as personal information leakage, damaged hardware, and financial loss. Given the impact these attacks may have, this paper proposes a rule-based approach to generating AML attack samples and explores how they can be used to target a range of supervised machine learning classifiers used for detecting Denial of Service attacks in an IoT smart home network. The analysis explores which DoS packet features to perturb and how such adversarial samples can support increasing the robustness of supervised models through adversarial training. The results demonstrated that the performance of all the top performing classifiers was affected, decreasing by a maximum of 47.2 percentage points when adversarial samples were present. Their performance improved following adversarial training, demonstrating their robustness towards such attacks.
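The workflow the abstract describes — perturbing DoS packet features so they drift toward the benign region, then folding correctly labelled adversarial samples back into training — can be sketched as follows. This is an illustrative toy, not the paper's method: the two features (`packets_per_second`, `mean_payload_bytes`), the perturbation rules, the synthetic data, and the random forest classifier are all assumptions made for the sake of a runnable example.

```python
# Hypothetical sketch of rule-based adversarial sample generation and
# adversarial training for a DoS classifier. Features, thresholds, and
# data are illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic traffic: columns are [packets_per_second, mean_payload_bytes]
benign = rng.normal([20, 500], [5, 100], size=(500, 2))
dos = rng.normal([900, 60], [50, 10], size=(500, 2))
X = np.vstack([benign, dos])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = DoS

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def perturb(samples):
    """Rule-based perturbation: throttle the flood and pad payloads so
    the attack's feature profile drifts toward the benign region."""
    adv = samples.copy()
    adv[:, 0] *= 0.05   # slow the packet rate toward benign levels
    adv[:, 1] += 400    # inflate payload size toward benign levels
    return adv

adv_dos = perturb(dos)
evaded = (clf.predict(adv_dos) == 0).mean()
print(f"adversarial DoS misclassified as benign: {evaded:.0%}")

# Adversarial training: augment the training set with the adversarial
# samples under their correct (malicious) label and refit.
X_aug = np.vstack([X, adv_dos])
y_aug = np.concatenate([y, np.ones(len(adv_dos), dtype=int)])
hardened = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_aug, y_aug)

caught = (hardened.predict(perturb(dos)) == 1).mean()
print(f"hardened model detects adversarial DoS: {caught:.0%}")
```

Before adversarial training, most perturbed attack packets land on the benign side of the decision boundary; after augmenting the training set with correctly labelled adversarial samples, the retrained model recovers detection, mirroring the hardening effect the paper reports.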

Item Type: Article
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Additional Information: This is an open access article under the CC BY license.
Publisher: Elsevier
ISSN: 0167-4048
Funders: EPSRC
Date of First Compliant Deposit: 25 May 2021
Date of Acceptance: 24 May 2021
Last Modified: 08 Nov 2023 12:24

Citation Data

Cited 16 times in Scopus.


