Evaluation of Machine Learning Models for Detecting Adversarial Attacks on an Anomaly Detection Oriented Dataset


Date

2025-01-21

Publisher

Tassadit

Abstract

This report evaluates the capability of machine learning models to detect adversarial attacks, using the NSL-KDD dataset as the test bed. The study's objectives are twofold: first, to analyze the dynamics of an autoencoder's reconstruction loss for normal, anomalous, and adversarial data points; second, to benchmark candidate models, including Support Vector Machines (SVM), Decision Trees, and Naive Bayes, at detecting adversarial data crafted with the Fast Gradient Sign Method (FGSM) [5] and Projected Gradient Descent (PGD) [10]. Additionally, this research tests a feature engineering technique, suggested in recent literature, that treats the reconstruction loss as a vector [21]. The results demonstrate that the reconstruction loss behaves similarly for anomalous and adversarial examples, distinguishing both from normal records in terms of mean and variance. Furthermore, the study reveals that the benchmarked models face significantly greater difficulty detecting PGD attacks than FGSM attacks.
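To make the attack techniques named in the abstract concrete, here is a minimal sketch of FGSM, PGD, and a per-feature reconstruction-loss vector, using NumPy and a toy quadratic loss. This is an illustrative assumption, not the report's actual models or pipeline: the linear "model", the epsilon/step-size values, and the helper names (`fgsm_perturb`, `pgd_perturb`, `reconstruction_loss_vector`) are hypothetical.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM: one step of size eps in the sign of the loss gradient."""
    return x + eps * np.sign(grad)

def pgd_perturb(x, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """PGD: iterated signed gradient steps, projected back into the
    L-infinity ball of radius eps around the original point x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

def reconstruction_loss_vector(x, x_hat):
    """Per-feature squared error kept as a vector (not averaged to a
    scalar), as in the feature-engineering idea the abstract mentions."""
    return (x - x_hat) ** 2

# Toy quadratic loss standing in for a trained model:
# loss(z) = 0.5 * ||W z - y||^2, so grad wrt z is W.T (W z - y).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
x_true = rng.normal(size=4)
y = W @ x_true
grad_fn = lambda z: W.T @ (W @ z - y)
loss = lambda z: 0.5 * np.sum((W @ z - y) ** 2)

x0 = x_true + 0.01                       # a near-optimal starting point
x_fgsm = fgsm_perturb(x0, grad_fn(x0))   # single-step attack
x_pgd = pgd_perturb(x0, grad_fn)         # iterative, projected attack
```

For a convex quadratic loss, a signed-gradient step can only increase the loss, which is why FGSM reliably degrades this toy model; PGD additionally constrains the perturbation to stay within `eps` of the original point in every coordinate.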

Keywords

Machine Learning, Adversarial Examples, Robustness, Autoencoders, FGSM, PGD, Anomaly Detection, Adversarial Attacks
