Author: Ahmed Yacine Bouchouareb
Date: 2025-01-21
URI: https://dspace.estin.dz/handle/123456789/23

Abstract: This report evaluates the capability of machine learning models to detect adversarial attacks, using the NSL-KDD dataset as a test case. The study's objectives are twofold: first, to analyze the dynamics of the autoencoder's reconstruction loss for normal, anomalous, and adversarial data points; second, to benchmark various candidate models, including Support Vector Machines (SVM), Decision Trees, and Naive Bayes, in detecting adversarial data crafted using the Fast Gradient Sign Method (FGSM) [5] and Projected Gradient Descent (PGD) [10] techniques. Additionally, this research tests a feature engineering technique that treats the reconstruction loss as a vector [21], as suggested in recent literature. The results demonstrate that the reconstruction loss behaves similarly for anomalous and adversarial examples, differentiating both from normal records in terms of mean and variance. Furthermore, the study reveals that the benchmarked models face significantly greater challenges in detecting PGD attacks than FGSM attacks.

Language: en
Keywords: Machine Learning; Adversarial Examples; Robustness; Autoencoders; FGSM; PGD; Anomaly Detection; Adversarial Attacks
Title: Evaluation of Machine learning Models for Detecting Adversarial attacks on Anomaly Detection Oriented Dataset
Type: Thesis
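Since the abstract names two concrete techniques, a minimal sketch may help the reader: one-step FGSM crafting (PGD, per [10], iterates this same step with projection back into an epsilon-ball around the input), and the per-feature reconstruction-loss vector used as engineered features. Everything below is an illustrative assumption, not the thesis code: the PyTorch stand-in models, layer sizes, and epsilon are hypothetical.

```python
# A minimal sketch (assumed setup, not the thesis implementation):
# FGSM crafting against a toy classifier, plus the per-feature
# reconstruction-loss vector the abstract describes.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_features = 41  # NSL-KDD has 41 features before encoding; assumed here

# Toy classifier standing in for the attacked model
clf = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 2))

# Toy autoencoder whose reconstruction error is the detection signal
autoencoder = nn.Sequential(
    nn.Linear(n_features, 16), nn.ReLU(),   # encoder
    nn.Linear(16, n_features),              # decoder
)

def fgsm(x, y, epsilon=0.05):
    """One-step FGSM: x_adv = x + eps * sign(grad_x CE(clf(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(clf(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

def reconstruction_loss_vector(x):
    """Per-feature squared error, kept as a vector rather than collapsed
    to a scalar mean, so a downstream model (SVM, Decision Tree, Naive
    Bayes) can weight each feature's error separately."""
    with torch.no_grad():
        return (x - autoencoder(x)) ** 2  # shape: (batch, n_features)

x = torch.rand(8, n_features)        # stand-in for normalized NSL-KDD rows
y = torch.randint(0, 2, (8,))        # 0 = normal, 1 = attack (illustrative)
x_adv = fgsm(x, y)
feats = reconstruction_loss_vector(x_adv)  # input to the benchmarked models
print(feats.shape)                         # torch.Size([8, 41])
```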