Intelligence Artificielle et Data Sciences
Browsing Intelligence Artificielle et Data Sciences by Subject "Anomaly Detection"
Publication: Enhancing Adversarial Robustness in Machine Learning: Techniques and Evaluations (Tassadit, 2025-01-25). Ahmed Yacine Bouchouareb.
This master's report aims to provide a comprehensive review of the literature on the robustness of machine learning models against adversarial attacks. The primary objectives are to explore existing methodologies, highlight key research findings, and identify gaps in current knowledge. The report examines autoencoder-based approaches for detecting adversarial examples as well as other defensive techniques such as adversarial training and regularization. Various adversarial crafting methods, such as the Fast Gradient Sign Method (FGSM) [10] and Projected Gradient Descent (PGD) [17], are analyzed in depth. The insights gained will serve as a solid foundation for the development of more robust models in future research.
Item: Evaluation of Machine Learning Models for Detecting Adversarial Attacks on Anomaly Detection Oriented Dataset (Tassadit, 2025-01-21). Ahmed Yacine Bouchouareb.
This report evaluates the capability of machine learning models to detect adversarial attacks on an anomaly detection oriented dataset, using the NSL-KDD dataset as the test case. The study's objectives are twofold: first, to analyze the dynamics of the autoencoder's reconstruction loss for normal, anomalous, and adversarial data points; second, to benchmark various candidate models, including Support Vector Machines (SVM), Decision Trees, and Naive Bayes, in detecting adversarial data crafted using the Fast Gradient Sign Method (FGSM) [5] and Projected Gradient Descent (PGD) [10]. Additionally, the research tests a feature engineering technique that treats the reconstruction loss as a vector [21], as suggested in recent literature. The results show that the reconstruction loss behaves similarly for anomalous and adversarial examples, and that both differ from normal records in terms of mean and variance. Furthermore, the study reveals that the benchmarked models have significantly more difficulty detecting PGD attacks than FGSM attacks.
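Both abstracts reference the FGSM and PGD crafting methods. For orientation, a minimal sketch of the two attacks is given below; it assumes a differentiable PyTorch classifier, and the names model, loss_fn, epsilon, alpha, and steps are illustrative placeholders rather than details taken from the reports.

import torch

def fgsm_attack(model, loss_fn, x, y, epsilon):
    # FGSM: a single step of size epsilon in the direction of the sign of
    # the loss gradient with respect to the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def pgd_attack(model, loss_fn, x, y, epsilon, alpha, steps):
    # PGD: repeated FGSM-style steps of size alpha, each followed by a
    # projection back into the L-infinity ball of radius epsilon around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        x_adv = x_adv.detach()
    return x_adv

Because PGD takes several small projected steps instead of one large one, it typically finds stronger perturbations than FGSM, which is consistent with the second report's finding that PGD attacks are harder to detect.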
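The second report's feature engineering step treats the reconstruction loss as a vector [21]: one reconstruction error per input feature rather than a single aggregated scalar per record, with that vector fed to the downstream detectors (SVM, Decision Trees, Naive Bayes). The sketch below illustrates the idea under the assumption of a trained autoencoder exposing a predict method on NumPy arrays; autoencoder, X_train, and y_train are hypothetical names, not artifacts from the report.

from sklearn.svm import SVC

def reconstruction_error_vector(autoencoder, X):
    # Per-feature squared reconstruction error, shape (n_records, n_features).
    X_hat = autoencoder.predict(X)
    return (X - X_hat) ** 2

def reconstruction_error_scalar(autoencoder, X):
    # Conventional scalar variant: mean squared error per record, shape (n_records,).
    return reconstruction_error_vector(autoencoder, X).mean(axis=1)

def fit_error_vector_detector(autoencoder, X_train, y_train):
    # Train one of the benchmarked detector families (here an SVM) on the
    # per-feature error vectors; y_train labels normal vs. adversarial/anomalous records.
    E_train = reconstruction_error_vector(autoencoder, X_train)
    return SVC().fit(E_train, y_train)

Keeping the per-feature error vector preserves information about which features reconstruct poorly, information that the scalar mean discards.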