Publication: Enhancing Adversarial Robustness in Machine Learning: Techniques and Evaluations
Date
2025-01-25
Authors
Tassadit
Abstract
This master’s report aims to provide a comprehensive review of the literature on the robustness of machine learning models against adversarial attacks. The primary objectives are to explore existing methodologies, highlight key research findings, and identify gaps in current knowledge. The report examines autoencoder-based approaches for detecting adversarial examples as well as other defensive techniques such as adversarial training and regularization. Various adversarial crafting methods, such as the Fast Gradient Sign Method (FGSM)[10] and Projected Gradient Descent (PGD)[17], are analyzed in depth. The insights gained will serve as a solid foundation for the development of more robust models in future research.
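To make the crafting methods named above concrete, here is a minimal sketch of the FGSM perturbation, x_adv = x + ε·sign(∇ₓL), applied to a toy logistic-loss model. The weights, inputs, and helper names are illustrative assumptions, not taken from the report itself:

```python
import math

def fgsm_perturb(x, grad, epsilon):
    """FGSM step: move each input coordinate by epsilon in the
    direction of the sign of the loss gradient w.r.t. the input."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

# Toy model (illustrative): logistic loss L = log(1 + exp(-y * <w, x>)).
# Its gradient with respect to the input x is -y * sigmoid(-y * <w, x>) * w.
w = [1.0, -2.0, 0.5]   # hypothetical model weights
x = [0.3, 0.1, -0.4]   # hypothetical clean input
y = 1.0                # true label in {-1, +1}
margin = y * sum(wi * xi for wi, xi in zip(w, x))
grad_x = [-y * (1.0 / (1.0 + math.exp(margin))) * wi for wi in w]

x_adv = fgsm_perturb(x, grad_x, epsilon=0.1)
```

PGD, also analyzed in the report, iterates this same step several times while projecting the perturbation back into an ε-ball around the clean input.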
Description
This master’s thesis aims to provide a complete review of the literature on the robustness of machine learning models against adversarial attacks. The main objectives are to explore existing methodologies, highlight major research results, and identify gaps in current knowledge. The report examines autoencoder-based approaches for detecting adversarial examples, as well as other defensive techniques such as adversarial training and regularization. Various adversarial attack generation methods, such as FGSM[10] and PGD[17], are analyzed in depth. The knowledge gained will serve as a solid foundation for developing more robust models in future research.
Keywords
Machine Learning, Adversarial Examples, Robustness, Autoencoders, FGSM, PGD, Anomaly Detection, Adversarial Attacks