Publication:
Enhancing Adversarial Robustness in Machine Learning: Techniques and Evaluations

dc.contributor.author: Ahmed Yacine Bouchouareb
dc.date.accessioned: 2025-01-25T10:28:43Z
dc.date.available: 2025-01-25T10:28:43Z
dc.date.issued: 2025-01-25
dc.description: Ce mémoire de master vise à fournir une revue complète de la littérature sur la robustesse des modèles d’apprentissage automatique face aux attaques adversariales. Les objectifs principaux sont d’explorer les méthodologies existantes, de mettre en avant les résultats de recherche majeurs et d’identifier les lacunes dans les connaissances actuelles. Le rapport examine les approches basées sur les autoencodeurs pour détecter les exemples adversariaux ainsi que d’autres techniques défensives telles que l’entraînement adversarial et les techniques de régularisation. Diverses méthodes de génération d’attaques adversariales, comme FGSM [10] et PGD [17], sont analysées de manière approfondie. Les connaissances acquises serviront de base solide pour le développement de modèles plus robustes dans les recherches futures.
dc.description.abstract: This master’s report aims to provide a comprehensive review of the literature on the robustness of machine learning models against adversarial attacks. The primary objectives are to explore existing methodologies, highlight key research findings, and identify gaps in current knowledge. The report examines autoencoder-based approaches for detecting adversarial examples as well as other defensive techniques such as adversarial training and regularization. Various adversarial crafting methods, such as the Fast Gradient Sign Method (FGSM) [10] and Projected Gradient Descent (PGD) [17], are analyzed in depth. The insights gained will serve as a solid foundation for the development of more robust models in future research.
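As an illustration of one of the crafting methods named in the abstract, the short PyTorch sketch below shows the single gradient-sign step that FGSM performs; the names model, loss_fn, x, y and the epsilon value are illustrative assumptions, not taken from the thesis.

import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    # FGSM sketch (assumed placeholders, inputs assumed scaled to [0, 1]):
    # perturb the input in the direction of the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)   # loss on the clean input
    loss.backward()                   # gradient of the loss w.r.t. the input
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

PGD [17] can be seen as the iterative variant of this step, re-projecting onto the epsilon-ball around the original input after each update.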
dc.identifier.uri: https://dspace.estin.dz/handle/123456789/36
dc.language.iso: en
dc.publisher: Tassadit
dc.subject: Machine Learning
dc.subject: Adversarial Examples
dc.subject: Robustness
dc.subject: Autoencoders
dc.subject: FGSM
dc.subject: PGD
dc.subject: Anomaly Detection
dc.subject: Adversarial Attacks
dc.subject: Apprentissage automatique
dc.subject: Exemples adversariaux
dc.subject: Robustesse
dc.subject: Autoencodeurs
dc.subject: Détection d’anomalies
dc.subject: Attaques adversariales
dc.title: Enhancing Adversarial Robustness in Machine Learning: Techniques and Evaluations
dc.type: Thesis
dspace.entity.type: Publication

Files

Original bundle
Name: master_mem_bouchouareb - AHMEDYACINE BOUCHOUAREB.pdf
Size: 1.61 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission