Publication: Enhancing Adversarial Robustness in Machine Learning: Techniques and Evaluations
dc.contributor.author | Ahmed Yacine Bouchouareb | |
dc.date.accessioned | 2025-01-25T10:28:43Z | |
dc.date.available | 2025-01-25T10:28:43Z | |
dc.date.issued | 2025-01-25 | |
dc.description | Ce mémoire de master vise à fournir une revue complète de la littérature sur la robustesse des modèles d’apprentissage automatique face aux attaques adversariales. Les objectifs principaux sont d’explorer les méthodologies existantes, de mettre en avant les résultats de recherche majeurs et d’identifier les lacunes dans les connaissances actuelles. Le rapport examine les approches basées sur les autoencodeurs pour détecter les exemples adversariaux ainsi que d’autres techniques défensives telles que l’entraînement adversarial et les techniques de régularisation. Diverses méthodes de génération d’attaques adversariales, comme FGSM[10] et PGD[17], sont analysées de manière approfondie. Les connaissances acquises serviront de base solide pour le développement de modèles plus robustes dans les recherches futures. | |
dc.description.abstract | This master’s report aims to provide a comprehensive review of the literature on the robustness of machine learning models against adversarial attacks. The primary objectives are to explore existing methodologies, highlight key research findings, and identify gaps in current knowledge. The report examines autoencoder-based approaches for detecting adversarial examples as well as other defensive techniques such as adversarial training and regularization techniques. Various adversarial crafting methods, such as Fast Gradient Sign Method (FGSM)[10] and Projected Gradient Descent (PGD)[17], are analyzed in depth. The insights gained will serve as a solid foundation for the development of more robust models in future research. | |
dc.identifier.uri | https://dspace.estin.dz/handle/123456789/36 | |
dc.language.iso | en | |
dc.publisher | Tassadit | |
dc.subject | Machine Learning | |
dc.subject | Adversarial Examples | |
dc.subject | Robustness | |
dc.subject | Autoencoders | |
dc.subject | FGSM | |
dc.subject | PGD | |
dc.subject | Anomaly Detection | |
dc.subject | Adversarial Attacks | |
dc.subject | Apprentissage automatique | |
dc.subject | Exemples adversariaux | |
dc.subject | Robustesse | |
dc.subject | Autoencodeurs | |
dc.subject | Détection d’anomalies | |
dc.subject | Attaques adversariales | |
dc.title | Enhancing Adversarial Robustness in Machine Learning: Techniques and Evaluations | |
dc.type | Thesis | |
dspace.entity.type | Publication |
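The abstract above names FGSM and PGD as the adversarial crafting methods analyzed in the thesis. As a minimal illustrative sketch only (not code from the thesis itself), the two attacks can be expressed against an assumed logistic-regression model, where the loss gradient with respect to the input is available in closed form; all function and parameter names here are hypothetical:

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """Fast Gradient Sign Method: perturb x by eps in the direction of the
    sign of the loss gradient, increasing the cross-entropy loss for the
    true label y in {0, 1} under a logistic-regression model (w, b)."""
    z = np.dot(w, x) + b            # model logit
    p = 1.0 / (1.0 + np.exp(-z))    # predicted probability of class 1
    grad_x = (p - y) * w            # d(cross-entropy)/dx in closed form
    return x + eps * np.sign(grad_x)

def pgd_attack(x, y, w, b, eps, alpha, steps):
    """Projected Gradient Descent: iterate small FGSM steps of size alpha,
    projecting back onto the L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm_attack(x_adv, y, w, b, alpha)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv
```

PGD is simply the multi-step refinement of FGSM: the per-step size `alpha` is typically smaller than the overall budget `eps`, and the clip keeps every iterate within the allowed perturbation ball.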
Files
Original bundle
- Name:
- master_mem_bouchouareb - AHMEDYACINE BOUCHOUAREB.pdf
- Size:
- 1.61 MB
- Format:
- Adobe Portable Document Format
License bundle
- Name:
- license.txt
- Size:
- 1.71 KB
- Format:
- Description:
- Item-specific license agreed to upon submission