Evaluating the Robustness of Deep Learning Models Against Adversarial Attacks
Abstract
The rapid integration of Artificial Intelligence (AI) across sectors has transformed the contemporary technological landscape, driven in large part by deep learning (DL) models that power applications ranging from automated vehicles to security surveillance. These systems, however, are vulnerable to adversarial attacks, in which inputs are subtly manipulated to mislead a model, posing a substantial risk to their reliability and safety. This master's thesis investigates the robustness of object detection models under adversarial attack, with the goal of connecting theoretical research to robustness in real-world applications.

The research focuses on evaluating deep learning models in adversarial settings created with techniques such as the Fast Gradient Sign Method (FGSM). The models were selected for their widespread use and demonstrated success on complex image datasets such as CIFAR-10, which served as the primary dataset for training and testing. CIFAR-10 offers a varied collection of images for thoroughly testing the models' ability to correctly recognize and classify perturbed data.

To build a thorough understanding, the thesis examines a range of challenging scenarios to assess how well the models perform and to identify ways to strengthen their resistance to adversarial attacks. The study follows a structured methodology that begins with a review of the existing literature, situating the work within current academic and practical discussions of AI security. The empirical analysis covers complete model training pipelines, applying both standard and adversarial training to compare the effectiveness of different training modifications.

This research contributes to the ongoing discussion on improving AI security, offering practical strategies for hardening DL models against the evolving threat of adversarial attacks. By deepening our understanding of these vulnerabilities and defenses, the study supports the development of more robust AI applications, helping to ensure their reliability and trustworthiness in critical real-world operations.
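As a concrete illustration of the attack technique named above, the following is a minimal FGSM sketch in PyTorch. The classifier, the input scaling, and the epsilon value are illustrative assumptions rather than the exact configuration used in the thesis; FGSM itself simply perturbs each input in the direction of the sign of the loss gradient.

```python
# Minimal FGSM sketch (assumptions: a PyTorch classifier `model`, image tensors
# scaled to [0, 1], and an illustrative epsilon; not the thesis configuration).
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    """Return adversarial examples x_adv = clip(x + epsilon * sign(grad_x loss))."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that increases the classification loss.
    adv = images + epsilon * images.grad.sign()
    # Keep the perturbed images inside the valid pixel range.
    return adv.clamp(0.0, 1.0).detach()
```

Because the perturbation is bounded by epsilon per pixel, the adversarial image remains visually close to the original while shifting the model's prediction.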
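The adversarial training compared against standard training in the empirical analysis can likewise be sketched in simplified form. The ResNet-18 backbone, optimizer settings, perturbation budget, and the equal weighting of clean and adversarial loss below are assumptions for illustration; the thesis' actual models and hyperparameters may differ.

```python
# Sketch of FGSM-based adversarial training on CIFAR-10 (the ResNet-18 backbone
# and all hyperparameters are illustrative assumptions, not the thesis setup).
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

device = "cuda" if torch.cuda.is_available() else "cpu"
train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=128, shuffle=True)

model = models.resnet18(num_classes=10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
epsilon = 8 / 255  # illustrative perturbation budget

model.train()
for images, labels in loader:
    images, labels = images.to(device), labels.to(device)

    # Craft FGSM adversarial examples on the fly from the current model.
    images.requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(images), labels), images)[0]
    adv_images = (images + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

    # Optimize on an equal mix of clean and adversarial batches.
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(images), labels) \
         + 0.5 * F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
```

Regenerating the adversarial examples from the current model at every step is what distinguishes adversarial training from simply augmenting the dataset with a fixed set of perturbed images.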