sFluk-week

My Summary of Invited Talk 2

Adversarial Attacks and Defenses in Deep Learning Systems: Threats, Mechanisms, and Countermeasures

Cyberattacks on deep learning systems, particularly through subtly modified images known as adversarial examples, can mislead models into making confidently wrong predictions. The talk focuses on the vulnerability of neural network architectures such as Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), which can have real-world safety consequences, for example in autonomous driving systems.

The speaker explains the mechanism of crafting small perturbations that are imperceptible to humans yet can severely degrade a model's accuracy. Various defense measures are then analyzed, including STRAP-ViT, a new technique for detecting and mitigating patch attacks without modifying the core model architecture. This helps improve the reliability of artificial intelligence systems in the future.
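The talk summary does not spell out how these perturbations are built, but the classic example is the Fast Gradient Sign Method (FGSM): nudge each input dimension by a tiny amount `eps` in the direction that increases the model's loss. Below is a minimal NumPy sketch on a toy logistic-regression "model" (the function names and the `eps` value are illustrative, not from the talk; a real attack would target a CNN or ViT via automatic differentiation):

```python
import numpy as np

def fgsm_perturbation(x, w, b, y_true, eps):
    """FGSM on a toy logistic-regression model: p = sigmoid(w.x + b).

    Shifts every component of x by exactly +/- eps in the direction
    that increases the cross-entropy loss, so no pixel changes by
    more than eps (imperceptible for small eps).
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the binary cross-entropy loss w.r.t. the input x,
    # for a label y_true in {0, 1}: d(loss)/dx = (p - y_true) * w
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

def predict(x, w, b):
    """Model confidence that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Toy demo: an input the model classifies confidently as class 1 ...
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = w / np.linalg.norm(w)  # aligned with the weights -> high confidence

# ... loses confidence after an eps-bounded perturbation.
x_adv = fgsm_perturbation(x, w, b, y_true=1, eps=0.2)
print(predict(x, w, b), predict(x_adv, w, b))
```

Because the perturbation follows the sign of the loss gradient, the adversarial confidence is strictly lower than the clean one even though each component of the input moved by at most 0.2.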
