NAISS
SUPR
NAISS Projects
Robust Deep Neural Network for Mitigation of Adversarial Attacks
Dnr: NAISS 2026/4-598
Type: NAISS Small
Principal Investigator: Mousumi Saha
Affiliation: Mälardalens universitet
Start Date: 2026-03-24
End Date: 2027-04-01
Primary Classification: 10210: Artificial Intelligence
Webpage:


Abstract

Deep learning models are increasingly used in safety-critical domains such as healthcare, autonomous driving, and financial systems. Despite their success, these models are vulnerable to adversarial attacks, in which small, carefully crafted perturbations of the input cause incorrect predictions while remaining almost imperceptible to humans. Such weaknesses raise serious concerns about the reliability and security of AI systems in real-world applications. A further challenge is the need to deploy deep learning models on resource-constrained devices: running large neural networks on edge hardware requires reducing model size and computational cost, which often degrades performance, and limited memory and processing power make training or updating models on such devices difficult. This research aims to develop robust and efficient defense mechanisms against adversarial attacks that remain practical under real-world deployment constraints. The study will focus on understanding adversarial vulnerabilities, designing effective detection and defense methods, and developing robust yet computationally efficient frameworks. Techniques such as adversarial training, robust optimization, anomaly detection, and explainable AI will be explored to build more reliable and secure deep learning systems.
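To make the attack mechanism described above concrete, the following minimal sketch shows a fast-gradient-sign-method (FGSM)-style perturbation against a toy logistic-regression "model". The weights, input, and epsilon value are illustrative assumptions, not part of the project; the point is only that a small input perturbation in the direction of the loss gradient measurably degrades the model's confidence in the correct class.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=0.1):
    """Return x + eps * sign(grad of cross-entropy loss w.r.t. x)."""
    p = sigmoid(w @ x + b)            # model's predicted probability of class 1
    grad_x = (p - y_true) * w         # gradient of the loss with respect to the input
    return x + eps * np.sign(grad_x)  # small, worst-case-direction perturbation

# Illustrative toy model and input (assumed, not from the project).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0  # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.2)
print(sigmoid(w @ x + b))      # confidence on the clean input
print(sigmoid(w @ x_adv + b))  # strictly lower confidence on the perturbed input
```

Note that the perturbation is bounded by eps per coordinate, so the adversarial input stays close to the original; adversarial training, one of the defenses named in the abstract, would augment the training set with such perturbed examples.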