TY - JOUR
T1 - LRNAS: Differentiable Searching for Adversarially Robust Lightweight Neural Architecture
AU - Feng, Yuqi
AU - Lv, Zeqiong
AU - Chen, Hongyang
AU - Gao, Shangce
AU - An, Fengping
AU - Sun, Yanan
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2025
Y1 - 2025
AB - Adversarial robustness is critical to deep neural networks (DNNs) in deployment. However, improving adversarial robustness often requires compromising on network size. Existing approaches to this problem mainly combine model compression with adversarial training, yet their performance heavily relies on the neural architecture, which is typically designed manually with extensive expertise. In this article, we propose a lightweight and robust neural architecture search (LRNAS) method to automatically search for adversarially robust lightweight neural architectures. Specifically, we propose a novel search strategy that quantifies the contributions of the components in the search space, based on which the beneficial components can be determined. In addition, we propose an architecture selection method based on a greedy strategy, which constrains the model size while retaining sufficient beneficial components. Owing to these designs, LRNAS collectively guarantees the lightness, natural accuracy, and adversarial robustness of the searched architectures. We conduct extensive experiments on various benchmark datasets against state-of-the-art methods. The experimental results demonstrate that LRNAS is superior at finding lightweight neural architectures that are both accurate and adversarially robust under popular adversarial attacks. Moreover, ablation studies reveal the validity of the individual components designed in LRNAS and their positive effects on overall performance.
KW - Adversarial attack
KW - adversarial robustness
KW - lightweight neural architecture
KW - neural architecture search (NAS)
KW - search space
UR - http://www.scopus.com/inward/record.url?scp=86000427216&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2024.3382724
DO - 10.1109/TNNLS.2024.3382724
M3 - Article
C2 - 38568761
AN - SCOPUS:86000427216
SN - 2162-237X
VL - 36
SP - 5629
EP - 5643
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 3
ER -