AI in Communication Division

NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks

Authors: Seokil Ham, Jungwuk Park, Dong-Jun Han, Jaekyun Moon

Conference: Neural Information Processing Systems (NeurIPS) 2023

Abstract:

While multi-exit neural networks are regarded as a promising solution for enabling efficient inference via early exits, combating adversarial attacks remains a challenging problem. In multi-exit networks, due to the high dependency among different submodels, an adversarial example targeting a specific exit not only degrades the performance of the target exit but also reduces the performance of all other exits concurrently. This makes multi-exit networks highly vulnerable to simple adversarial attacks. In this paper, we propose NEO-KD, a knowledge-distillation-based adversarial training strategy that tackles this fundamental challenge of multi-exit networks with two key contributions. NEO-KD first resorts to neighbor knowledge distillation to guide the outputs of adversarial examples toward the ensembled outputs of the neighbor exits of clean data. NEO-KD also employs exit-wise orthogonal knowledge distillation to reduce adversarial transferability across different submodels. The result is significantly improved robustness against adversarial attacks. Experimental results on various datasets/models show that our method achieves the best adversarial accuracy with reduced computation budgets, compared to other baselines relying on existing adversarial training or knowledge distillation techniques for multi-exit networks.
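To make the neighbor-knowledge-distillation idea described in the abstract more concrete, the following is a minimal PyTorch sketch (not the authors' released code) of how such a training loss could look for a multi-exit network, assuming the model returns one logit tensor per exit for both clean and adversarial inputs. The helper `neighbor_ensemble`, the temperature `T`, and the weight `alpha` are illustrative assumptions, and the exit-wise orthogonal distillation term is omitted.

```python
# Hedged sketch of a neighbor-KD-style adversarial training loss for a
# multi-exit network. Each exit's adversarial output is pushed toward the
# ensembled clean predictions of its neighboring exits, alongside a standard
# adversarial cross-entropy term. Hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def neighbor_ensemble(clean_logits, i, T=1.0):
    """Average the softened clean predictions of the exits adjacent to exit i."""
    idx = [j for j in (i - 1, i + 1) if 0 <= j < len(clean_logits)]
    probs = [F.softmax(clean_logits[j] / T, dim=1) for j in idx]
    return torch.stack(probs).mean(dim=0)

def neighbor_kd_loss(clean_logits, adv_logits, labels, T=1.0, alpha=0.5):
    """Adversarial cross-entropy per exit plus distillation toward the
    clean neighbor-exit ensemble (illustrative, not the paper's exact loss)."""
    loss = 0.0
    for i, adv in enumerate(adv_logits):
        ce = F.cross_entropy(adv, labels)                  # adversarial training term
        target = neighbor_ensemble(clean_logits, i, T)     # ensembled clean neighbors
        kd = F.kl_div(F.log_softmax(adv / T, dim=1),       # guide adv. output toward it
                      target, reduction="batchmean") * (T ** 2)
        loss = loss + ce + alpha * kd
    return loss / len(adv_logits)
```

In practice, `clean_logits` and `adv_logits` would come from two forward passes of the same multi-exit model, on the clean batch and on its adversarially perturbed counterpart, respectively.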
