TY - GEN
T1 - On Reducing Adversarial Vulnerability with Data Dependent Stochastic Resonance
AU - Schwartz, David
AU - Ditzler, Gregory
N1 - Funding Information:
This work was supported by grants from the Department of Energy (#DE-NA0003946) and the National Science Foundation CAREER program (#1943552). Source code is available at https://github.com/dmschwar/ddsr; pre-trained models are available on request. Haris Iqbal developed the open-source software PlotNeuralNet, which was used to illustrate Figure 1.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Neural networks are vulnerable to adversarial attacks formed by minuscule perturbations to the original data. These perturbations lead to significant performance degradation. Previous works on defenses against adversarial evasion attacks typically involve pre-processing input data at training or testing time, or modifying the objective function optimized during training. In contrast, relatively few defense methods focus on modifying the topology and functionality of the underlying defended neural network. Additionally, prior theoretical examinations of the geometry of adversarial examples reveal a challenging and intrinsic trade-off between adversarial and benign accuracy. We introduce a novel modification to a traditional feed-forward convolutional neural network that embeds uncertainty within the network's hidden representations in a learned, data-dependent manner. Our proposed alteration renders the network significantly more resilient than comparably computationally expensive alternatives. Further, an empirical investigation of the proposed defense demonstrates that, unlike prior defense techniques comparable to the state of the art, the stochastic resonance effect improves adversarial accuracy without significant degradation in benign accuracy.
AB - Neural networks are vulnerable to adversarial attacks formed by minuscule perturbations to the original data. These perturbations lead to significant performance degradation. Previous works on defenses against adversarial evasion attacks typically involve pre-processing input data at training or testing time, or modifying the objective function optimized during training. In contrast, relatively few defense methods focus on modifying the topology and functionality of the underlying defended neural network. Additionally, prior theoretical examinations of the geometry of adversarial examples reveal a challenging and intrinsic trade-off between adversarial and benign accuracy. We introduce a novel modification to a traditional feed-forward convolutional neural network that embeds uncertainty within the network's hidden representations in a learned, data-dependent manner. Our proposed alteration renders the network significantly more resilient than comparably computationally expensive alternatives. Further, an empirical investigation of the proposed defense demonstrates that, unlike prior defense techniques comparable to the state of the art, the stochastic resonance effect improves adversarial accuracy without significant degradation in benign accuracy.
UR - http://www.scopus.com/inward/record.url?scp=85147792992&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85147792992&partnerID=8YFLogxK
U2 - 10.1109/SSCI51031.2022.10022248
DO - 10.1109/SSCI51031.2022.10022248
M3 - Conference contribution
AN - SCOPUS:85147792992
T3 - Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022
SP - 1334
EP - 1341
BT - Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022
A2 - Ishibuchi, Hisao
A2 - Kwoh, Chee-Keong
A2 - Tan, Ah-Hwee
A2 - Srinivasan, Dipti
A2 - Miao, Chunyan
A2 - Trivedi, Anupam
A2 - Crockett, Keeley
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022
Y2 - 4 December 2022 through 7 December 2022
ER -