TY - GEN
T1 - Mitigation of Adversarial Examples in RF Deep Classifiers Utilizing AutoEncoder Pre-training
AU - Kokalj-Filipovic, Silvija
AU - Miller, Rob
AU - Chang, Nicholas
AU - Lau, C. L.
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/5
Y1 - 2019/5
N2 - Adversarial examples in machine learning for images are widely publicized and explored. Illustrations of misclassifications caused by these slightly perturbed inputs are abundant and commonly known (e.g., a picture of a panda imperceptibly perturbed to fool the classifier into incorrectly labeling it as a gibbon). Similar attacks on deep learning (DL) for radio frequency (RF) signals and their mitigation strategies are scarcely addressed in the published work. Yet, RF adversarial examples (AdExs) with minimal waveform perturbations can cause drastic, targeted misclassification results, particularly against spectrum sensing/survey applications (e.g., BPSK mistaken for 8-PSK). Our research on deep learning AdExs and our proposed defense mechanisms are RF-centric and incorporate physical-world, over-the-air (OTA) effects. We herein present defense mechanisms based on pre-training the target classifier using an autoencoder. Our results validate this approach as a viable mitigation method against adversarial attacks on deep learning-based communications and radar sensing systems.
AB - Adversarial examples in machine learning for images are widely publicized and explored. Illustrations of misclassifications caused by these slightly perturbed inputs are abundant and commonly known (e.g., a picture of a panda imperceptibly perturbed to fool the classifier into incorrectly labeling it as a gibbon). Similar attacks on deep learning (DL) for radio frequency (RF) signals and their mitigation strategies are scarcely addressed in the published work. Yet, RF adversarial examples (AdExs) with minimal waveform perturbations can cause drastic, targeted misclassification results, particularly against spectrum sensing/survey applications (e.g., BPSK mistaken for 8-PSK). Our research on deep learning AdExs and our proposed defense mechanisms are RF-centric and incorporate physical-world, over-the-air (OTA) effects. We herein present defense mechanisms based on pre-training the target classifier using an autoencoder. Our results validate this approach as a viable mitigation method against adversarial attacks on deep learning-based communications and radar sensing systems.
UR - https://www.scopus.com/pages/publications/85066630893
U2 - 10.1109/ICMCIS.2019.8842663
DO - 10.1109/ICMCIS.2019.8842663
M3 - Conference contribution
AN - SCOPUS:85066630893
T3 - 2019 International Conference on Military Communications and Information Systems, ICMCIS 2019
BT - 2019 International Conference on Military Communications and Information Systems, ICMCIS 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 International Conference on Military Communications and Information Systems, ICMCIS 2019
Y2 - 14 May 2019 through 15 May 2019
ER -