TY - GEN
T1 - Adversarially Robust Continual Learning
AU - Khan, Hikmat
AU - Bouaynaya, Nidhal Carla
AU - Rasool, Ghulam
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - Recent approaches in continual learning (CL) have focused on extracting various types of features from multi-task datasets to prevent catastrophic forgetting, without formally evaluating the quality, robustness, and usefulness of these features. Recently, it has been shown that adversarial robustness can be understood by decomposing learned features into robust and non-robust types. Robust features have been used to build robust datasets and have been shown to increase adversarial robustness significantly. However, there has been no assessment of using such robust features in CL frameworks to enhance the robustness of CL models against adversarial attacks. Current CL algorithms use standard features, a mixture of robust and non-robust features, resulting in models that are vulnerable to both natural and adversarial noise. This paper presents an empirical study to demonstrate the importance of robust features in the context of class incremental learning (CIL). We adopted the publicly available CIFAR10 dataset for our CIL experiments. We used the CIFAR10-Corrupted dataset to evaluate the robustness of the standard, robust, and non-robust models against various types of corruption, including brightness, contrast, and Gaussian noise. To test these models against adversarially attacked inputs, we created a new dataset using the projected gradient descent (PGD) and fast gradient sign method (FGSM) algorithms. Our experiments demonstrate that the models trained on standard features (a mixture of robust and non-robust features) achieved higher accuracy than the models trained on either robust or non-robust features alone. However, the models trained on standard and non-robust features performed poorly under noisy and adversarial conditions compared to the models trained on robust features, and the models trained on non-robust features performed worst under both noise and adversarial attacks. Our study underlines the significance of using robust features in CIL.
UR - http://www.scopus.com/inward/record.url?scp=85140751326&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85140751326&partnerID=8YFLogxK
U2 - 10.1109/IJCNN55064.2022.9892970
DO - 10.1109/IJCNN55064.2022.9892970
M3 - Conference contribution
AN - SCOPUS:85140751326
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2022 International Joint Conference on Neural Networks, IJCNN 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 International Joint Conference on Neural Networks, IJCNN 2022
Y2 - 18 July 2022 through 23 July 2022
ER -