Adversarially Robust Continual Learning

Hikmat Khan, Nidhal Carla Bouaynaya, Ghulam Rasool

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recent approaches in continual learning (CL) have focused on extracting various types of features from multi-task datasets to prevent catastrophic forgetting, without formally evaluating the quality, robustness, or usefulness of these features. Recently, it has been shown that adversarial robustness can be understood by decomposing learned features into robust and non-robust types. Robust features have been used to build robust datasets and have been shown to increase adversarial robustness significantly. However, there has been no assessment of using such robust features in CL frameworks to enhance the robustness of CL models against adversarial attacks. Current CL algorithms use standard features, a mixture of robust and non-robust features, and produce models vulnerable to both natural and adversarial noise. This paper presents an empirical study demonstrating the importance of robust features in the context of class incremental learning (CIL). We used the publicly available CIFAR10 dataset for our CIL experiments and the CIFAR10-Corrupted dataset to evaluate the robustness of the standard, robust, and non-robust models against various types of noise, including brightness, contrast, Gaussian noise, and more. To test these models against adversarially perturbed input, we created a new dataset using the projected gradient descent (PGD) and fast gradient sign method (FGSM) algorithms. Our experiments demonstrate that models trained on standard features (a mixture of both robust and non-robust features) achieved higher accuracy than models trained on either robust or non-robust features alone. However, the models trained using standard and non-robust features performed poorly in noisy and adversarial conditions compared to the model trained using robust features; the model trained using non-robust features performed the worst under noise and adversarial attacks. Our study underlines the significance of using robust features in CIL.
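The abstract names FGSM and PGD as the algorithms used to build the adversarial evaluation set. As a hedged illustration only (not the authors' code), the following NumPy sketch shows FGSM on a toy logistic-regression model, where the input gradient is analytic; the model, data point, and epsilon are assumptions chosen for illustration.

```python
import numpy as np

def fgsm_linear(x, y, w, b, eps):
    """FGSM step on a logistic-regression model: x_adv = x + eps * sign(dL/dx).

    Loss L = -log sigmoid(y * (w.x + b)) with label y in {-1, +1};
    the input gradient is dL/dx = -y * sigmoid(-y * (w.x + b)) * w.
    """
    margin = y * (np.dot(w, x) + b)
    grad_x = -y * (1.0 / (1.0 + np.exp(margin))) * w  # analytic dL/dx
    return x + eps * np.sign(grad_x)

# Toy example (hypothetical values): a correctly classified point is
# pushed toward the decision boundary by one FGSM step.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])   # w.x + b = 1.5, classified as +1
y = 1
x_adv = fgsm_linear(x, y, w, b, eps=0.3)
print(x_adv)                # -> [0.2 -0.2]; margin shrinks from 1.5 to 0.6
```

PGD is the iterated version of this step: repeat the signed-gradient update several times with a smaller step size, projecting back into the eps-ball around the original input after each iteration.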

Original language: English (US)
Title of host publication: 2022 International Joint Conference on Neural Networks, IJCNN 2022 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728186719
State: Published - 2022
Event: 2022 International Joint Conference on Neural Networks, IJCNN 2022 - Padua, Italy
Duration: Jul 18, 2022 – Jul 23, 2022

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2022-July

Conference

Conference: 2022 International Joint Conference on Neural Networks, IJCNN 2022
Country/Territory: Italy
City: Padua
Period: 7/18/22 – 7/23/22

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
