Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models

Muhammad Umer, Robi Polikar

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Continual (or 'incremental') learning approaches are employed when additional knowledge or tasks need to be learned from subsequent batches or from streaming data. However, these approaches are typically adversary agnostic, i.e., they do not consider the possibility of a malicious attack. In our prior work, we explored the vulnerabilities of Elastic Weight Consolidation (EWC) to perceptible misinformation. We now explore the vulnerabilities of other regularization-based as well as generative replay-based continual learning algorithms, and also extend the attack to imperceptible misinformation. We show that an intelligent adversary can take advantage of a continual learning algorithm's ability to retain existing knowledge over time, and force it to learn and retain deliberately introduced misinformation. To demonstrate this vulnerability, we inject backdoor attack samples into the training data. These attack samples constitute the misinformation, allowing the attacker to seize control of the model at test time. We evaluate the extent of this vulnerability on both rotated and split benchmark variants of the MNIST dataset under two important settings: domain and class incremental learning. We show that the adversary can create a 'false memory' about any task by inserting carefully designed backdoor samples into the test instances of that task, thereby controlling the amount of forgetting of any task of its choosing. Perhaps most importantly, we show this vulnerability to be very acute and damaging: the model's memory can be easily compromised by adding backdoor samples to as little as 1% of the training data, even when the misinformation is imperceptible to the human eye.
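    The backdoor poisoning described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the function name, the bottom-right pixel-patch trigger, and the 1% poisoning fraction are illustrative assumptions, and the `pattern_value` parameter can be set to a small epsilon to mimic the imperceptible variant.

    ```python
    import numpy as np

    def poison_with_backdoor(images, labels, target_label, poison_frac=0.01,
                             pattern_value=1.0, pattern_size=3, seed=None):
        """Inject a pixel-pattern backdoor into a fraction of the training data.

        A hypothetical sketch: a pattern_size x pattern_size patch in the
        bottom-right corner of each poisoned image is set to pattern_value,
        and the poisoned samples are relabeled to target_label so the model
        learns to associate the trigger pattern with that label.
        """
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        # Poison a small fraction of the data (at least one sample).
        n_poison = max(1, int(poison_frac * len(images)))
        idx = rng.choice(len(images), size=n_poison, replace=False)
        # Stamp the trigger patch and flip the label of the selected samples.
        images[idx, -pattern_size:, -pattern_size:] = pattern_value
        labels[idx] = target_label
        return images, labels, idx
    ```

    At test time, the attacker would stamp the same patch onto clean inputs to steer the model toward `target_label`; using a small `pattern_value` (e.g. 0.05 on [0, 1]-scaled MNIST pixels) keeps the trigger imperceptible.
    
    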

    Original language: English (US)
    Title of host publication: IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    ISBN (Electronic): 9780738133669
    State: Published - Jul 18 2021
    Event: 2021 International Joint Conference on Neural Networks, IJCNN 2021 - Virtual, Shenzhen, China
    Duration: Jul 18 2021 - Jul 22 2021

    Publication series

    Name: Proceedings of the International Joint Conference on Neural Networks
    Volume: 2021-July

    Conference

    Conference: 2021 International Joint Conference on Neural Networks, IJCNN 2021
    Country/Territory: China
    City: Virtual, Shenzhen
    Period: 7/18/21 - 7/22/21

    All Science Journal Classification (ASJC) codes

    • Software
    • Artificial Intelligence
