Vulnerability of Covariate Shift Adaptation against Malicious Poisoning Attacks

Muhammad Umer, Christopher Frederickson, Robi Polikar

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


    Abstract

    Adversarial machine learning has recently risen to prominence due to increased concerns over the vulnerability of machine learning algorithms to malicious attacks. While the impact of malicious poisoning attacks on some popular algorithms, such as deep neural networks, has been well researched, the vulnerability of other approaches has not yet been properly established. In this effort, we explore the vulnerability of unconstrained least-squares importance fitting (uLSIF), an algorithm used for computing the importance ratio in covariate shift domain adaptation problems. The uLSIF algorithm is an accurate and efficient technique for computing the importance ratio; however, we show that the approach is susceptible to a poisoning attack, where an intelligent adversary - having full or partial access to the training data - can inject well-crafted malicious samples into the training data, resulting in an incorrect estimation of the importance values. Through strategically designed synthetic as well as real-world datasets, we demonstrate that importance ratio estimation through the uLSIF algorithm can be easily compromised by the insertion of even a modest number of attack points into the training data. We also show that incorrect estimation of the importance values can then cripple the performance of a subsequent covariate shift adaptation.
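    The uLSIF algorithm mentioned in the abstract models the importance ratio w(x) = p_test(x)/p_train(x) as a linear combination of kernel basis functions, fits the coefficients by a regularized least-squares problem with a closed-form solution, and clips negative coefficients afterward. The sketch below is an illustrative NumPy implementation under common default choices (Gaussian kernels centered at a subset of test samples); the hyperparameters `sigma`, `lam`, and `n_kernels`, and all variable names, are assumptions for illustration, not taken from the paper.

    ```python
    import numpy as np

    def ulsif(x_train, x_test, sigma=1.0, lam=0.1, n_kernels=100):
        """Estimate importance weights w(x) ~ p_test(x)/p_train(x) on training points.

        Minimal uLSIF sketch: Gaussian-kernel basis, closed-form ridge solution,
        then post-hoc clipping of negative coefficients.
        """
        rng = np.random.default_rng(0)
        # Kernel centers drawn from the test set (a common uLSIF choice).
        idx = rng.choice(len(x_test), size=min(n_kernels, len(x_test)), replace=False)
        centers = x_test[idx]

        def kernel(x):
            # Gaussian kernel matrix between samples x and the centers.
            sq_dist = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=2)
            return np.exp(-sq_dist / (2.0 * sigma ** 2))

        phi_tr = kernel(x_train)                      # (n_train, b)
        phi_te = kernel(x_test)                       # (n_test,  b)

        # H and h are empirical estimates of the quadratic and linear terms
        # of the least-squares objective; alpha solves (H + lam*I) alpha = h.
        H = phi_tr.T @ phi_tr / len(x_train)
        h = phi_te.mean(axis=0)
        alpha = np.linalg.solve(H + lam * np.eye(H.shape[0]), h)
        alpha = np.maximum(alpha, 0.0)                # clip negatives post hoc

        return phi_tr @ alpha                         # importance weights on x_train
    ```

    The closed-form solve is what makes uLSIF efficient, but it is also why the abstract's poisoning attack works: malicious points inserted into the training data directly perturb the empirical matrix H, and hence the fitted weights, with no constraint to resist them.
    
    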

    Original language: English (US)
    Title of host publication: 2019 International Joint Conference on Neural Networks, IJCNN 2019
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    ISBN (Electronic): 9781728119854
    DOIs
    State: Published - Jul 2019
    Event: 2019 International Joint Conference on Neural Networks, IJCNN 2019 - Budapest, Hungary
    Duration: Jul 14 2019 - Jul 19 2019

    Publication series

    Name: Proceedings of the International Joint Conference on Neural Networks
    Volume: 2019-July

    Conference

    Conference: 2019 International Joint Conference on Neural Networks, IJCNN 2019
    Country: Hungary
    City: Budapest
    Period: 7/14/19 - 7/19/19

    All Science Journal Classification (ASJC) codes

    • Software
    • Artificial Intelligence

