TY - GEN
T1 - Vulnerability of Covariate Shift Adaptation against Malicious Poisoning Attacks
AU - Umer, Muhammad
AU - Frederickson, Christopher
AU - Polikar, Robi
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - Adversarial machine learning has recently risen to prominence due to increased concerns over the vulnerability of machine learning algorithms to malicious attacks. While the impact of malicious poisoning attacks on some popular algorithms, such as deep neural networks, has been well researched, the vulnerability of other approaches has not yet been properly established. In this effort, we explore the vulnerability of unconstrained least-squares importance fitting (uLSIF), an algorithm used to compute the importance ratio in covariate shift domain adaptation problems. The uLSIF algorithm is an accurate and efficient technique for computing the importance ratio; however, we show that it is susceptible to a poisoning attack, in which an intelligent adversary with full or partial access to the training data can inject well-crafted malicious samples into the training data, resulting in an incorrect estimation of the importance values. Through strategically designed synthetic and real-world datasets, we demonstrate that importance ratio estimation with the uLSIF algorithm can be easily compromised by the insertion of even a modest number of attack points into the training data. We also show that incorrect estimation of the importance values can then cripple the performance of subsequent covariate shift adaptation.
AB - Adversarial machine learning has recently risen to prominence due to increased concerns over the vulnerability of machine learning algorithms to malicious attacks. While the impact of malicious poisoning attacks on some popular algorithms, such as deep neural networks, has been well researched, the vulnerability of other approaches has not yet been properly established. In this effort, we explore the vulnerability of unconstrained least-squares importance fitting (uLSIF), an algorithm used to compute the importance ratio in covariate shift domain adaptation problems. The uLSIF algorithm is an accurate and efficient technique for computing the importance ratio; however, we show that it is susceptible to a poisoning attack, in which an intelligent adversary with full or partial access to the training data can inject well-crafted malicious samples into the training data, resulting in an incorrect estimation of the importance values. Through strategically designed synthetic and real-world datasets, we demonstrate that importance ratio estimation with the uLSIF algorithm can be easily compromised by the insertion of even a modest number of attack points into the training data. We also show that incorrect estimation of the importance values can then cripple the performance of subsequent covariate shift adaptation.
UR - http://www.scopus.com/inward/record.url?scp=85073252409&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85073252409&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2019.8851748
DO - 10.1109/IJCNN.2019.8851748
M3 - Conference contribution
AN - SCOPUS:85073252409
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2019 International Joint Conference on Neural Networks, IJCNN 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 International Joint Conference on Neural Networks, IJCNN 2019
Y2 - 14 July 2019 through 19 July 2019
ER -