Adversarial machine learning has recently risen to prominence due to increased concerns over the vulnerability of machine learning algorithms to malicious attacks. While the impact of malicious poisoning attacks on some popular algorithms, such as deep neural networks, has been well researched, the vulnerability of other approaches has not yet been properly established. In this work, we explore the vulnerability of unconstrained least squares importance fitting (uLSIF), an algorithm used to compute the importance ratio in covariate shift domain adaptation problems. The uLSIF algorithm is an accurate and efficient technique for computing the importance ratio; however, we show that the approach is susceptible to a poisoning attack, in which an intelligent adversary with full or partial access to the training data can inject well-crafted malicious samples into the training data, resulting in an incorrect estimation of the importance values. Using strategically designed synthetic as well as real-world datasets, we demonstrate that importance ratio estimation through the uLSIF algorithm can be easily compromised by the insertion of even a modest number of attack points into the training data. We further show that the resulting incorrect importance estimates can cripple the performance of subsequent covariate shift adaptation.
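For context, a minimal sketch of the standard uLSIF estimator from the literature (Kanamori et al.); the symbols $\varphi_l$, $b$, and $\lambda$ below are generic notation introduced here for illustration, not this paper's. Given training samples $\{x_i^{\mathrm{tr}}\}_{i=1}^{n_{\mathrm{tr}}} \sim p_{\mathrm{tr}}(x)$ and test samples $\{x_j^{\mathrm{te}}\}_{j=1}^{n_{\mathrm{te}}} \sim p_{\mathrm{te}}(x)$, uLSIF models the importance ratio $w(x) = p_{\mathrm{te}}(x)/p_{\mathrm{tr}}(x)$ as a linear combination of $b$ basis functions and fits the coefficients by regularized least squares, which admits a closed-form solution:
\[
\widehat{w}(x) = \sum_{l=1}^{b} \alpha_l \, \varphi_l(x), \qquad
\widehat{\alpha} = \arg\min_{\alpha} \left[ \tfrac{1}{2}\,\alpha^{\top} \widehat{H}\,\alpha - \widehat{h}^{\top}\alpha + \tfrac{\lambda}{2}\,\alpha^{\top}\alpha \right] = \bigl(\widehat{H} + \lambda I_b\bigr)^{-1}\widehat{h},
\]
\[
\widehat{H}_{l l'} = \frac{1}{n_{\mathrm{tr}}}\sum_{i=1}^{n_{\mathrm{tr}}} \varphi_l(x_i^{\mathrm{tr}})\,\varphi_{l'}(x_i^{\mathrm{tr}}), \qquad
\widehat{h}_{l} = \frac{1}{n_{\mathrm{te}}}\sum_{j=1}^{n_{\mathrm{te}}} \varphi_l(x_j^{\mathrm{te}}).
\]
Because $\widehat{H}$ is an empirical average over the training samples, points injected into the training data directly perturb $\widehat{H}$ and hence the closed-form coefficients $\widehat{\alpha}$, which is the general avenue the poisoning attack described above exploits.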