TY - JOUR
T1 - Revisiting the fragility of influence functions
AU - Epifano, Jacob R.
AU - Ramachandran, Ravi P.
AU - Masino, Aaron J.
AU - Rasool, Ghulam
N1 - Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2023/5
Y1 - 2023/5
N2 - In the last few years, many works have tried to explain the predictions of deep learning models. Few methods, however, have been proposed to verify the accuracy or faithfulness of these explanations. Recently, influence functions, a method that approximates the effect of leave-one-out training on the loss function, have been shown to be fragile. The reason for their fragility remains unclear. Although previous work suggests the use of regularization to increase robustness, this does not hold in all cases. In this work, we revisit the experiments performed in prior work to understand the underlying mechanisms of influence function fragility. First, we verify influence functions using procedures from the literature under conditions where the convexity assumptions of influence functions are met. Then, we relax these assumptions and study the effects of non-convexity by using deeper models and more complex datasets. Here, we analyze the key metrics and procedures that are used to validate influence functions. Our results indicate that the validation procedures may cause the observed fragility.
AB - In the last few years, many works have tried to explain the predictions of deep learning models. Few methods, however, have been proposed to verify the accuracy or faithfulness of these explanations. Recently, influence functions, a method that approximates the effect of leave-one-out training on the loss function, have been shown to be fragile. The reason for their fragility remains unclear. Although previous work suggests the use of regularization to increase robustness, this does not hold in all cases. In this work, we revisit the experiments performed in prior work to understand the underlying mechanisms of influence function fragility. First, we verify influence functions using procedures from the literature under conditions where the convexity assumptions of influence functions are met. Then, we relax these assumptions and study the effects of non-convexity by using deeper models and more complex datasets. Here, we analyze the key metrics and procedures that are used to validate influence functions. Our results indicate that the validation procedures may cause the observed fragility.
UR - http://www.scopus.com/inward/record.url?scp=85151323669&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85151323669&partnerID=8YFLogxK
U2 - 10.1016/j.neunet.2023.03.029
DO - 10.1016/j.neunet.2023.03.029
M3 - Article
C2 - 37011460
AN - SCOPUS:85151323669
SN - 0893-6080
VL - 162
SP - 581
EP - 588
JO - Neural Networks
JF - Neural Networks
ER -