TY - GEN
T1 - Self-Assessment and Robust Anomaly Detection with Bayesian Deep Learning
AU - Carannante, Giuseppina
AU - Dera, Dimah
AU - Aminul, Orune
AU - Bouaynaya, Nidhal C.
AU - Rasool, Ghulam
N1 - Funding Information:
This work was supported by the National Science Foundation awards NSF CRII-2153413, NSF ECCS-1903466, and NSF OAC-2008690. We are also grateful for UK EPSRC support through the EP/T013265/1 project NSF-EPSRC: ShiRAS. Towards Safe and Reliable Autonomy in Sensor Driven Systems, and for NJ Health Foundation support through Award number PC 78-21.
Publisher Copyright:
© 2022 International Society of Information Fusion.
PY - 2022
Y1 - 2022
N2 - Deep Learning (DL) models have achieved or even surpassed human-level accuracy in several areas, including computer vision and pattern recognition. The state-of-the-art performance of DL models has raised interest in using them in real-world applications, such as disease diagnosis and clinical decision support systems. However, the lack of trustworthiness and reliability of these DL models remains a challenge. Detecting incorrect decisions and flagging suspicious input samples is essential for the reliability of machine learning models. Uncertainty estimation in the output decision is a key component in establishing the trustworthiness and reliability of these models. In this work, we use Bayesian techniques to estimate the uncertainty in the model's output and use this uncertainty to detect distributional shifts linked to both input perturbations and label shifts. We use the learned uncertainty information (i.e., the variance of the predictive distribution) in two different ways to detect anomalous input samples: 1) a static threshold based on the average uncertainty of the model evaluated on clean test data, and 2) a statistical threshold based on a significant increase in the average uncertainty of the model evaluated on corrupted (anomalous) samples. Our extensive experiments demonstrate that both approaches can detect anomalous samples. We observe that the proposed thresholding techniques can distinguish misclassified examples in the presence of noise, adversarial attacks, anomalies, or distributional shifts. For example, on corrupted versions of the MNIST and CIFAR-10 datasets, the rate of detecting misclassified samples is almost twice that of Monte-Carlo-based approaches.
AB - Deep Learning (DL) models have achieved or even surpassed human-level accuracy in several areas, including computer vision and pattern recognition. The state-of-the-art performance of DL models has raised interest in using them in real-world applications, such as disease diagnosis and clinical decision support systems. However, the lack of trustworthiness and reliability of these DL models remains a challenge. Detecting incorrect decisions and flagging suspicious input samples is essential for the reliability of machine learning models. Uncertainty estimation in the output decision is a key component in establishing the trustworthiness and reliability of these models. In this work, we use Bayesian techniques to estimate the uncertainty in the model's output and use this uncertainty to detect distributional shifts linked to both input perturbations and label shifts. We use the learned uncertainty information (i.e., the variance of the predictive distribution) in two different ways to detect anomalous input samples: 1) a static threshold based on the average uncertainty of the model evaluated on clean test data, and 2) a statistical threshold based on a significant increase in the average uncertainty of the model evaluated on corrupted (anomalous) samples. Our extensive experiments demonstrate that both approaches can detect anomalous samples. We observe that the proposed thresholding techniques can distinguish misclassified examples in the presence of noise, adversarial attacks, anomalies, or distributional shifts. For example, on corrupted versions of the MNIST and CIFAR-10 datasets, the rate of detecting misclassified samples is almost twice that of Monte-Carlo-based approaches.
UR - http://www.scopus.com/inward/record.url?scp=85136603789&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85136603789&partnerID=8YFLogxK
U2 - 10.23919/FUSION49751.2022.9841358
DO - 10.23919/FUSION49751.2022.9841358
M3 - Conference contribution
AN - SCOPUS:85136603789
T3 - 2022 25th International Conference on Information Fusion, FUSION 2022
BT - 2022 25th International Conference on Information Fusion, FUSION 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 25th International Conference on Information Fusion, FUSION 2022
Y2 - 4 July 2022 through 7 July 2022
ER -
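
A minimal, hypothetical sketch of the two detection rules described in the abstract: a static threshold set to the average predictive variance of correctly classified clean test samples, and a statistical decision based on a one-sided Wilcoxon signed-rank test for a significant increase in variance. The function names, the synthetic data, and the use of scipy.stats.wilcoxon are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.stats import wilcoxon

def static_threshold(clean_variances, correct_mask):
    # Average predictive variance of correctly classified clean test samples.
    return clean_variances[correct_mask].mean()

def flag_by_static_threshold(sample_variances, threshold):
    # A sample is flagged as anomalous if its predictive variance exceeds the threshold.
    return sample_variances > threshold

def batch_is_anomalous(clean_variances, incoming_variances, alpha=0.05):
    # One-sided Wilcoxon signed-rank test: are the incoming variances
    # significantly larger than the paired clean-reference variances?
    _, p_value = wilcoxon(incoming_variances, clean_variances, alternative="greater")
    return p_value < alpha

# Purely synthetic illustration.
rng = np.random.default_rng(0)
clean_var = rng.gamma(2.0, 0.01, size=200)              # variances on clean test data
noisy_var = clean_var + rng.gamma(2.0, 0.05, size=200)  # inflated variances under corruption
correct = rng.random(200) > 0.05                        # mask of correctly classified clean samples

tau = static_threshold(clean_var, correct)
print("static threshold:", tau)
print("samples flagged:", int(flag_by_static_threshold(noisy_var, tau).sum()))
print("batch flagged by Wilcoxon test:", batch_is_anomalous(clean_var, noisy_var))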