TY - GEN
T1 - Self-Assessment and Robust Anomaly Detection with Bayesian Deep Learning
AU - Carannante, Giuseppina
AU - Dera, Dimah
AU - Aminul, Orune
AU - Bouaynaya, Nidhal C.
AU - Rasool, Ghulam
N1 - Publisher Copyright:
© 2022 International Society of Information Fusion.
PY - 2022
Y1 - 2022
N2 - Deep Learning (DL) models have achieved or even surpassed human-level accuracy in several areas, including computer vision and pattern recognition. The state-of-the-art performance of DL models has raised interest in using them in real-world applications, such as disease diagnosis and clinical decision support systems. However, the challenge remains the lack of trustworthiness and reliability of these DL models. The detection of incorrect decisions or flagging of suspicious input samples is essential for the reliability of machine learning models. Uncertainty estimation in the output decision is a key component in establishing the trustworthiness and reliability of these models. In this work, we use Bayesian techniques to estimate the uncertainty in the model's output and use this uncertainty to detect distributional shifts linked to both input perturbations and label shifts. We use the learned uncertainty information (i.e., the variance of the predictive distribution) in two different ways to detect anomalous input samples: 1) a static threshold based on the average uncertainty of a model evaluated on the clean test data, and 2) a statistical threshold based on the significant increase in the average uncertainty of the model evaluated on corrupted (anomalous) samples. Our extensive experiments demonstrate that both approaches can detect anomalous samples. We observe that the proposed thresholding techniques can distinguish misclassified examples in the presence of noise, adversarial attacks, anomalies, or distributional shifts. For example, on corrupted versions of the MNIST and CIFAR-10 datasets, the rate of detecting misclassified samples is almost twice that of Monte-Carlo-based approaches.
AB - Deep Learning (DL) models have achieved or even surpassed human-level accuracy in several areas, including computer vision and pattern recognition. The state-of-the-art performance of DL models has raised interest in using them in real-world applications, such as disease diagnosis and clinical decision support systems. However, the challenge remains the lack of trustworthiness and reliability of these DL models. The detection of incorrect decisions or flagging of suspicious input samples is essential for the reliability of machine learning models. Uncertainty estimation in the output decision is a key component in establishing the trustworthiness and reliability of these models. In this work, we use Bayesian techniques to estimate the uncertainty in the model's output and use this uncertainty to detect distributional shifts linked to both input perturbations and label shifts. We use the learned uncertainty information (i.e., the variance of the predictive distribution) in two different ways to detect anomalous input samples: 1) a static threshold based on the average uncertainty of a model evaluated on the clean test data, and 2) a statistical threshold based on the significant increase in the average uncertainty of the model evaluated on corrupted (anomalous) samples. Our extensive experiments demonstrate that both approaches can detect anomalous samples. We observe that the proposed thresholding techniques can distinguish misclassified examples in the presence of noise, adversarial attacks, anomalies, or distributional shifts. For example, on corrupted versions of the MNIST and CIFAR-10 datasets, the rate of detecting misclassified samples is almost twice that of Monte-Carlo-based approaches.
UR - https://www.scopus.com/pages/publications/85136603789
UR - https://www.scopus.com/pages/publications/85136603789#tab=citedBy
U2 - 10.23919/FUSION49751.2022.9841358
DO - 10.23919/FUSION49751.2022.9841358
M3 - Conference contribution
AN - SCOPUS:85136603789
T3 - 2022 25th International Conference on Information Fusion, FUSION 2022
BT - 2022 25th International Conference on Information Fusion, FUSION 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 25th International Conference on Information Fusion, FUSION 2022
Y2 - 4 July 2022 through 7 July 2022
ER -