Robust Explainability: A tutorial on gradient-based attribution methods for deep neural networks

Ian E. Nielsen, Dimah Dera, Ghulam Rasool, Ravi P. Ramachandran, Nidhal Carla Bouaynaya

Research output: Contribution to journal · Article · peer-review

72 Scopus citations

Abstract

The rise of deep neural networks (DNNs) has led to increased interest in explaining their predictions. While many methods for this exist, there is currently no consensus on how to evaluate them. Robustness, on the other hand, is a popular topic in deep learning (DL) research; however, until very recently it has hardly been discussed in the explainability literature.
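The gradient-based attribution idea the tutorial surveys can be illustrated with the simplest such method, vanilla gradients (saliency): attribute each input feature by the magnitude of the output's partial derivative with respect to it. The following is a minimal sketch using a toy two-layer ReLU network with hand-coded backpropagation; the network, its weights, and dimensions are illustrative assumptions, not taken from the article.

```python
import numpy as np

def forward(x, W1, W2):
    """Toy network f(x) = W2 @ relu(W1 @ x); returns output and hidden activations."""
    h = np.maximum(W1 @ x, 0.0)
    return W2 @ h, h

def saliency(x, W1, W2):
    """Vanilla-gradient attribution: |df/dx|, via the chain rule."""
    _, h = forward(x, W1, W2)
    relu_mask = (h > 0).astype(float)   # derivative of ReLU at the hidden layer
    grad = (W2 * relu_mask) @ W1        # chain rule: df/dh * dh/dx
    return np.abs(grad)

# Illustrative random weights and input (assumed, for demonstration only)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # 3 input features, 4 hidden units
W2 = rng.normal(size=(1, 4))   # scalar output
x = rng.normal(size=3)

attr = saliency(x, W1, W2)     # one non-negative score per input feature
print(attr.shape)
```

In practice the same computation is obtained with automatic differentiation (e.g., `torch.autograd.grad` on the model output with respect to the input), and the more elaborate methods the tutorial covers, such as Integrated Gradients or SmoothGrad, aggregate these gradients over paths or noisy samples rather than using a single backward pass.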

Original language: English (US)
Pages (from-to): 73-84
Number of pages: 12
Journal: IEEE Signal Processing Magazine
Volume: 39
Issue number: 4
State: Published - Jul 1 2022

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics

