Data Poisoning Attacks against MRMR

Heng Liu, Gregory Ditzler

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Many machine learning models are designed without considering that an adversary can alter the data at training or testing time. Over the past decade, the vulnerability of machine learning models has become a growing concern, and more secure algorithms are needed. Unfortunately, the security of feature selection (FS) remains an under-explored area. Only a few works address data poisoning algorithms that target embedded FS, and data poisoning techniques aimed at information-theoretic FS do not yet exist. In this contribution, a novel data poisoning algorithm is proposed that induces failures in minimum Redundancy Maximum Relevance (mRMR). We demonstrate that mRMR can easily be poisoned into selecting features that would not otherwise have been selected.
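For context, mRMR greedily ranks features by trading off relevance to the label against redundancy with features already chosen, typically via mutual information: at each step it picks the feature maximizing I(f; y) minus the average I(f; s) over the already-selected features s. The Python sketch below illustrates this standard greedy criterion; the function name mrmr_select, the difference ("MID") form of the criterion, and scikit-learn's k-NN mutual-information estimators are illustrative assumptions, and the toy poisoning demo at the end is a generic illustration of the threat model, not the attack proposed in the paper.

# Minimal greedy mRMR sketch. Assumptions (not from the paper): the
# function name `mrmr_select`, the difference ("MID") criterion
# I(f; y) - mean redundancy, and scikit-learn's k-NN mutual-information
# estimators as stand-ins for whatever estimator the authors used.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression


def mrmr_select(X, y, k, random_state=0):
    """Greedily pick k features, each maximizing relevance to the label
    minus the average redundancy with the features already selected."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=random_state)  # I(f_i; y)
    selected, remaining = [], list(range(n_features))
    while len(selected) < k and remaining:
        scores = {}
        for j in remaining:
            if selected:
                # Average pairwise MI between candidate j and selected features.
                redundancy = np.mean([
                    mutual_info_regression(
                        X[:, [j]], X[:, s], random_state=random_state
                    )[0]
                    for s in selected
                ])
            else:
                redundancy = 0.0
            scores[j] = relevance[j] - redundancy
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected


# Toy illustration of the threat model (generic, NOT the paper's attack):
# feature 0 is truly predictive; appending a handful of crafted rows that
# tie feature 1 to the label can shift the MI estimates and flip the ranking.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 3))
X[:, 0] += 1.5 * y                      # clean, relevant feature
X_poison = rng.normal(size=(20, 3))
y_poison = rng.integers(0, 2, 20)
X_poison[:, 1] = 4.0 * y_poison         # crafted points boost feature 1
Xp = np.vstack([X, X_poison])
yp = np.concatenate([y, y_poison])
print("clean selection:   ", mrmr_select(X, y, k=2))
print("poisoned selection:", mrmr_select(Xp, yp, k=2))

Because the k-NN mutual-information estimates depend on every training point, even a small number of crafted rows can perturb the relevance and redundancy terms and, with them, the selected subset.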

Original language: English (US)
Title of host publication: 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2517-2521
Number of pages: 5
ISBN (Electronic): 9781479981311
DOIs
State: Published - May 2019
Externally published: Yes
Event: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Brighton, United Kingdom
Duration: May 12, 2019 – May 17, 2019

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2019-May
ISSN (Print): 1520-6149

Conference

Conference: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019
Country/Territory: United Kingdom
City: Brighton
Period: 5/12/19 – 5/17/19

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
