TY - GEN
T1 - An ensemble approach for incremental learning in nonstationary environments
AU - Muhlbaier, Michael D.
AU - Polikar, Robi
N1 - Copyright:
Copyright 2020 Elsevier B.V., All rights reserved.
PY - 2007
Y1 - 2007
N2 - We describe an ensemble-of-classifiers-based algorithm for incremental learning in nonstationary environments. In this formulation, we assume that the learner is presented with a series of training datasets, each of which is drawn from a different snapshot of a distribution that is drifting at an unknown rate. Furthermore, we assume that the algorithm must learn the new environment in an incremental manner, that is, without having access to previously available data. Instead of using a time window over incoming instances or age-based forgetting, as most ensemble-based nonstationary learning algorithms do, a strategic weighting mechanism is employed that tracks the classifiers' performances over drifting environments to determine appropriate voting weights. Specifically, the proposed approach generates a single classifier for each dataset that becomes available, and then combines them through dynamically modified weighted majority voting, where the voting weights themselves are computed as weighted averages of the classifiers' individual performances over all environments. We describe the implementation details of this approach, as well as its initial results on simulated nonstationary environments.
AB - We describe an ensemble-of-classifiers-based algorithm for incremental learning in nonstationary environments. In this formulation, we assume that the learner is presented with a series of training datasets, each of which is drawn from a different snapshot of a distribution that is drifting at an unknown rate. Furthermore, we assume that the algorithm must learn the new environment in an incremental manner, that is, without having access to previously available data. Instead of using a time window over incoming instances or age-based forgetting, as most ensemble-based nonstationary learning algorithms do, a strategic weighting mechanism is employed that tracks the classifiers' performances over drifting environments to determine appropriate voting weights. Specifically, the proposed approach generates a single classifier for each dataset that becomes available, and then combines them through dynamically modified weighted majority voting, where the voting weights themselves are computed as weighted averages of the classifiers' individual performances over all environments. We describe the implementation details of this approach, as well as its initial results on simulated nonstationary environments.
UR - http://www.scopus.com/inward/record.url?scp=34548217482&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34548217482&partnerID=8YFLogxK
U2 - 10.1007/978-3-540-72523-7_49
DO - 10.1007/978-3-540-72523-7_49
M3 - Conference contribution
AN - SCOPUS:34548217482
SN - 9783540724810
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 490
EP - 500
BT - Multiple Classifier Systems - 7th International Workshop, MCS 2007, Proceedings
PB - Springer Verlag
T2 - 7th International Workshop on Multiple Classifier Systems, MCS 2007
Y2 - 23 May 2007 through 25 May 2007
ER -