TY - GEN
T1 - Learning concept drift in nonstationary environments using an ensemble of classifiers based approach
AU - Karnick, Matthew
AU - Ahiskali, Metin
AU - Muhlbaier, Michael D.
AU - Polikar, Robi
PY - 2008
Y1 - 2008
AB - We describe an ensemble-of-classifiers-based approach for incrementally learning from new data drawn from a distribution that changes in time, i.e., data obtained from a nonstationary environment. Specifically, we generate a new classifier from each additional dataset that becomes available from the changing environment. The classifiers are combined by modified weighted majority voting, where the weights are dynamically updated based on the classifiers' current and past performances, as well as their age. This mechanism allows the algorithm to track the changing environment by weighting the most recent and relevant classifiers more heavily. However, it also utilizes old classifiers by assigning them appropriate voting weights should a cyclical environment render them relevant again. The algorithm learns incrementally, i.e., it does not need access to previously used data. The algorithm is also independent of any specific classifier model and can be used with any classifier that fits the characteristics of the underlying problem. We describe the algorithm and compare its performance using several classifier models, on different environments, as a function of time for several values of the rate of change.
UR - http://www.scopus.com/inward/record.url?scp=56349087953&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=56349087953&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2008.4634290
DO - 10.1109/IJCNN.2008.4634290
M3 - Conference contribution
AN - SCOPUS:56349087953
SN - 9781424418213
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 3455
EP - 3462
BT - 2008 International Joint Conference on Neural Networks, IJCNN 2008
T2 - 2008 International Joint Conference on Neural Networks, IJCNN 2008
Y2 - 1 June 2008 through 8 June 2008
ER -