TY - JOUR
T1 - Learn++.NC: Combining ensemble of classifiers with dynamically weighted consult-and-vote for efficient incremental learning of new classes
AU - Muhlbaier, Michael D.
AU - Topalis, Apostolos
AU - Polikar, Robi
N1 - Funding Information:
Manuscript received May 11, 2007; revised November 16, 2007 and April 25, 2008; accepted June 12, 2008. First published December 22, 2008; current version published January 05, 2009. This work was supported by the U.S. National Science Foundation under Grant ECS 0239090.
PY - 2009
Y1 - 2009
N2 - We have previously introduced an incremental learning algorithm, Learn++, which learns novel information from consecutive data sets by generating an ensemble of classifiers with each data set, and combining them by weighted majority voting. However, Learn++ suffers from an inherent "outvoting" problem when asked to learn a new class ω_new introduced by a subsequent data set, as earlier classifiers not trained on this class are guaranteed to misclassify ω_new instances. The collective votes of earlier classifiers, for an inevitably incorrect decision, then outweigh the votes of the new classifiers' correct decision on ω_new instances, until there are enough new classifiers to counteract the unfair outvoting. This forces Learn++ to generate an unnecessarily large number of classifiers. This paper describes Learn++.NC, specifically designed for efficient incremental learning of multiple new classes using significantly fewer classifiers. To do so, Learn++.NC introduces dynamically weighted consult-and-vote (DW-CAV), a novel voting mechanism for combining classifiers: individual classifiers consult with each other to determine which ones are most qualified to classify a given instance, and decide how much weight, if any, each classifier's decision should carry. Experiments on real-world problems indicate that the new algorithm performs remarkably well with substantially fewer classifiers, not only compared with its predecessor Learn++ but also with several other algorithms recently proposed for similar problems.
UR - http://www.scopus.com/inward/record.url?scp=58649083899&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=58649083899&partnerID=8YFLogxK
U2 - 10.1109/TNN.2008.2008326
DO - 10.1109/TNN.2008.2008326
M3 - Article
C2 - 19109088
AN - SCOPUS:58649083899
SN - 1045-9227
VL - 20
SP - 152
EP - 168
JO - IEEE Transactions on Neural Networks
JF - IEEE Transactions on Neural Networks
IS - 1
ER -