TY - CONF
T1 - Margin-based discriminant dimensionality reduction for visual recognition
AU - Cevikalp, Hakan
AU - Jurie, Frédéric
AU - Triggs, Bill
AU - Polikar, Robi
PY - 2008
Y1 - 2008
N2 - Nearest neighbour classifiers and related kernel methods often perform poorly in high dimensional problems because it is infeasible to include enough training samples to cover the class regions densely. In such cases, test samples often fall into gaps between training samples where the nearest neighbours are too distant to be good indicators of class membership. One solution is to project the data onto a discriminative lower dimensional subspace. We propose a gap-resistant nonparametric method for finding such subspaces: first the gaps are filled by building a convex model of the region spanned by each class - we test the affine and convex hulls and the bounding disk of the class training samples - then a set of highly discriminative directions is found by building and decomposing a scatter matrix of weighted displacement vectors from training examples to nearby rival class regions. The weights are chosen to focus attention on narrow margin cases while still allowing more diversity and hence more discriminability than the 1D linear Support Vector Machine (SVM) projection. Experimental results on several face and object recognition datasets show that the method finds effective projections, allowing simple classifiers such as nearest neighbours to work well in the low dimensional reduced space.
UR - http://www.scopus.com/inward/record.url?scp=51949093067&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=51949093067&partnerID=8YFLogxK
DO - 10.1109/CVPR.2008.4587591
M3 - Conference contribution
AN - SCOPUS:51949093067
SN - 9781424422432
T3 - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
BT - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
T2 - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
Y2 - 23 June 2008 through 28 June 2008
ER -