
k-Nearest Neighbor Classifiers

Because a typical BPNN implementation has several parameters that must be chosen, a k-nearest neighbor (kNN) classifier (requiring the selection of a single parameter) was implemented to complement the BPNN results. One advantage of the kNN classifier is its intuitive operation. First, the distances between a single test sample and each of the training samples are calculated. The training samples closest to that test sample are defined as its ``nearest neighbors''. The test sample is then assigned to the class to which a plurality of its k nearest neighbors belong, where k is typically an integer less than 10.
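The procedure just described can be summarized in a few lines of code. The following is a minimal sketch (the function and variable names are illustrative, not taken from this work) assuming Euclidean distance in the feature space and training data stored as a NumPy array:

    import numpy as np
    from collections import Counter

    def knn_classify(test_sample, train_samples, train_labels, k=5):
        # Distance from the test sample to every training sample.
        distances = np.linalg.norm(train_samples - test_sample, axis=1)
        # Indices of the k closest training samples (the "nearest neighbors").
        nearest = np.argsort(distances)[:k]
        # Assign the class held by a plurality of the neighbors.
        votes = Counter(train_labels[i] for i in nearest)
        return votes.most_common(1)[0][0]
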

In addition to requiring the optimization of only a single parameter (k), the kNN classifier has a well-described theoretical basis [16]. Its other major attraction is that its asymptotic error rate (as the number of training samples $\rightarrow\infty$) has been shown to be at most twice that of a Bayes classifier for the same data [17]. While the appeal of the kNN classifier lies in its simplicity and intuitive nature (i.e., new samples are assumed to belong to the same class as the training samples closest to them in the feature space), it can generate only piecewise-linear decision boundaries and is therefore not able to perform as well as the BPNN in practice.

