Nearest Neighbor Classifier

As our first approach, we will develop what we call a Nearest Neighbor Classifier.

This classifier has nothing to do with Convolutional Neural Networks, and it is very rarely used in practice, but it introduces the basic approach to an image classification problem.

The steps for implementing a Nearest Neighbor Classifier are:

  • Training - Memorize all of the training data
  • Prediction - Predict the label of a new image

To predict, find each test image's nearest neighbor among the memorized training images and return that neighbor's label.
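The two steps above can be sketched as a small class. This is a minimal illustration, not a full implementation: the `distance` argument is a hypothetical callable standing in for whichever metric is chosen (such as the L1 distance discussed in the next section).

```python
import numpy as np

class NearestNeighbor:
    def train(self, X, y):
        # "Training" is just memorization: X is N x D (one flattened
        # image per row), y is a length-N vector of labels.
        self.Xtr = X
        self.ytr = y

    def predict(self, X, distance):
        # For each test row, return the label of the closest training row
        # under the supplied distance function.
        preds = np.empty(X.shape[0], dtype=self.ytr.dtype)
        for i in range(X.shape[0]):
            dists = distance(self.Xtr, X[i])
            preds[i] = self.ytr[np.argmin(dists)]
        return preds
```

Note that all the work happens at prediction time: `train` only stores references to the data, while `predict` compares each test example against every stored training example.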

Distance Metric - How do you compare images?

L1 distance (Manhattan distance) -

    • Compare the two images pixel by pixel: sum the absolute differences between corresponding pixels of the training image and the test image, d1(I1, I2) = Σp |I1p − I2p|.

Q/A:

  • Q: With N examples, how fast are training and prediction?
    • A: Training is O(1); prediction is O(N).
    • This is backwards: we want classifiers that are fast at prediction; slow training is acceptable.
  • Q: What does this look like?
    • A: Because each prediction depends on only the single nearest neighbor, the decision boundaries are noisy. Indicators of this include the isolated yellow region in the middle of the plot and single points “owning” a finger-shaped region.
    • This motivates a slight generalization of this algorithm, called K-Nearest Neighbors
