Lecture 5 - Convolutional Neural Networks

History, Convolution and pooling, ConvNets outside vision

[slides] [video]

ConvNet notes

Last time: we talked about neural networks and the running example of the linear score function. We turned this into a neural network by stacking linear score functions with non-linearities in between. We also saw how this can address the mode problem, where we are able to learn intermediate templates for the patterns we're looking for (for example, different types of cars: a red car vs. a yellow car) and combine these together to come up with the final score for each class.

Today we'll talk about CNNs. The idea is the same, but now we're going to learn convolutional layers that explicitly try to maintain the spatial structure of the input.
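To see what "maintaining spatial structure" means, here is a minimal sketch (not from the lecture; shapes and filter are illustrative) of sliding a small filter over a 2D input. Unlike a fully connected layer, which flattens the image into a vector, the output is still a 2D grid of responses:

```python
import numpy as np

def conv2d(x, w):
    """Naive 'valid' convolution of a 2D input x with a 2D filter w."""
    H, W = x.shape
    k, _ = w.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value looks at one local k x k patch of the input.
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

x = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 "image"
w = np.ones((3, 3)) / 9.0                     # 3x3 averaging filter
print(conv2d(x, w).shape)  # (3, 3): the output is still a spatial grid
```

Each output location depends only on a local patch, and the same filter weights are reused at every position.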

How are ConvNets different from normal Neural Networks?

ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These make the forward function more efficient to implement and vastly reduce the number of parameters in the network.
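A back-of-the-envelope sketch of the parameter savings (the layer sizes here are illustrative choices, not from the lecture): for a 32x32x3 image, compare a fully connected layer with 10 units to a convolutional layer with 10 filters of size 5x5x3.

```python
H, W, C = 32, 32, 3  # a small RGB image, e.g. CIFAR-10 sized

# Fully connected layer: every hidden unit connects to every input pixel.
fc_params = (H * W * C) * 10 + 10      # weights + biases

# Convolutional layer: 10 filters of size 5x5x3, shared across all positions.
conv_params = (5 * 5 * C) * 10 + 10    # weights + biases

print(fc_params)    # 30730
print(conv_params)  # 760
```

The convolutional layer gets this reduction from weight sharing: one small filter is reused at every spatial location instead of learning a separate weight per pixel.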

History

Mark I Perceptron - 1957 - Frank Rosenblatt

Widrow and Hoff - 1960 - Adaline/Madaline, early neural networks

1986 - Rumelhart et al. - Backpropagation
