Neural Nets: Backpropagation intuition

As I was preparing for a discussion with a group of friends on backpropagation, I came across this video by Professor Patrick Winston, and I could not be happier. In only about 40 minutes, Prof Winston gives the most impressive intuition I have come across of what backpropagation is. He starts from the definition of the problem and then follows through with the representation of the inputs. He uses a somewhat unconventional computational graph to show the forward propagation of the inputs all the way to the output of the loss function. Then, in a pure calculus demonstration, he evaluates the partial derivative at each gate of his computational graph as a way to measure the impact that a small change in a gate's input has on its immediate output. None of this is new, since this is exactly what backpropagation is about, but what makes the lecture unique is the professor's sense of humour and the number of sharp observations he makes at each step. He ends the session by saying something very important, which I will quote:

When you spend your time thinking about the input representation of your problem, it turns out that the curve fitting is easier.

That’s a genius observation. Most people fail to grasp this basic principle, and as a result they can’t come up with a computational graph that simplifies their problem.
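To make the gate-by-gate picture concrete, here is a minimal sketch of the same idea in Python: a toy two-gate network with sigmoid units and a squared-error loss, a forward pass that caches the intermediate values, and a backward pass that multiplies the local partial derivatives along the path (the chain rule). This is not Prof Winston's exact example from the lecture; the network shape, the names (x, w1, w2, d) and the learning rate are assumptions I made purely for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    # Forward propagation: push the input through each gate and cache
    # the intermediate values, because the backward pass reuses them.
    p1 = w1 * x          # first multiply gate
    y1 = sigmoid(p1)     # first sigmoid gate
    p2 = w2 * y1         # second multiply gate
    y2 = sigmoid(p2)     # second sigmoid gate (network output)
    return p1, y1, p2, y2

def backward(x, w1, w2, d):
    # Backward pass: evaluate the local partial derivative at each gate
    # and multiply them along the path (the chain rule). The result says
    # how much a small change in each weight moves the loss.
    p1, y1, p2, y2 = forward(x, w1, w2)
    loss = 0.5 * (y2 - d) ** 2

    dloss_dy2 = y2 - d                    # d(loss)/d(y2)
    dy2_dp2 = y2 * (1.0 - y2)             # sigmoid derivative at the output gate
    dloss_dw2 = dloss_dy2 * dy2_dp2 * y1  # chain rule down to w2

    dp2_dy1 = w2
    dy1_dp1 = y1 * (1.0 - y1)             # sigmoid derivative at the hidden gate
    dloss_dw1 = dloss_dy2 * dy2_dp2 * dp2_dy1 * dy1_dp1 * x  # chain rule down to w1

    return loss, dloss_dw1, dloss_dw2

# A few gradient-descent steps with an assumed learning rate,
# just to show the loss going down.
x, d = 1.0, 0.8
w1, w2 = 0.5, -0.3
lr = 0.5
for step in range(5):
    loss, g1, g2 = backward(x, w1, w2, d)
    w1 -= lr * g1
    w2 -= lr * g2
    print(f"step {step}: loss={loss:.4f}")
```

Notice that each gate only ever needs its own local derivative and the values cached on the forward pass; the chain rule stitches those local pieces together, which is the whole point of the gate-by-gate intuition in the lecture.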

Overall this is a good video and I am glad I came across it. It does not cover the regularization part of the optimization problem, but I guess that falls outside the scope of the video.

I have embedded the video below for those who might be interested in watching it. Enjoy.
