This note is based on the Coursera course by Andrew Ng.
(It is just a study note for myself. The sentences may sometimes be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :))
The computations of a neural network are organized in terms of a forward propagation step, in which we compute the output of the neural network, followed by a backward propagation step, which we use to compute gradients or derivatives. The computation graph explains why it is organized this way.
This explains what forward propagation does and how we get the output.
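As a minimal sketch of the forward pass, here is the running example from the lecture as I remember it, J = 3(a + bc), computed left to right through the graph (the sample inputs a = 5, b = 3, c = 2 are assumptions from that example):

```python
# Forward pass through the computation graph for J = 3(a + bc).
# Values a = 5, b = 3, c = 2 are the sample inputs assumed from the lecture example.
a, b, c = 5, 3, 2

u = b * c   # first node:  u = bc     -> 6
v = a + u   # second node: v = a + u  -> 11
J = 3 * v   # output node: J = 3v     -> 33

print(u, v, J)  # 6 11 33
```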
Let's see how we can use that graph to figure out derivative calculations for the function J. Say we want to compute the derivative of J with respect to v. In the example J = 3v, so the derivative of J with respect to v is equal to 3, because the increase in J is 3 times the increase in v.
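A quick way to see this numerically is to nudge v by a small amount and check how much J moves (a sketch, reusing the assumed example values):

```python
# Nudge v by eps and observe that J moves by about 3 * eps,
# which is the statement dJ/dv = 3 (since J = 3v in the example).
v = 11
eps = 0.001

J        = 3 * v
J_nudged = 3 * (v + eps)

print((J_nudged - J) / eps)  # ~3.0
```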
Now, let's look at another example. What is dJ/da? In other words, if we bump up the value of a, how does that affect the value of J? The change to a propagates to the right of the computation graph: if you change a, that changes v, and through changing v, that changes J. This is called the chain rule: if a affects v and v affects J, then the amount that J changes when you nudge a is the product of how much v changes when you nudge a times how much J changes when you nudge v.
So we can get dJ/da from dJ/dv and dv/da: dJ/da = (dJ/dv)(dv/da).
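In the assumed example v = a + u, nudging a by eps raises v by eps (dv/da = 1), and dJ/dv = 3, so dJ/da = 3. A small sketch that applies the chain rule and double-checks it by nudging a directly:

```python
# Chain rule in the example J = 3v, v = a + u:
# dv/da = 1 and dJ/dv = 3, so dJ/da = dJ/dv * dv/da = 3.
dJ_dv = 3
dv_da = 1
dJ_da = dJ_dv * dv_da
print(dJ_da)  # 3

# Numerical check: nudge a and re-run the forward pass.
a, b, c, eps = 5, 3, 2, 0.001
J      = 3 * (a + b * c)
J_plus = 3 * ((a + eps) + b * c)
print((J_plus - J) / eps)  # ~3.0
```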
Let's keep computing derivatives. Next, let's look at the values b and c (see the sketch below).
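Continuing the chain rule one more step back through u = bc gives dJ/du = 3, and then dJ/db = (dJ/du)(du/db) = 3c and dJ/dc = (dJ/du)(du/dc) = 3b. A sketch with the same assumed example values:

```python
# Backward step through u = bc:
# dJ/du = dJ/dv * dv/du = 3 * 1 = 3,
# dJ/db = dJ/du * du/db = 3 * c,
# dJ/dc = dJ/du * du/dc = 3 * b.
b, c = 3, 2

dJ_du = 3 * 1
dJ_db = dJ_du * c  # 6
dJ_dc = dJ_du * b  # 9

print(dJ_du, dJ_db, dJ_dc)  # 3 6 9
```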