[Neural Network and Deep Learning] Neural Network Representation

김 정 환 2020. 3. 16. 12:19

This note is based on the Coursera course by Andrew Ng.

(This is just a study note for me. The sentences could be copied or awkward sometimes because I am not a native speaker. But I want to learn Deep Learning in English, so everything will get better and better :))

Let's see what this image below means.

You saw the overview of a single hidden layer NN. Now, let's go through the details of exactly how this neural network computes these outputs. We've said before that the circle in logistic regression really represents two steps of computation. We compute z as a linear function of the input, z = w^T x + b, and then we compute the activation a as a sigmoid function of z.
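The two steps of a single logistic regression unit can be sketched in NumPy like this (the toy input and the zero-initialized weights are my own choices for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy input: a single example with 3 features
x = np.array([[1.0], [2.0], [3.0]])   # shape (3, 1)
w = np.zeros((3, 1))                  # weights, shape (3, 1)
b = 0.0                               # bias

# Step 1: the linear part
z = np.dot(w.T, x) + b                # z = w^T x + b, shape (1, 1)
# Step 2: the activation
a = sigmoid(z)                        # prediction y-hat
```

With the weights and bias at zero, z is 0 and the sigmoid gives a = 0.5, i.e. the unit is maximally uncertain before training.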

So, a neural network just does this a lot more times. Similar to the logistic regression unit on the left, each node in the hidden layer performs the same two steps of computation: first the linear part, then the sigmoid activation.
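An unvectorized sketch of this, looping over the hidden nodes one at a time (the layer sizes and random inputs below are just toy values of my choosing):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_x, n_h = 3, 4                        # 3 input features, 4 hidden units (toy sizes)
rng = np.random.default_rng(0)
x  = rng.standard_normal((n_x, 1))     # one input example
W1 = rng.standard_normal((n_h, n_x))   # row i holds the weights of hidden node i
b1 = np.zeros((n_h, 1))

# Each hidden node i repeats logistic regression's two steps
z1 = np.zeros((n_h, 1))
a1 = np.zeros((n_h, 1))
for i in range(n_h):
    z1[i] = np.dot(W1[i, :], x) + b1[i]   # step 1 for node i: z[1]_i = w[1]_i^T x + b[1]_i
    a1[i] = sigmoid(z1[i])                # step 2 for node i: a[1]_i = sigmoid(z[1]_i)
```

The per-node loop makes the repetition explicit, which is exactly what the vectorized form in the next section eliminates.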

We are going to start by showing how to compute z as a vector; it turns out you can do it as follows. Stack the row vectors w[1]_i^T into a matrix W[1], so that z[1] = W[1] x + b[1] and a[1] = sigmoid(z[1]), and the loop over hidden nodes disappears.
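A minimal sketch of the full vectorized forward pass for one example, assuming toy layer sizes of my own choosing (3 inputs, 4 hidden units, 1 output):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_x, n_h = 3, 4
rng = np.random.default_rng(1)
x  = rng.standard_normal((n_x, 1))
W1 = rng.standard_normal((n_h, n_x)); b1 = np.zeros((n_h, 1))
W2 = rng.standard_normal((1, n_h));   b2 = np.zeros((1, 1))

# Forward pass for one example, with no per-node loop
z1 = W1 @ x + b1          # (4, 1): z[1] = W[1] x + b[1]
a1 = sigmoid(z1)          # (4, 1): a[1] = sigmoid(z[1])
z2 = W2 @ a1 + b2         # (1, 1): z[2] = W[2] a[1] + b[2]
a2 = sigmoid(z2)          # (1, 1): prediction y-hat
```

One matrix-vector product per layer replaces the loop over that layer's nodes.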

We saw how to compute the prediction of a neural network given a single training example. Now, let's see how to vectorize across multiple training examples. The top-left four equations come from the last image; you can derive them if you work through the calculation. With an unvectorized implementation, to compute the predictions of all our training examples we need a loop for i = 1 to m. (In z[1](i), the superscript (i) refers to the i-th training example.)
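A sketch of both versions, assuming the same toy network as before and m = 5 examples: the for-i-=-1-to-m loop on the left, and the vectorized form where the examples are stacked as columns of X so that Z[1] = W[1] X + b[1] (broadcasting adds b to every column):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_x, n_h, m = 3, 4, 5
rng = np.random.default_rng(2)
X  = rng.standard_normal((n_x, m))     # column i is training example x(i)
W1 = rng.standard_normal((n_h, n_x)); b1 = np.zeros((n_h, 1))
W2 = rng.standard_normal((1, n_h));   b2 = np.zeros((1, 1))

# Unvectorized: for i = 1 to m, run the single-example forward pass
A2_loop = np.zeros((1, m))
for i in range(m):
    x_i = X[:, i:i+1]                          # x(i), shape (n_x, 1)
    a1  = sigmoid(W1 @ x_i + b1)               # a[1](i)
    A2_loop[:, i:i+1] = sigmoid(W2 @ a1 + b2)  # a[2](i) = y-hat(i)

# Vectorized: one matrix product handles all m examples at once
A1 = sigmoid(W1 @ X + b1)             # Z[1] = W[1] X + b[1], shape (n_h, m)
A2 = sigmoid(W2 @ A1 + b2)            # shape (1, m): one prediction per column
```

Both versions compute the same predictions; the vectorized one simply lets the matrix product do the loop.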
