
Software Courses / Neural Network and Deep Learning (15)

[Neural Network and Deep Learning] Hyper parameters

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO What are hyperparameters? MAIN The parameters of our model are W and b, and there are other things we need to know such as the learning..
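(Not from the excerpt above, just my own tiny sketch of the parameter/hyperparameter split: W and b are learned, while the learning rate and the layer sizes are values we pick by hand. All the numbers and the placeholder gradients below are made up.)

import numpy as np

# Hyperparameters: chosen by us before training; they control how W and b are learned.
learning_rate = 0.01        # alpha
num_iterations = 1000
layer_dims = [4, 3, 1]      # units per layer, i.e. the architecture

# Parameters: learned by the model during training.
W = np.random.randn(3, 4) * 0.01
b = np.zeros((3, 1))

# One gradient descent update (placeholder gradients): alpha scales the step size.
dW, db = np.ones_like(W), np.ones_like(b)
W = W - learning_rate * dW
b = b - learning_rate * db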

[Neural Network and Deep Learning] Building blocks of deep neural networks

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO We have already seen the basic building blocks of forward propagation and back propagation, the key components we need to implem..

[Neural Network and Deep Learning] Why deep representations?

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO We have all been hearing that deep neural networks work really well for a lot of problems, and it's not just that they need to b..

[Neural Network and Deep Learning] Forward Propagation in a Deep Network

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) Let's see how we can perform forward propagation in a deep network. Given a single training example x, here is how we compute the acti..
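(A rough numpy sketch of that computation, z[l] = W[l] a[l-1] + b[l] and a[l] = g[l](z[l]); the dictionary layout parameters['W1'], parameters['b1'], ... and the ReLU/sigmoid choice are my own assumptions here, not the post's notation.)

import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward_propagation(x, parameters):
    """Forward pass through an L-layer network for one example x (column vector)."""
    L = len(parameters) // 2          # parameters holds W1, b1, ..., WL, bL
    a = x                             # a[0] is the input
    for l in range(1, L):
        z = parameters['W' + str(l)] @ a + parameters['b' + str(l)]   # z[l] = W[l] a[l-1] + b[l]
        a = relu(z)                                                   # a[l] = g(z[l])
    zL = parameters['W' + str(L)] @ a + parameters['b' + str(L)]
    return sigmoid(zL)                # output layer: sigmoid for binary classification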

[Neural Network and Deep Learning] Deep L-layer neural network

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) By now, we have actually seen most of the ideas we need to implement a deep neural network. Forward propagation, back propagation with..

[Neural Network and Deep Learning] Random Initialization

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) When we train our neural network, it's important to initialize the weights randomly. For logistic regression, it was okay to initialize ..
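(A minimal numpy sketch of that random initialization for a one-hidden-layer network; the layer sizes here are made-up examples.)

import numpy as np

n_x, n_h, n_y = 2, 4, 1                  # input, hidden, output sizes (example values)

W1 = np.random.randn(n_h, n_x) * 0.01    # small random weights break the symmetry
b1 = np.zeros((n_h, 1))                  # biases can safely start at zero
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))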

[Neural Network and Deep Learning] Derivatives of activation functions

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) When we implement back propagation for our neural network, we need to compute the derivatives of the activation functions. So, let's ta..
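(The derivatives themselves, written as small numpy helpers: sigmoid gives a(1 - a), tanh gives 1 - a^2, and ReLU gives 1 for z > 0 and 0 otherwise. The function names are mine.)

import numpy as np

def sigmoid_derivative(z):
    a = 1 / (1 + np.exp(-z))
    return a * (1 - a)                   # g'(z) = a (1 - a)

def tanh_derivative(z):
    a = np.tanh(z)
    return 1 - a ** 2                    # g'(z) = 1 - a^2

def relu_derivative(z):
    return (z > 0).astype(float)         # g'(z) = 1 if z > 0 else 0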

[Neural Network and Deep Learning] Activation functions

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) When we build our neural network, one of the choices we get to make is what activation function to use in the hidden layers, as well ..
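(The usual candidates in one small numpy sketch: sigmoid, tanh, ReLU, and leaky ReLU; the 0.01 slope for leaky ReLU is just the common default, not a rule.)

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def tanh(z):
    return np.tanh(z)

def relu(z):
    return np.maximum(0, z)

def leaky_relu(z, slope=0.01):
    return np.where(z > 0, z, slope * z)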

[Neural Network and Deep Learning] Neural Network Representation

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) Let's see what the image below means. You saw the overview of a single hidden layer NN. Now, let's go through the details of exactly h..

[Neural Network and Deep Learning] Vectorization

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) In the deep learning era, we often find ourselves training on relatively large data sets, because that is when deep learning algorithms tend t..
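(The classic demo of this point, sketched in numpy: the same dot product with and without an explicit for loop; the vectorized call is typically orders of magnitude faster. The array size is arbitrary.)

import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Non-vectorized: an explicit loop over every element.
c_loop = 0.0
for i in range(n):
    c_loop += a[i] * b[i]

# Vectorized: one library call does the same computation.
c_vec = np.dot(a, b)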
