
Software Courses/Improving Deep Neural Networks 21

[Improving: Hyper-parameter tuning, Regularization and Optimization] Mini-batch gradient descent

This note is based on the Coursera course by Andrew Ng. (This is just a study note for me. Some sentences may be copied or sound awkward since I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO One thing that makes it more difficult is that Deep Learning tends to work best in the regime of big data. We are able to train ..

[Improving: Hyper-parameter tuning, Regularization and Optimization] Programming - Gradient Checking

INTRO To make sure that our backward propagation is correct, we are going to use gradient checking. Backpropagation computes the gradie..

[Improving: Hyper-parameter tuning, Regularization and Optimization] Programming - Regularization

INTRO We will find where the goalkeeper should kick the ball so that the French team's players can then hit it with their heads. Each ..

[Improving: Hyper-parameter tuning, Regularization and Optimization] Programming - Initialization

INTRO We are going to initialize the weights to separate the blue dots from the red dots. We will use a 3-layer neural network. And we..

[Improving: Hyper-parameter tuning, Regularization and Optimization] Gradient checking

INTRO Sometimes we write all the backprop equations and we are just not 100% sure if we have got all the details right and internal back ..
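The numerical check this post describes can be sketched roughly as below. This is a minimal illustration, not code from the course; `grad_check` and the toy quadratic loss are made-up names for the sketch:

```python
import numpy as np

def grad_check(f, grad_f, theta, eps=1e-7):
    """Compare an analytic gradient to a centered finite difference.

    f: scalar loss as a function of theta; grad_f: its analytic gradient
    (standing in for the cost and the backprop output of a network).
    """
    num_grad = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        # Two-sided difference: (f(theta + eps) - f(theta - eps)) / (2 eps)
        num_grad[i] = (f(plus) - f(minus)) / (2 * eps)
    ana_grad = grad_f(theta)
    # Relative difference; a value around 1e-7 or smaller suggests
    # the analytic gradient (i.e. backprop) is correct.
    return np.linalg.norm(ana_grad - num_grad) / (
        np.linalg.norm(ana_grad) + np.linalg.norm(num_grad))

# Toy example: f(theta) = sum(theta**2), whose gradient is 2 * theta.
theta = np.array([1.0, -2.0, 3.0])
diff = grad_check(lambda t: np.sum(t**2), lambda t: 2 * t, theta)
```

For a real network the same idea applies, just with the parameters flattened into one vector before looping over components.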

[Improving: Hyper-parameter tuning, Regularization and Optimization] Vanishing / Exploding gradients

INTRO One of the problems of training neural networks, especially very deep neural networks, is vanishing and exploding gradients...

[Improving: Hyper-parameter tuning, Regularization and Optimization] Normalizing inputs

INTRO When training a neural network, one of the techniques that will speed up training is normalizing the inputs. MAIN Why normalize inpu..
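Input normalization as covered in the course means subtracting the mean and dividing by the standard deviation of each feature; a minimal sketch, where the toy matrix `X` is made up for illustration:

```python
import numpy as np

# Rows are examples, columns are features with very different scales.
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

# Compute mu and sigma from the training set; the same mu and sigma
# would be reused to normalize the test set.
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_norm = (X - mu) / sigma  # each feature now has mean 0, variance 1
```

With all features on a comparable scale, the cost surface is less elongated and gradient descent can use a larger learning rate.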

[Improving: Hyper-parameter tuning, Regularization and Optimization] Regularization

INTRO If we suspect our neural network is overfitting our data, that is, we have a high variance problem. One of the first things to tr..
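For L2 regularization, the course's approach of adding a Frobenius-norm penalty to the cost can be sketched as follows; `cost_with_l2` is a hypothetical helper name, not from the course code:

```python
import numpy as np

def cost_with_l2(cross_entropy, weights, lambd, m):
    """Add the L2 penalty (lambda / (2m)) * sum_l ||W[l]||_F^2 to a cost.

    cross_entropy: the unregularized cost; weights: list of weight
    matrices W[1..L]; lambd: regularization strength; m: number of
    training examples. (Bias terms are conventionally not penalized.)
    """
    l2 = (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in weights)
    return cross_entropy + l2
```

The matching change in backprop is to add `(lambd / m) * W` to each weight gradient, which is why L2 regularization is also called weight decay.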

[Improving: Hyper-parameter tuning, Regularization and Optimization] Basic recipe for bias and variance problem

INTRO In the last post, we saw how looking at training error and dev error can help us diagnose whether our algorithm has a bias or a ..

[Improving: Hyper-parameter tuning, Regularization and Optimization] Bias/Variance

INTRO What are Bias and Variance? You might have heard of the bias-variance trade-off. Let's see what this means. MAIN ..
