
Software Courses / Improving Deep Neural Networks (21 posts)

[Improving: Hyper-parameter tuning, Regularization and Optimization] Multi-class classification - softmax classifier

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO The name Softmax comes from contrasting it to a Hardmax, which would have taken the vector Z and matched it to a vector like this ..
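Since the excerpt cuts off at the Hardmax comparison, here is a minimal NumPy sketch (my own, not code from the post) contrasting the two: softmax maps the logit vector Z to soft probabilities, while a hardmax maps it to a one-hot vector.

```python
import numpy as np

def softmax(z):
    # Soft decision: exponentiate and normalize so the outputs sum to 1.
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

def hardmax(z):
    # Hard decision: 1 at the position of the largest logit, 0 everywhere else.
    out = np.zeros_like(z, dtype=float)
    out[np.argmax(z)] = 1.0
    return out

z = np.array([5.0, 2.0, -1.0, 3.0])
print(softmax(z))  # roughly [0.84, 0.04, 0.00, 0.11]
print(hardmax(z))  # [1. 0. 0. 0.]
```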

[Improving: Hyper-parameter tuning, Regularization and Optimization] Batch Normalization

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO Batch normalization makes the hyperparameter search problem much easier and makes the neural network much more robust. The choice of hyp..
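As a rough illustration of the technique behind that claim (a sketch with my own naming, not code from the note), batch norm normalizes each layer's pre-activations over the mini-batch and then rescales them with learnable parameters gamma and beta.

```python
import numpy as np

def batch_norm_forward(z, gamma, beta, eps=1e-8):
    # Normalize the pre-activations z (units x examples) over the mini-batch,
    # then let gamma and beta set the new scale and mean.
    mu = z.mean(axis=1, keepdims=True)
    var = z.var(axis=1, keepdims=True)
    z_norm = (z - mu) / np.sqrt(var + eps)
    return gamma * z_norm + beta

z = np.random.randn(3, 4) * 10 + 5            # toy mini-batch: 3 units, 4 examples
gamma, beta = np.ones((3, 1)), np.zeros((3, 1))
z_tilde = batch_norm_forward(z, gamma, beta)  # per-unit mean ~0, variance ~1
```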

[Improving: Hyper-parameter tuning, Regularization and Optimization] Hyperparameters tuning

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO How do we find a good setting for these hyperparameters? MAIN Tuning process One of the painful things about training a deep net i..
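One concrete piece of that tuning process is sampling hyperparameters at random on an appropriate scale rather than on a grid; the sketch below (ranges are illustrative, not taken from the post) samples the learning rate on a log scale and the momentum term via 1 - beta.

```python
import numpy as np

def sample_hyperparameters():
    r = -4 * np.random.rand()        # r in [-4, 0]
    alpha = 10 ** r                  # learning rate on a log scale, 1e-4 .. 1
    r = -2 * np.random.rand() - 1    # r in [-3, -1]
    beta = 1 - 10 ** r               # momentum in [0.9, 0.999], denser near 1
    return alpha, beta

candidates = [sample_hyperparameters() for _ in range(25)]  # random search, not a grid
```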

[Improving: Hyper-parameter tuning, Regularization and Optimization] Programming - Optimization(Gradient Descent, Mini-batch, Momentum, Adam)

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) Gradient descent A simple optimization method in machine learning is gradient descent (GD). ..
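The code block in the post is truncated here, so as a stand-in, this is roughly what a plain gradient descent update looks like with parameters stored as W1, b1, ..., WL, bL in a dictionary (layout assumed from the course assignments, not copied from the post).

```python
def update_parameters_with_gd(parameters, grads, learning_rate):
    # One GD step per layer: theta = theta - learning_rate * d(theta).
    L = len(parameters) // 2  # number of layers
    for l in range(1, L + 1):
        parameters["W" + str(l)] -= learning_rate * grads["dW" + str(l)]
        parameters["b" + str(l)] -= learning_rate * grads["db" + str(l)]
    return parameters
```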

[Improving: Hyper-parameter tuning, Regularization and Optimization] The Problem of local optima

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO People used to worry a lot about the optimization algorithm getting stuck in bad local optima. But as this theory of deep learni..

[Improving: Hyper-parameter tuning, Regularization and Optimization] Learning rate decay

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO One of the things that might help speed up the learning algorithm is to slowly reduce the learning rate over time. Let's start with an exam..
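A common schedule from the lecture divides the initial rate by 1 + decay_rate * epoch_num; a small sketch (the alpha0 and decay_rate values are just for illustration):

```python
def decayed_learning_rate(alpha0, decay_rate, epoch_num):
    # The rate shrinks every epoch: alpha = alpha0 / (1 + decay_rate * epoch_num)
    return alpha0 / (1 + decay_rate * epoch_num)

for epoch in range(4):
    print(decayed_learning_rate(alpha0=0.2, decay_rate=1.0, epoch_num=epoch))
    # 0.2, 0.1, 0.0667, 0.05, ...
```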

[Improving: Hyper-parameter tuning, Regularization and Optimization] Optimization - Adam

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO Adam stands for Adaptive Moment Estimation. The Adam optimization algorithm is basically taking Momentum and RMSprop and putting..
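A sketch of a single Adam update for one parameter array, combining the Momentum-style first moment with the RMSprop-style second moment and bias-correcting both (defaults beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8 as recommended in the lecture; the function name is mine):

```python
import numpy as np

def adam_step(theta, grad, v, s, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    v = beta1 * v + (1 - beta1) * grad       # Momentum: first moment of the gradient
    s = beta2 * s + (1 - beta2) * grad ** 2  # RMSprop: second moment of the gradient
    v_hat = v / (1 - beta1 ** t)             # bias correction (t = step count, from 1)
    s_hat = s / (1 - beta2 ** t)
    theta = theta - alpha * v_hat / (np.sqrt(s_hat) + eps)
    return theta, v, s
```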

[Improving: Hyper-parameter tuning, Regularization and Optimization] Optimization - RMSprop

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO RMSprop means Root Mean Square prop, which can also speed up gradient descent. MAIN In order to provide intuition for this example..
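A sketch of the RMSprop update for one parameter array (names and hyperparameter values are illustrative): keep an exponentially weighted average of the squared gradients and divide each step by its square root, which damps the directions that oscillate.

```python
import numpy as np

def rmsprop_step(theta, grad, s, alpha=0.01, beta=0.9, eps=1e-8):
    s = beta * s + (1 - beta) * grad ** 2              # running average of squared gradients
    theta = theta - alpha * grad / (np.sqrt(s) + eps)  # smaller steps where gradients are large
    return theta, s
```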

[Improving: Hyper-parameter tuning, Regularization and Optimization] Optimization - Momentum

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO MAIN WHAT If we take steps of gradient descent, it slowly oscillates toward the minimum. This up-and-down oscillation slows down ..
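A sketch of the Momentum update (my own naming): the exponentially weighted average of past gradients cancels the up-and-down oscillation while the consistent component toward the minimum accumulates.

```python
def momentum_step(theta, grad, v, alpha=0.01, beta=0.9):
    v = beta * v + (1 - beta) * grad   # velocity: smoothed gradient direction
    theta = theta - alpha * v          # step along the smoothed direction
    return theta, v
```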

[Improving: Hyper-parameter tuning, Regularization and Optimization] Exponentially weighted averages

This note is based on the Coursera course by Andrew Ng. (It is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English, so everything will get better and better :)) INTRO There are a few optimization algorithms that are faster than gradient descent. In order to understand those algorithms, we need ..
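The core recursion is v_t = beta * v_{t-1} + (1 - beta) * theta_t, which roughly averages the last 1 / (1 - beta) values (about 10 for beta = 0.9); a minimal sketch with made-up, temperature-style toy data:

```python
def exponentially_weighted_average(values, beta=0.9):
    # v_t = beta * v_{t-1} + (1 - beta) * theta_t
    v, out = 0.0, []
    for theta in values:
        v = beta * v + (1 - beta) * theta
        out.append(v)
    return out

daily_temps = [4, 9, 6, 10, 12, 8, 14, 13]  # toy data
smoothed = exponentially_weighted_average(daily_temps)
```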
