This note is based on the Coursera course by Andrew Ng.
(This is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker, but I want to learn Deep Learning in English. So everything will get better and better :))
INTRO
We will learn the practical aspects of how to make our neural network work well, ranging from hyperparameter tuning to how to set up our data, to how to make sure our optimization algorithm runs quickly so that our learning algorithm learns in a reasonable time.
Making good choices in how we set up our training, development, and test sets can make a huge difference in helping us quickly find a good, high-performance neural network. When we are starting on a new application, it is almost impossible to correctly guess the right values for all hyperparameters.
MAIN
Setting up our data well in terms of train, development, and test sets can make us much more efficient. We will often split the dataset into train, dev, and test sets. If we have a relatively small dataset, we traditionally divide it 60% / 20% / 20%. But if we have a much larger dataset, it is fine to make the dev and test sets much smaller than 20%.
[specific guideline]
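To make the ratios concrete, here is a minimal Python sketch of such a split. The split_dataset helper and the 98% / 1% / 1% ratio for the large-data case are my own illustration under these notes' assumptions, not code from the course.

```python
# A minimal sketch (not from the course) of splitting a dataset into
# train / dev / test sets. The helper name and the 98/1/1 large-data
# ratio are illustrative choices, not a prescribed recipe.
import numpy as np

def split_dataset(X, y, train_frac, dev_frac, seed=0):
    """Shuffle the examples once, then slice into train / dev / test."""
    m = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.permutation(m)
    X, y = X[idx], y[idx]

    n_train = int(m * train_frac)
    n_dev = int(m * dev_frac)
    train = (X[:n_train], y[:n_train])
    dev = (X[n_train:n_train + n_dev], y[n_train:n_train + n_dev])
    test = (X[n_train + n_dev:], y[n_train + n_dev:])
    return train, dev, test

# Relatively small dataset (e.g., 10,000 examples): traditional 60/20/20 split.
X_small = np.random.randn(10_000, 64)
y_small = np.random.randint(0, 2, 10_000)
train, dev, test = split_dataset(X_small, y_small, train_frac=0.60, dev_frac=0.20)

# Much larger dataset (e.g., 1,000,000 examples): dev and test can be far
# smaller than 20% and still be large enough to compare models, e.g. 98/1/1.
X_big = np.random.randn(1_000_000, 8)
y_big = np.random.randint(0, 2, 1_000_000)
train, dev, test = split_dataset(X_big, y_big, train_frac=0.98, dev_frac=0.01)
```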
One thing we should know about is mismatched train/test distributions. Let's say we are collecting cat pictures. The training set of cat pictures comes from websites, and the dev/test sets come from users' phones. It turns out the website pictures tend to be high-resolution, professionally framed photos, while the phone pictures are often blurrier and lower resolution. So these two distributions of data may be different.
The rule of thumb is to make sure that the dev and test sets come from the same distribution.
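One way to follow this rule of thumb is sketched below. This is my own illustration, not code from the course: the dev and test sets are both carved out of a single pool (the phone pictures), so they share one distribution, and everything else goes into training.

```python
# A sketch (illustration only) of keeping dev and test on the same
# distribution: draw both from the phone-picture pool, and use the
# web pictures plus the leftover phone pictures for training.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 200,000 web images and 10,000 phone images (features only).
web_images = rng.standard_normal((200_000, 32))
phone_images = rng.standard_normal((10_000, 32))

# Shuffle the phone pool once, then carve dev and test out of it.
phone_images = phone_images[rng.permutation(len(phone_images))]
dev_set = phone_images[:2_500]
test_set = phone_images[2_500:5_000]

# Training set: all web images plus the remaining phone images.
train_set = np.concatenate([web_images, phone_images[5_000:]], axis=0)

print(train_set.shape, dev_set.shape, test_set.shape)
```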
CONCLUSION