This note is based on the Coursera course by Andrew Ng.
(This is just a study note for me. Some sentences may be copied or sound awkward because I am not a native speaker. But I want to learn Deep Learning in English, so everything will get better and better :))
INTRO
In the last post, we saw how looking at training error and dev error can help us diagnose whether our algorithm has a bias problem, a variance problem, or maybe both. It turns out that this information lets us use the basic recipe for machine learning much more systematically to improve our algorithms' performance.
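For instance, if human-level (Bayes) error is close to 0%, then a 1% training error with an 11% dev error suggests high variance, while a 15% training error with a 16% dev error suggests high bias. Here is a minimal sketch of this diagnosis in Python (the 5% threshold and the error numbers are just illustrative assumptions on my part):

    # A rough bias/variance diagnosis from error rates in [0, 1].
    # The 0.05 threshold is an arbitrary illustrative choice.
    def diagnose(train_error, dev_error, bayes_error=0.0, threshold=0.05):
        high_bias = (train_error - bayes_error) > threshold     # can't even fit the training set
        high_variance = (dev_error - train_error) > threshold   # fits train but not dev
        if high_bias and high_variance:
            return "high bias and high variance"
        if high_bias:
            return "high bias"
        if high_variance:
            return "high variance"
        return "low bias and low variance"

    print(diagnose(train_error=0.01, dev_error=0.11))  # -> high variance
    print(diagnose(train_error=0.15, dev_error=0.16))  # -> high bias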
MAIN
After training an initial model, we first ask, "Does our algorithm have high bias?" If it does, we can try a bigger network, train longer, or find a new network architecture.
Once we have reduced bias to an acceptable amount, we then ask, "Does our algorithm have a variance problem?" If it has high variance, the best way to solve it is to get more data. Or we could try regularization.
If we have a high bias problem, getting more data is not going to help. So being clear on whether we have a bias problem, a variance problem, or both helps us focus on selecting the most useful things to try (see the sketch below).
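To make the order of operations concrete, here is a minimal sketch of the recipe as a loop. The train_and_evaluate helper and all its numbers are hypothetical toy stand-ins of my own; a real project would train a network and measure errors on the train and dev sets:

    # Hypothetical stand-in so the sketch runs end to end: more capacity
    # lowers training error, more data shrinks the train/dev gap.
    def train_and_evaluate(capacity, n_examples):
        train_error = max(0.01, 0.30 - 0.05 * capacity)
        dev_error = train_error + max(0.01, 0.20 - 0.02 * (n_examples // 1000))
        return train_error, dev_error

    capacity, n_examples = 1, 1000
    while True:
        train_error, dev_error = train_and_evaluate(capacity, n_examples)
        if train_error > 0.05:                    # high bias: fix the fit first
            capacity += 1                         # e.g. a bigger network
        elif dev_error - train_error > 0.05:      # high variance: generalize better
            n_examples += 5000                    # e.g. get more data
        else:
            break                                 # both acceptable: done
    print(capacity, n_examples, train_error, dev_error)

Note that bias is handled first: until the model fits the training set well, collecting more data is wasted effort.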
There used to be a lot of discussion about the so-called bias-variance tradeoff. The reason was that most techniques would reduce bias but increase variance, or reduce variance but increase bias; in the pre-deep-learning era, we didn't have many tools that reduced just one without hurting the other. But in the modern deep learning era, so long as we can keep training a bigger network and keep getting more data, this tradeoff largely disappears: training a bigger network almost always reduces our bias without necessarily hurting our variance, and getting more data reduces variance without hurting bias.
CONCLUSION
This is the basic structure of how to organize our machine learning problem: diagnose bias and variance, and then select the right operations to make progress on our problem.