
[Neural Network and Deep Learning] Vectorization

김 정 환 2020. 3. 10. 12:10

This note is based on the Coursera course by Andrew Ng.

(This is just a study note for me. Some sentences may be copied or awkward because I am not a native speaker. But I want to learn Deep Learning in English, so everything will get better and better :))

 

 

In the deep learning era, we usually find ourselves training on relatively large data sets, because that is when deep learning algorithms tend to shine. So the ability to perform vectorization has become a key skill.

 

What is vectorization? In logistic regression we need to compute z = w^T x + b, where w is a column vector and x is also a column vector. To compute w^T x, we can write either a non-vectorized or a vectorized implementation.

Python code
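A minimal sketch of this comparison in NumPy (the vector length n and the variable names are my own illustrative choices):

```python
import time
import numpy as np

n = 1000000                     # illustrative size; large enough to time
w = np.random.rand(n)
x = np.random.rand(n)
b = 0.0

# Non-vectorized: an explicit for-loop over every element.
tic = time.time()
z = 0.0
for i in range(n):
    z += w[i] * x[i]
z += b
toc = time.time()
print("For-loop:  ", z, "-", 1000 * (toc - tic), "ms")

# Vectorized: a single NumPy call replaces the whole loop.
tic = time.time()
z = np.dot(w, x) + b
toc = time.time()
print("Vectorized:", z, "-", 1000 * (toc - tic), "ms")
```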


Let's compare how differently they perform in a Jupyter notebook. In both cases, the vectorized version and the non-vectorized version computed the same value. The vectorized version took about 36 milliseconds, while the explicit for-loop, non-vectorized version took about 1278 ms. So the non-vectorized version took roughly 35 times longer than the vectorized version. (The exact times will differ depending on your computer's performance.) Our code will run much faster if we vectorize it.


The rule of thumb to keep in mind is: when we are programming our neural networks, or even just a logistic regression, avoid explicit for-loops whenever possible. The code will often run faster than with an explicit for-loop. Let's take a look at more vectorization examples.

Ex) Say we need to apply the exponential operation to every element of a matrix/vector.
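In NumPy this becomes a single call instead of a loop (v below is just an illustrative vector):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])

# Non-vectorized: apply exp to one element at a time.
u = np.zeros(v.shape)
for i in range(len(v)):
    u[i] = np.exp(v[i])

# Vectorized: one call handles every element at once.
u = np.exp(v)

# NumPy has many element-wise functions like this:
# np.log(v), np.abs(v), np.maximum(v, 0), v**2, 1/v, ...
```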

 

Ex) Logistic regression derivatives
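The derivative computation originally uses two for-loops: one over the m training examples and an inner one over the n features. As a first step, we can treat dw as a vector and drop the inner loop. A minimal sketch, assuming X is (n, m) with the examples as columns, Y is (1, m), w is (n, 1), and b is a scalar (the function names are my own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_pass(w, b, X, Y):
    n, m = X.shape
    dw = np.zeros((n, 1))             # dw as a vector: no loop over features
    db = 0.0
    for i in range(m):                # the one remaining loop, over examples
        x_i = X[:, i].reshape(n, 1)
        z_i = float(np.dot(w.T, x_i)) + b
        a_i = sigmoid(z_i)
        dz_i = a_i - float(Y[0, i])
        dw += x_i * dz_i              # vectorized update over all n features
        db += dz_i
    return dw / m, db / m
```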


In those logistic regression derivatives, we vectorized part of the implementation. Let's talk more about vectorizing logistic regression in detail. First, examine the forward propagation steps of logistic regression. Instead of needing to loop over the m training examples to compute z and a one at a time, we can implement it with two easy lines of code, as in the sketch below.
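A minimal sketch of those two lines, assuming X stacks the m training examples as columns (shape (n, m)), w is (n, 1), and b is a scalar (the sizes are illustrative):

```python
import numpy as np

n, m = 3, 5                       # illustrative sizes
X = np.random.rand(n, m)          # m training examples as columns
w = np.random.rand(n, 1)
b = 0.1

# The two lines: z and a for all m examples at once.
Z = np.dot(w.T, X) + b            # (1, m); broadcasting adds b to each entry
A = 1.0 / (1.0 + np.exp(-Z))      # element-wise sigmoid, also (1, m)
```

Note that Python broadcasting expands the scalar b into a (1, m) row vector, so no loop is needed to add it.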


Now let's see how we can use vectorization to also perform the gradient computations for all m training examples. As we saw, the logistic regression derivatives used two for-loops, and we will get rid of both of them. The only for-loop left is the outer one over gradient-descent iterations, as in the sketch below.
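A minimal sketch of one fully vectorized training loop, under the same assumed shapes as above (the learning rate alpha and the iteration count are illustrative):

```python
import numpy as np

n, m = 3, 5
X = np.random.rand(n, m)                        # m examples as columns
Y = (np.random.rand(1, m) > 0.5).astype(float)  # labels in {0, 1}
w = np.zeros((n, 1))
b = 0.0
alpha = 0.01                                    # learning rate

for it in range(1000):            # the for-loop we keep: iterations
    Z = np.dot(w.T, X) + b        # forward pass for all m examples, (1, m)
    A = 1.0 / (1.0 + np.exp(-Z))
    dZ = A - Y                    # (1, m)
    dw = np.dot(X, dZ.T) / m      # (n, 1); replaces the loop over examples
    db = np.sum(dZ) / m
    w = w - alpha * dw            # gradient-descent update
    b = b - alpha * db
```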

 
