Gradient descent is a fundamental concept in Machine Learning (ML) algorithms.
It is an iterative optimization method for finding a local minimum of a function (here, a measure of the error)
and thus finding a good set of parameters for a given problem.
We pick a starting point at random.
This means picking some initial set of parameters for the model, and then
we measure the error they produce. Then we keep moving down the curve, step by step, in a given direction.
In every step we measure the error again; it should shrink with every step until we reach the minimum of the error, after which the next step would make it increase again.
Gradient descent, in short, means pushing the parameters in the direction that minimizes the error.
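To make the idea concrete, here is a minimal Python sketch of gradient descent on a simple one-variable function, f(x) = x^2, whose minimum is at x = 0. The function, the starting point and the learning rate are just assumptions chosen for illustration:

def f(x):
    return x ** 2                          # a simple bowl-shaped function with its minimum at x = 0

def f_slope(x):
    return 2 * x                           # the derivative of f; it tells us which way is "downhill"

x = 5.0                                    # an arbitrary starting point
learning_rate = 0.1                        # the size of each step
for step in range(50):
    x = x - learning_rate * f_slope(x)     # move against the slope to reduce f(x)
print(x)                                   # ends up very close to 0, the minimum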

Gradient Descent is an algorithm that helps find the best-fit line for a given data set efficiently, in a relatively small number of iterations.
As we know, the formula of a line is: y = mx + b.
How exactly do we take these small steps in the gradient descent algorithm?
In each iteration we need to improve (i.e., reduce the error of) the values of
m - the slope of the line
b - the intercept
As we know, Mean Squared Error (MSE) is a cost measure, so we want to reach the values of m and b with the minimum cost, so that the predicted model is as accurate as possible with respect to the true input values x and the predicted values y.
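As a quick illustration, here is how the MSE could be computed in Python for one candidate pair of m and b (the candidate values below are made up for the example; the data points are the same ones used in the full example at the end):

import numpy as np

x = np.array([1, 2, 3, 4, 5])                # input values
y = np.array([5, 7, 9, 11, 13])              # true values
m, b = 1.0, 0.0                              # a candidate slope and intercept

n = len(x)
y_predicted = m * x + b                      # the line's prediction for every x
mse = (1/n) * sum((y - y_predicted) ** 2)    # average of the squared errors
print(mse)                                   # a lower value means a better fit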
In order to reach the minimum point we need to reduce the step size as we progress along the curve; otherwise, if the steps are too big, we may overshoot the minimum point and jump over to the other side. Now, how do we do that?
We need to calculate the slope at every point, which gives us the right direction to move along the curve.
When updating the b value in every step, this slope is actually the derivative of the cost function (MSE) with respect to b. When updating the m value in every step, this slope is the derivative of the cost function (MSE) with respect to m.
We also need to use a learning rate, which is a step size used in conjunction with the slope.
To calculate the derivatives we need a bit of math and calculus (sounds scary? :) )
We need to find the partial derivative of the cost with respect to b.
We need to find the partial derivative of the cost with respect to m.

Once we have the partial derivatives we have a direction, and we need to take a step in that direction.
The size of that step is set by the learning rate.
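Putting the two partial derivatives and the learning rate together, a single update of m and b could look like this in Python (a sketch of one iteration only; the starting values and the learning rate are just example choices):

import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 9, 11, 13])
m, b = 0.0, 0.0                        # current guesses for the slope and the intercept
learning_rate = 0.008                  # example step size
n = len(x)

y_predicted = m * x + b
m_derivative = -(2/n) * sum(x * (y - y_predicted))   # partial derivative of the MSE with respect to m
b_derivative = -(2/n) * sum(y - y_predicted)         # partial derivative of the MSE with respect to b

m = m - learning_rate * m_derivative   # step m in the direction that lowers the cost
b = b - learning_rate * b_derivative   # step b in the direction that lowers the cost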

Let's now put all of this theory into a Python code example:

import numpy as np

def gradient_descent(x, y):
    m_curr = b_curr = 0                        # initial guesses for the slope and the intercept
    iterations = 10000
    n = len(x)
    learning_rate = 0.008                      # step size
    for i in range(iterations):
        y_predicted = m_curr * x + b_curr
        cost = (1/n) * sum([val**2 for val in (y - y_predicted)])   # MSE for the current m and b
        m_derivative = -(2/n) * sum(x * (y - y_predicted))          # partial derivative of the cost with respect to m
        b_derivative = -(2/n) * sum(y - y_predicted)                # partial derivative of the cost with respect to b
        m_curr = m_curr - learning_rate * m_derivative              # step m in the direction that lowers the cost
        b_curr = b_curr - learning_rate * b_derivative              # step b in the direction that lowers the cost
        print("m = {}, b = {}, cost = {}, iterations = {}".format(m_curr, b_curr, cost, i))

x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 9, 11, 13])
gradient_descent(x, y)
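With the sample data above (every y equals 2x + 3), the printed values of m and b should gradually approach 2 and 3, and the cost should shrink toward zero as the iterations progress.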

