In statistics, linear regression is a linear approach to modelling the relationship between a dependent variable and one or more independent variables. Let x be the independent variable and y be the dependent variable. We will define a linear relationship between these two variables as follows:

y = θ0 + θ1·x

Here θ1 is the slope of the line and θ0 is the intercept. Our goal is to find the values of θ0 and θ1 that make this line fit the data as closely as possible.
To measure how well a given line fits the data, we will use the Mean Squared Error (MSE) function as the loss:

E = (1/n) · Σᵢ (yᵢ − ŷᵢ)²,  where ŷᵢ = θ0 + θ1·xᵢ

Here n is the number of data points, yᵢ is the actual value and ŷᵢ is the value predicted by the current line.
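Before deriving the gradients, it can help to see the loss computed directly. The snippet below is a minimal sketch using NumPy; the array values are made up purely for illustration:

import numpy as np

# Toy ground-truth values and predictions (illustrative only)
y_true = np.array([2.0, 4.0, 6.0, 8.0])
y_pred = np.array([2.2, 3.9, 6.1, 7.8])

# Mean squared error: average of the squared residuals
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # ~0.025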
You might know that at a minimum of a differentiable function, the partial derivatives are equal to 0. Gradient descent builds on this idea: instead of solving for the minimum directly, it starts from an initial guess and repeatedly nudges the parameters (weights) of our model in the direction of the negative gradient of the loss function, so that the loss decreases step by step. Applied to our model, the procedure is:
1. Initialize the weights θ0 and θ1 (for example, to 0).
2. Calculate the partial derivatives of the loss E with respect to θ0 and θ1:

∂E/∂θ0 = −(2/n) · Σᵢ (yᵢ − ŷᵢ)
∂E/∂θ1 = −(2/n) · Σᵢ xᵢ·(yᵢ − ŷᵢ)
3. Update the weights using a small learning rate α:

θ0 ← θ0 − α · ∂E/∂θ0
θ1 ← θ1 − α · ∂E/∂θ1

Repeat steps 2 and 3 for a fixed number of epochs, or until the gradients are close to 0. A toy illustration of the update rule on a single parameter appears below, followed by the full implementation.
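To make the mechanism concrete, here is gradient descent applied to a single parameter, minimizing f(w) = (w − 3)². This is only a sketch of the update rule; the function and starting point are arbitrary assumptions, not part of the regression model itself:

# Minimize f(w) = (w - 3)^2 with gradient descent.
# The derivative is f'(w) = 2 * (w - 3), which equals 0 at the minimum w = 3.
w = 0.0               # initial guess
learning_rate = 0.1

for step in range(50):
    grad = 2 * (w - 3)            # derivative of the loss at the current w
    w = w - learning_rate * grad  # move against the gradient

print(w)  # approaches 3.0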
# Importing libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Preparing the dataset
data = pd.DataFrame({'feature': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15],
                     'label': [2,4,6,8,10,12,14,16,18,20,22,24,26,28,30]})

# Divide the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(data['feature'], data['label'], test_size=0.30)

# Method to make predictions
def predict(X, theta0, theta1):
    # Here the prediction function is: theta0 + theta1*x
    return np.array([theta0 + theta1 * x for x in X])

def linear_regression(X, Y):
    # Initializing variables
    theta0 = 0
    theta1 = 0
    learning_rate = 0.001
    epochs = 300
    n = len(X)

    # Training iteration
    for epoch in range(epochs):
        y_pred = predict(X, theta0, theta1)
        # Here the loss function is: 1/n * sum((y - y_pred)^2), a.k.a. mean squared error (MSE)

        # Derivative of loss w.r.t. theta0
        theta0_d = -(2/n) * sum(Y - y_pred)
        # Derivative of loss w.r.t. theta1
        theta1_d = -(2/n) * sum(X * (Y - y_pred))

        # Gradient descent update
        theta0 = theta0 - learning_rate * theta0_d
        theta1 = theta1 - learning_rate * theta1_d

    return theta0, theta1

# Training the model
theta0, theta1 = linear_regression(X_train, y_train)

# Making predictions
y_pred = predict(X_test, theta0, theta1)

# Evaluating the model
print(list(y_test))
print(y_pred)
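Printing the two lists lets us compare the predictions against the true labels by eye, but it is often more convenient to summarize the fit with a single number. One simple option is scikit-learn's mean_squared_error; the sketch below assumes the y_test and y_pred produced by the code above are still in scope:

from sklearn.metrics import mean_squared_error

# Smaller is better; for this synthetic dataset (label = 2 * feature),
# the trained line should give a value close to 0.
test_mse = mean_squared_error(y_test, y_pred)
print(test_mse)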