# Linear Regression

### 1. Regression Model

In statistics, linear regression is a linear approach to modelling the relationship between a dependent variable and one or more independent variables. Let $$x\_1$$ be the independent variable and $$y$$ the dependent variable. We define a linear relationship between these two variables as follows:

$$
y = \theta\_0+\theta\_1 x\_1
$$
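For intuition, the model is just an intercept plus a slope times the input. A minimal sketch (the parameter values here are arbitrary choices for illustration, not fitted values):

```python
def model(x, theta0=1.0, theta1=2.0):
    """Predict y from x using the linear model y = theta0 + theta1 * x."""
    return theta0 + theta1 * x

print(model(3.0))  # 1.0 + 2.0 * 3.0 = 7.0
```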

### 2. Define Loss Function

We will use the Mean Squared Error (MSE) function, where $$y\_i$$ is the true value and $$\hat{y}\_i$$ is the predicted value for the $$i$$-th sample:

$$
L = \frac{1}{n}\sum\_{i=1}^n (y\_i-\hat{y}\_i)^2
$$
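The loss above can be computed in a few lines; a minimal NumPy sketch:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: the average of the squared residuals."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

print(mse([1, 2, 3], [1, 2, 4]))  # (0 + 0 + 1) / 3 ≈ 0.333
```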

### 3. Utilize the Gradient Descent Algorithm

At a minimum of a differentiable function, its partial derivatives are equal to 0. Gradient descent uses this fact to estimate the parameters, or weights, of our model: it repeatedly moves the weights in the direction opposite the gradient, thereby minimizing the loss function.

1. Initialize the weights: $$\theta\_0 = 0$$ and $$\theta\_1 = 0$$.
2. Calculate the partial derivatives of the loss w.r.t. $$\theta\_0$$ and $$\theta\_1$$:\
   $$d\_{\theta\_0} = -\frac{2}{n} \sum\_{i=1}^n(y\_i - \hat{y}\_i), \quad d\_{\theta\_1} = -\frac{2}{n} \sum\_{i=1}^n(y\_i - \hat{y}\_i) \times x\_i$$
3. Update the weights using the learning rate $$l$$:\
   $$\theta\_0 = \theta\_0 - l \times d\_{\theta\_0}, \quad \theta\_1 = \theta\_1 - l \times d\_{\theta\_1}$$
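The three steps above can be sketched as a single update on toy data (the dataset and learning rate here are assumptions chosen for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])   # toy data with true relationship y = 2x
theta0, theta1 = 0.0, 0.0       # step 1: initialize the weights
lr = 0.01                       # learning rate l

y_hat = theta0 + theta1 * x     # current predictions
n = len(x)
d_theta0 = -(2 / n) * np.sum(y - y_hat)        # step 2: partial derivatives
d_theta1 = -(2 / n) * np.sum((y - y_hat) * x)
theta0 -= lr * d_theta0         # step 3: update the weights
theta1 -= lr * d_theta1
print(theta0, theta1)           # theta0 ≈ 0.08, theta1 ≈ 0.1867 after one step
```

Repeating this update over many epochs moves the weights toward the minimizers of the loss, which is exactly what the implementation below does.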

### Python Implementation

```python
# Importing libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Preparing the dataset
data = pd.DataFrame({'feature': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
                     'label': [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30]})
# Divide the data to training set and test set
X_train, X_test, y_train, y_test = train_test_split(data['feature'], data['label'], test_size=0.30, random_state=42)

# Method to make predictions
def predict(X, theta0, theta1):
    # Here the prediction function is: theta0 + theta1 * x (vectorized over all samples)
    return theta0 + theta1 * np.asarray(X, dtype=float)

def linear_regression(X, Y):
    # Convert to NumPy arrays so the arithmetic is purely positional
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)

    # Initializing variables
    theta0 = 0.0
    theta1 = 0.0
    learning_rate = 0.001
    epochs = 300
    n = len(X)

    # Training iteration
    for epoch in range(epochs):
        y_pred = predict(X, theta0, theta1)

        ## Here the loss function is: 1/n * sum((y - y_pred)^2), a.k.a. mean squared error (MSE)
        # Derivative of loss w.r.t. theta0
        theta0_d = -(2/n) * np.sum(Y - y_pred)
        # Derivative of loss w.r.t. theta1
        theta1_d = -(2/n) * np.sum(X * (Y - y_pred))

        theta0 = theta0 - learning_rate * theta0_d
        theta1 = theta1 - learning_rate * theta1_d

    return theta0, theta1

# Training the model
theta0, theta1 = linear_regression(X_train, y_train)

# Making predictions
y_pred = predict(X_test, theta0, theta1)

# Evaluating the model
print(list(y_test))
print(y_pred)
print('Test MSE:', np.mean((np.asarray(y_test) - y_pred) ** 2))
```
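As a sanity check, the learned parameters can be compared against a closed-form ordinary-least-squares fit. A minimal sketch using scikit-learn's `LinearRegression` (scikit-learn is already a dependency above via `train_test_split`); on this noise-free dataset the exact fit is intercept 0 and slope 2:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.arange(1, 16).reshape(-1, 1)  # same feature values as the dataset above
y = 2 * X.ravel()                    # label = 2 * feature

ols = LinearRegression().fit(X, y)
print(ols.intercept_, ols.coef_[0])  # intercept ≈ 0, slope ≈ 2
```

The gradient-descent estimates should approach these values as the number of epochs grows.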

{% embed url="https://towardsdatascience.com/linear-regression-using-gradient-descent-97a6c8700931" %}
