Logistic Regression
1. Logistic Model
Consider a model with features $x_1, x_2, \ldots, x_n$. Let the binary output be denoted by $Y$, which can take the values 0 or 1. Let $p$ be the probability of $Y = 1$; we can denote it as $p = P(Y=1)$. The mathematical relationship between these variables can be written as:

$$\ln\left(\frac{p}{1-p}\right) = b_0 + b_1x_1 + b_2x_2 + \cdots + b_nx_n$$
Here the term $\frac{p}{1-p}$ is known as the odds and denotes the likelihood of the event taking place. Thus $\ln\left(\frac{p}{1-p}\right)$ is known as the log odds and is simply used to map the probability $p$, which lies between 0 and 1, to the range $(-\infty, +\infty)$. The terms $b_0, b_1, \ldots, b_n$ are parameters (or weights) that we will estimate during training.
Solving the equation above for $p$ gives

$$p = \frac{1}{1 + e^{-(b_0 + b_1x_1 + b_2x_2 + \cdots + b_nx_n)}}$$

It is actually the sigmoid function!
We will use the above equation to make our predictions. Before that, we will train our model to obtain the parameter values that result in the least error.
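As a quick illustration, here is a minimal sketch of the sigmoid and the resulting prediction function in Python (the names `sigmoid`, `predict_proba`, `weights`, and `bias` are our own, not from the original):

```python
import numpy as np

def sigmoid(z):
    """Map log odds z in (-inf, +inf) to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(X, weights, bias):
    """Compute p = sigmoid(b0 + b1*x1 + ... + bn*xn) for each row of X."""
    return sigmoid(X @ weights + bias)

# Example: two samples with two features each
X = np.array([[0.5, 1.2], [2.0, -0.3]])
weights = np.array([0.8, -0.4])   # b1, b2
bias = 0.1                        # b0
print(predict_proba(X, weights, bias))
```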
2. Define the Loss Function
An L2 loss function such as the least squared error will do the job:

$$L = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - p_i\right)^2$$

where $m$ is the number of training samples, $y_i$ is the true label, and $p_i$ is the predicted probability for sample $i$. (In practice, log loss / cross-entropy is more commonly used for logistic regression because it is convex in the weights, but least squares keeps this walkthrough simple.)
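A minimal sketch of this loss in Python, building on the hypothetical `predict_proba` above:

```python
def l2_loss(y_true, y_pred):
    """Mean squared error between true labels (0/1) and predicted probabilities."""
    return np.mean((y_true - y_pred) ** 2)

# Example with the predictions from the previous snippet
y = np.array([1, 0])
print(l2_loss(y, predict_proba(X, weights, bias)))
```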
3. Utilize the Gradient Descent Algorithm
You might know that the partial derivatives of a function are equal to 0 at its minimum. Gradient descent uses this idea to estimate the parameters (or weights) of our model by minimizing the loss function: it repeatedly nudges the weights in the direction that decreases the loss until the gradient is close to zero. For simplicity, consider a single feature $x$ with weights $b_0$ and $b_1$; the same steps extend to more features. The procedure (the corresponding update equations follow this list) is:

1. Initialize the weights $b_0$ and $b_1$ (for example, to zeros).
2. Calculate the partial derivatives of the loss with respect to $b_0$ and $b_1$.
3. Update the weights, i.e. the values of $b_0$ and $b_1$, and repeat from step 2 until convergence.
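For the L2 loss above with $p_i = \sigma(b_0 + b_1x_i)$, applying the chain rule (using the sigmoid's property $\sigma'(z) = \sigma(z)(1-\sigma(z))$) gives:

$$\frac{\partial L}{\partial b_0} = -\frac{2}{m}\sum_{i=1}^{m}(y_i - p_i)\,p_i(1-p_i) \qquad \frac{\partial L}{\partial b_1} = -\frac{2}{m}\sum_{i=1}^{m}(y_i - p_i)\,p_i(1-p_i)\,x_i$$

With a learning rate $\alpha$, each update is then $b_j \leftarrow b_j - \alpha\,\frac{\partial L}{\partial b_j}$.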
4. Python Implementation
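The original embedded implementation is not reproduced here, so the following is a minimal self-contained sketch that follows the steps above for a single feature (function names such as `train_logistic_regression` are our own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(x, y, lr=0.1, n_iters=5000):
    """Fit b0, b1 by gradient descent on the L2 loss, for a single feature x."""
    b0, b1 = 0.0, 0.0                      # step 1: initialize the weights
    m = len(x)
    for _ in range(n_iters):
        p = sigmoid(b0 + b1 * x)           # current predicted probabilities
        # step 2: partial derivatives of the L2 loss (chain rule through sigmoid)
        error = y - p
        grad_b0 = -2.0 / m * np.sum(error * p * (1 - p))
        grad_b1 = -2.0 / m * np.sum(error * p * (1 - p) * x)
        # step 3: update the weights and repeat
        b0 -= lr * grad_b0
        b1 -= lr * grad_b1
    return b0, b1

def predict(x, b0, b1, threshold=0.5):
    """Classify as 1 when the predicted probability exceeds the threshold."""
    return (sigmoid(b0 + b1 * x) >= threshold).astype(int)

# Toy data: larger x values are more likely to be labeled 1
x = np.array([0.5, 1.0, 1.5, 3.0, 3.5, 4.0])
y = np.array([0, 0, 0, 1, 1, 1])

b0, b1 = train_logistic_regression(x, y)
print("b0 =", b0, "b1 =", b1)
print("predictions:", predict(x, b0, b1))
```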