Classification in Action

Interact with sample code to explore gradient descent in action.


Final binary classifier

Here’s our final binary classification code:

  • Load the data file (police.txt)
  • Learn from it
  • Come up with a bunch of classifications
# A binary classifier (main.py).
import numpy as np

# The logistic (sigmoid) function squashes any real number into (0, 1)
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Forward propagation: turn each row of inputs into a probability
def forward(X, w):
    weighted_sum = np.matmul(X, w)
    return sigmoid(weighted_sum)

# Round each probability to the nearest class: 0 or 1
def classify(X, w):
    return np.round(forward(X, w))

# Compute the log loss of the current predictions
def loss(X, Y, w):
    y_hat = forward(X, w)
    first_term = Y * np.log(y_hat)
    second_term = (1 - Y) * np.log(1 - y_hat)
    return -np.average(first_term + second_term)

# Gradient of the log loss with respect to the weights
def gradient(X, Y, w):
    return np.matmul(X.T, (forward(X, w) - Y)) / X.shape[0]

# Train with gradient descent, logging the loss every 2,000 iterations
def train(X, Y, iterations, lr):
    w = np.zeros((X.shape[1], 1))
    for i in range(iterations):
        if i % 2000 == 0 or i == iterations - 1:
            print("Iteration %4d => Loss: %.20f" % (i, loss(X, Y, w)))
        w -= gradient(X, Y, w) * lr
    return w

# Run inference and report how many examples were classified correctly
def test(X, Y, w):
    total_examples = X.shape[0]
    correct_results = np.sum(classify(X, w) == Y)
    success_percent = correct_results * 100 / total_examples
    print("\nSuccess: %d/%d (%.2f%%)" %
          (correct_results, total_examples, success_percent))

# Prepare data: prepend a bias column of ones, reshape labels into a column
x1, x2, x3, y = np.loadtxt("police.txt", skiprows=1, unpack=True)
X = np.column_stack((np.ones(x1.size), x1, x2, x3))
Y = y.reshape(-1, 1)
w = train(X, Y, iterations=10000, lr=0.001)

# Test the trained model
test(X, Y, w)
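The np.loadtxt() call above skips one header line and unpacks four whitespace-separated columns: three input variables and a binary label. As a rough sketch of the expected layout, police.txt might look something like this (the header names and numbers here are hypothetical, chosen only to illustrate the format):

Reservations  Temperature  Tourists  Police
13            26           9         1
2             14           6         0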

As we move from linear regression to classification, most functions in our program have to change. We have a brand-new sigmoid() function, and the old predict() has split into two separate functions: forward(), which returns probabilities during training, and classify(), which rounds those probabilities into hard class labels.
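To see how these three pieces fit together, here is a minimal standalone sketch (the inputs and weights are made up for illustration): sigmoid() maps any weighted sum into the range (0, 1), forward() interprets that value as a probability, and classify() rounds it to a 0/1 label.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy example: two rows of inputs and arbitrary weights
X = np.array([[1.0, 2.0], [1.0, -3.0]])   # each row: [bias, feature]
w = np.array([[0.5], [1.0]])              # arbitrary weights for illustration

probabilities = sigmoid(np.matmul(X, w))  # forward(): values in (0, 1)
labels = np.round(probabilities)          # classify(): hard 0/1 labels
print(probabilities)                      # roughly [[0.92], [0.08]]
print(labels)                             # [[1.], [0.]]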

We also change the formula that we use to calculate the loss and its gradient: instead of the mean squared error, we use the log loss. As a result, we have brand-new loss() and gradient() functions.
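To get a feel for why the log loss suits classification, here is a small sketch (the predictions and labels are made up): a confident correct prediction contributes a loss near zero, while a confident wrong prediction is penalized heavily.

import numpy as np

def log_loss(y_hat, Y):
    return -np.average(Y * np.log(y_hat) + (1 - Y) * np.log(1 - y_hat))

Y = np.array([[1.0], [0.0]])                  # true labels

confident_right = np.array([[0.99], [0.01]])  # close to the true labels
confident_wrong = np.array([[0.01], [0.99]])  # far from the true labels

print(log_loss(confident_right, Y))           # small loss, about 0.01
print(log_loss(confident_wrong, Y))           # large loss, about 4.6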
