
Invasion of the Sigmoids

Explore how logistic regression adapts linear regression to binary classification by using the sigmoid function to constrain outputs between 0 and 1. Learn how to implement the sigmoid in Python, understand forward propagation, and see why log loss is the cost function that gradient descent optimizes. This lesson equips you with the foundational knowledge to build and train binary classifiers effectively.

Overview of sigmoids

Even though linear regression is not a natural fit for binary classification, that does not mean that we have to scrap our linear regression code and start from scratch. Instead, we can adapt our existing algorithm to this new problem using a technique that statisticians call logistic regression.

Let’s start by looking back at $\hat{y}$, the weighted sum of the inputs that we introduced in the lesson: Adding More Dimensions.

$$\hat{y} = x_1 w_1 + x_2 w_2 + x_3 w_3 + \ldots$$
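To make this concrete, here is a minimal sketch of that weighted sum in Python, assuming NumPy; the names `x`, `w`, and `weighted_sum` are illustrative, not from the lesson's own code:

```python
import numpy as np

def weighted_sum(x, w):
    """Compute y_hat = x1*w1 + x2*w2 + x3*w3 + ... as a dot product."""
    return np.dot(x, w)

x = np.array([0.5, 1.2, -0.3])  # one sample with three input features
w = np.array([0.8, -0.4, 1.5])  # one weight per input
print(weighted_sum(x, w))       # roughly -0.53: unbounded, can be any real number
```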

In linear regression, $\hat{y}$ could take any value. Binary classification, however, imposes a tight constraint: $\hat{y}$ must not drop below $0$ nor rise above $1$. Here’s an idea: maybe we can find a function that wraps around the weighted sum and constrains it to the range between $0$ and $1$.
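That wrapping function is the sigmoid named in the lesson's title. As a preview, here is a minimal sketch in Python of the logistic sigmoid, $\sigma(z) = \frac{1}{1 + e^{-z}}$, which squashes any real-valued $\hat{y}$ into the open interval $(0, 1)$; the function name `sigmoid` is illustrative:

```python
import numpy as np

def sigmoid(z):
    """Map any real number into the open interval (0, 1)."""
    return 1 / (1 + np.exp(-z))

# Large negative inputs approach 0, large positive inputs approach 1:
for z in (-10.0, -1.0, 0.0, 1.0, 10.0):
    print(z, sigmoid(z))
```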