
Optimizing the Perceptron Output

Learn how the perceptron output is optimized so that the model finds the best possible boundary between the two classes.

The perceptron trick

To find the best possible boundary, the perceptron algorithm predicts the output, compares it with the actual output, and learns the optimal weights for the best-fit line that separates the two classes.

📝 **Question: How does the model learn?**

The model learns over several iterations until it finds the best possible boundary separating the two classes. An initial boundary is drawn and its error is computed. In each iteration, the boundary line moves in the direction that reduces the error. This process continues until the error falls below a certain threshold. A minimal code sketch of this loop appears after the quiz below.

The following illustration will help you visualize this:

Quiz

1. What did you observe in the illustration above? Are we moving the point closer to the line, or the line closer to the misclassified point?

A) The line closer to the misclassified point

B) The misclassified point closer to the line

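The weight update behind this behavior is often called the perceptron trick: when a point is misclassified, the weights and bias are nudged toward that point, which pulls the line closer to it. The Python sketch below is one illustrative implementation of that idea; the learning rate, epoch count, and toy data are assumptions made for demonstration, not the course's reference code.

```python
import numpy as np

def step(z):
    """Step activation: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def train_perceptron(X, y, learning_rate=0.1, epochs=100):
    """Learn weights w and bias b with the perceptron trick:
    nudge the boundary toward each misclassified point."""
    w = np.zeros(X.shape[1])   # initial boundary: w1*x1 + w2*x2 + b = 0
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x_i, y_i in zip(X, y):
            y_pred = step(np.dot(w, x_i) + b)
            error = y_i - y_pred              # +1, -1, or 0
            if error != 0:
                w += learning_rate * error * x_i   # move the line toward the point
                b += learning_rate * error
                errors += 1
        if errors == 0:                        # stop once all points are classified correctly
            break
    return w, b

# Illustrative toy data: two linearly separable classes
X = np.array([[1.0, 1.0], [2.0, 2.5], [0.5, 3.0], [3.0, 0.5]])
y = np.array([0, 1, 1, 0])
w, b = train_perceptron(X, y)
print("weights:", w, "bias:", b)
```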

Predict the output

Recall the perceptron equation for making a boundary line:

w_1x_1 + w_2x_2 + b = 0

In the case of a step function, the prediction is given by:

y' = \begin{cases} 1 & \text{if } w_1x_1 + w_2x_2 + \dots + w_nx_n + b \ge 0 \\ 0 & \text{otherwise} \end{cases}
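As a quick sketch, this step-function prediction can be coded directly from the formula; the weights, bias, and input below are hypothetical values chosen only to show the computation:

```python
import numpy as np

def predict_step(x, w, b):
    """Return 1 if w·x + b >= 0, else 0 (step activation)."""
    return 1 if np.dot(w, x) + b >= 0 else 0

# Hypothetical weights, bias, and input
w = np.array([0.4, -0.7])
b = 0.1
x = np.array([1.0, 2.0])
print(predict_step(x, w, b))  # w·x + b = 0.4 - 1.4 + 0.1 = -0.9 -> prints 0
```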

In the case of a sigmoid function:

y' = \begin{cases} 1 & \text{if } \sigma(w_1x_1 + w_2x_2 + \dots + w_nx_n + b) \ge t \\ 0 & \text{otherwise} \end{cases}

where t is the classification threshold.
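A similar sketch for the sigmoid case follows; the threshold t = 0.5 and the weights, bias, and input are illustrative assumptions, not values from the lesson:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation: squashes z into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_sigmoid(x, w, b, t=0.5):
    """Return 1 if sigma(w·x + b) >= t, else 0."""
    return 1 if sigmoid(np.dot(w, x) + b) >= t else 0

# Hypothetical weights, bias, and input
w = np.array([0.4, -0.7])
b = 0.1
x = np.array([1.0, 2.0])
print(sigmoid(np.dot(w, x) + b))   # ~0.289, the "confidence" of class 1
print(predict_sigmoid(x, w, b))    # 0.289 < 0.5 -> prints 0
```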

Perceptron forward propagation operation using the step activation function