Bias in neural networks is a value that shifts the line of the perceptron's equation, making the perceptron more flexible in real-world scenarios.
To understand this, we first need to know about the internal workings of a perceptron.
The following steps describe the training phase of a perceptron:
1. The data set is divided into two parts: training data and testing data.

2. Each training input is multiplied by its associated weight and fed into the perceptron.

3. The summation equation is applied:

   $$z = \sum_{i=1}^{n} w_i x_i$$

   This can also be written as:

   $$z = w_1 x_1 + w_2 x_2 + \dots + w_n x_n$$

4. The activation function is applied to the weighted sum:

   $$\hat{y} = f(z)$$

   where $f$ is typically a step function that outputs $1$ if $z > 0$ and $0$ otherwise.
5. The predicted value $\hat{y}$ is compared with the original label $y$ of the input.

6. If the prediction is correct, the next set of inputs is fed into the perceptron. Otherwise, the weights are updated according to the following equation:

   $$w_i = w_i + \eta \, (y - \hat{y}) \, x_i$$

   Here, $\eta$ is the learning rate, $y$ is the original label, $\hat{y}$ is the predicted value, and $x_i$ is the $i^{\text{th}}$ input.

7. Steps 2–6 are repeated until the weights no longer change after a full pass over the training data set.
The testing data is then used to compare the perceptron's predicted values with the inputs' original labels, and this comparison gives the accuracy of the perceptron. A minimal sketch of this procedure is shown below.
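The following Python sketch puts the steps above together, assuming a NumPy environment. The class name `Perceptron`, the learning rate `eta`, the epoch cap `n_epochs`, and the toy data set are illustrative choices for this example, not part of a fixed implementation.

```python
import numpy as np

# A minimal perceptron with a step activation, trained with the update rule
# w_i = w_i + eta * (y - y_hat) * x_i from step 6 above.


def step(z):
    """Step activation: output 1 if the weighted sum is positive, else 0."""
    return 1 if z > 0 else 0


class Perceptron:
    def __init__(self, n_inputs, eta=0.1, n_epochs=100):
        self.w = np.zeros(n_inputs)  # weights (no bias term yet)
        self.eta = eta               # learning rate
        self.n_epochs = n_epochs     # maximum passes over the training data

    def predict(self, x):
        z = np.dot(self.w, x)        # step 3: weighted sum
        return step(z)               # step 4: activation

    def fit(self, X, y):
        for _ in range(self.n_epochs):
            changed = False
            for x_i, y_i in zip(X, y):
                y_hat = self.predict(x_i)              # steps 2-4
                if y_hat != y_i:                       # steps 5-6
                    self.w += self.eta * (y_i - y_hat) * x_i
                    changed = True
            if not changed:                            # step 7: weights stable
                break
        return self


# Step 1: the data is split into training and testing parts (hard-coded here).
X_train = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y_train = np.array([1, 1, 0, 0])
X_test = np.array([[3.0, 1.0], [-3.0, -1.0]])
y_test = np.array([1, 0])

model = Perceptron(n_inputs=2).fit(X_train, y_train)
predictions = [model.predict(x) for x in X_test]
print("Test accuracy:", float(np.mean(np.array(predictions) == y_test)))
```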
A typical perceptron equation can be shown as:

$$y = \sum_{i=1}^{n} w_i x_i$$

Another representation of the perceptron's equation is the equation of a line, with the weight acting as the slope $m$:

$$y = mx$$

From the equations above, the element of linearity is visible, and the line of the perceptron will always pass through the origin. This decreases the accuracy of the classification of the data.
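To make this concrete, consider a perceptron with two inputs. Its decision boundary is the set of points where the weighted sum is zero:

$$w_1 x_1 + w_2 x_2 = 0 \quad\Longrightarrow\quad x_2 = -\frac{w_1}{w_2}\, x_1$$

Whatever values the weights take, the point $(0, 0)$ satisfies this equation, so the boundary is forced through the origin.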
Each line in the graph above represents a perceptron equation, while "class" denotes which label the data point belongs to in the data set. It can be seen that both lines misclassify some of the data. Bias is introduced into the equation to increase the accuracy of the perceptron.
Bias acts as the y-intercept in the equation of a line, making the classifier (perceptron) much more flexible and resulting in higher accuracy.
The equation with bias can be shown as:

$$y = \sum_{i=1}^{n} w_i x_i + b$$

Here, $b$ is the bias.

The equation can also be written as:

$$y = mx + c$$

where the bias acts like the y-intercept $c$, so the line is no longer forced to pass through the origin.
As seen in the graph above, both lines now classify the data correctly. Bias introduces flexibility, so the perceptron can place its line with much more versatility, improving the accuracy. Comparing the two graphs shows that classification with bias gives a far better outcome.
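As a rough numerical illustration of this difference, the sketch below trains a perceptron on a one-dimensional data set whose true boundary lies at $x = 2$, once without a bias term and once with one. The helper names `train_perceptron` and `accuracy`, the learning rate, and the toy data are assumptions made only for this example.

```python
import numpy as np

# Compare a perceptron without bias to one with bias on a 1-D data set whose
# true boundary is x = 2 (label 1 when x > 2).


def train_perceptron(X, y, use_bias, eta=0.1, n_epochs=500):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_epochs):
        changed = False
        for x_i, y_i in zip(X, y):
            z = np.dot(w, x_i) + (b if use_bias else 0.0)
            y_hat = 1 if z > 0 else 0
            if y_hat != y_i:
                w += eta * (y_i - y_hat) * x_i
                if use_bias:
                    # The bias is updated like a weight on a constant input of 1.
                    b += eta * (y_i - y_hat)
                changed = True
        if not changed:  # stop once a full pass makes no updates
            break
    return w, b


def accuracy(w, b, X, y):
    preds = [1 if np.dot(w, x) + b > 0 else 0 for x in X]
    return float(np.mean(np.array(preds) == y))


X = np.array([[0.5], [1.0], [1.5], [2.5], [3.0], [3.5]])
y = np.array([0, 0, 0, 1, 1, 1])  # class 1 only when x > 2

w_no_bias, _ = train_perceptron(X, y, use_bias=False)
w_bias, b = train_perceptron(X, y, use_bias=True)

# Without bias the boundary is forced through x = 0, so it cannot separate
# these classes; with bias the boundary can shift to sit between 1.5 and 2.5.
print("Accuracy without bias:", accuracy(w_no_bias, 0.0, X, y))
print("Accuracy with bias:   ", accuracy(w_bias, b, X, y))
```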