
Set Up a Learning Rate in the Training Classifier

Learn how the learning rate is used when training a classifier.

We'll cover the following...

Fixing the problem

Unfortunately, we can’t completely rely on the very last training example. This is an important idea in machine learning. To improve the results, we moderate the updates. This means that instead of jumping enthusiastically to each new $A$, we take a fraction of the change $\Delta A$. That way, we move in the direction that the training example suggests, but we do so cautiously and keep some of the previous value, which we potentially arrived at through many training iterations. We saw this idea of moderating our refinements before with the simpler miles to kilometers predictor, where we nudged the parameter $c$ by just a fraction of the actual error.
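
As a minimal sketch of that earlier moderation idea, the Python loop below nudges a conversion parameter c by only a fraction of each error. The starting guess, the example data, and the fraction are illustrative choices, not values from the lesson.

```python
# Moderated updates, echoing the earlier miles-to-kilometres predictor.
# All numbers here are illustrative; the true constant is about 1.60934.

c = 0.5             # initial guess for the parameter c
fraction = 0.5      # apply only half of each suggested correction

for miles, target_km in [(100.0, 160.934), (50.0, 80.467)]:
    predicted_km = c * miles
    error = target_km - predicted_km
    # Nudge c by only a fraction of the full correction (error / miles),
    # keeping most of the previously learned value.
    c += fraction * (error / miles)
    print(f"error={error:8.3f}  updated c={c:.4f}")
```

Each pass moves c toward the true constant without letting any single example overwrite what earlier examples taught us.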

Figure: Refining the predicting line

Moderation has another very powerful and useful side effect. When the training data itself can’t be trusted to be perfectly accurate and contains errors or noise, both of which are normal in real-world measurements, moderation dampens the impact of those errors and noise. It smooths them out. Let’s rerun our calculation, but this time we’ll add moderation to the update formula:

$\Delta A = L \, (E / x)$

The moderating factor is often called a learning rate, and we’ve called it $L$. Let’s pick $L = 0.5$ as a reasonable fraction just to get started. It means we only update half as much as we would have done without moderation.

Running through it again, we have an initial $A = 0.25$ ...
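
As a hedged sketch of that run-through, the loop below applies the moderated update $\Delta A = L \, (E / x)$ with the stated starting values $A = 0.25$ and $L = 0.5$. The two (x, target) pairs are hypothetical stand-ins, not necessarily the lesson’s actual training examples.

```python
# Moderated update rule from this section: ΔA = L * (E / x),
# starting from A = 0.25 with learning rate L = 0.5.
# The training pairs below are hypothetical stand-ins.

A = 0.25    # initial slope of the dividing line y = A * x
L = 0.5     # learning rate: keep only half of each suggested change

for x, target in [(3.0, 1.1), (1.0, 2.9)]:
    y = A * x                 # the line's current output at this x
    E = target - y            # error against the desired output
    delta_A = L * (E / x)     # moderated update, ΔA = L (E / x)
    A += delta_A
    print(f"E={E:+.4f}  ΔA={delta_A:+.4f}  new A={A:.4f}")
```

Because $L = 0.5$, each applied change is half of the full correction $E / x$, so the line still moves toward every training example without being dragged all the way to the last one.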