Set Up a Learning Rate When Training a Classifier

Learn how a learning rate is used when training a classifier.

Fixing the problem

Unfortunately, we can’t rely entirely on the very last training example. This is an important idea in machine learning. To improve the results, we moderate the updates: instead of jumping enthusiastically to each new A, we take only a fraction of the change ΔA. That way, we move in the direction the training example suggests, but cautiously, keeping some of the previous value, which we potentially arrived at through many training iterations. We saw this idea of moderating our refinements before with the simpler miles-to-kilometers predictor, where we nudged the parameter c by just a fraction of the actual error.
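The moderated update can be sketched in a few lines of code. This is a minimal illustration, not the lesson's exact implementation: the miles-to-kilometers data, the function names, and the starting values here are assumptions chosen for the example. The key line is where the parameter moves only a fraction (the learning rate) of the suggested change, rather than jumping straight to the value the latest example implies.

```python
def train(examples, learning_rate=0.5, epochs=10):
    """Fit y = c * x by nudging c a fraction of each suggested change."""
    c = 1.0  # initial guess for the slope parameter (illustrative)
    for _ in range(epochs):
        for x, target in examples:
            prediction = c * x
            error = target - prediction
            # Jumping fully would set c = target / x in one step;
            # instead we apply only a fraction of that change,
            # so earlier training examples still influence c.
            delta_c = error / x
            c += learning_rate * delta_c
    return c

# Illustrative miles-to-kilometers examples (1 mile is about 1.609 km)
data = [(10, 16.09), (25, 40.23), (60, 96.56)]
c = train(data)
```

With a learning rate of 0.5, each update closes only half the remaining gap, so `c` settles near 1.609 over the epochs instead of oscillating to match whichever example came last.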
