Review of the Deep Network
Compare and visualize the loss of neural network models for training versus validation sets.
Let’s get back to our deep network and the Echidna dataset. We have learned three important concepts so far:
- Powerful neural networks tend to overfit.
- Simple neural networks tend to underfit.
- We should strike a balance between the two.
A general strategy for striking that balance is to start with an overfitting model function that tracks tiny fluctuations in the data, and then progressively make it smoother until we hit a good middle ground. That idea of smoothing out the model function is called regularization, and it is the subject of this lesson.
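One common way to smooth a model function is L2 regularization: add a penalty proportional to the squared weights to the loss, so gradient descent favors smaller weights and hence a smoother function. The following is a minimal sketch of that idea, not the course's code; the function name and the value of `lam` are illustrative assumptions:

```python
import numpy as np

def regularized_loss(base_loss, weights, lam=0.01):
    # lam (lambda) is a hypothetical hyperparameter: larger values
    # penalize big weights more strongly, smoothing the model further.
    penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return base_loss + penalty

# Toy example: two small weight matrices and an unregularized loss of 1.2.
weights = [np.array([[3.0, -2.0]]), np.array([[0.5]])]
print(regularized_loss(1.2, weights))
```

Tuning `lam` is how we move along the spectrum from overfitting (`lam = 0`) toward underfitting (very large `lam`).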
In the previous chapter, we took the first step of the process: we created a deep neural network that overfits the data at hand. Let’s take a closer look at that network’s model, and afterward, we’ll see how to make it smoother.
To gain more insight into overfitting, we'll make a few changes to the deep neural network from the previous chapter. Here's the updated training code:
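The course's actual training code runs in its own environment. As a stand-in, here is a self-contained NumPy sketch of the same idea: a deliberately powerful network trained on a synthetic noisy 2-D dataset (an assumed substitute for the Echidna data), with separate training and validation losses to compare. All names and hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noisy binary-classification data, split into training
# and validation sets so we can compare the two losses.
X = rng.uniform(-1, 1, size=(300, 2))
y = (np.sin(3 * X[:, 0]) + rng.normal(0, 0.3, 300) > X[:, 1])
y = y.astype(float).reshape(-1, 1)
X_train, y_train = X[:200], y[:200]
X_val, y_val = X[200:], y[200:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A powerful network for a 2-D problem: two wide hidden layers,
# giving it enough capacity to track tiny fluctuations in the data.
sizes = [2, 50, 50, 1]
W = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]

def forward(X):
    activations = [X]
    for Wi, bi in zip(W, b):
        activations.append(sigmoid(activations[-1] @ Wi + bi))
    return activations

def log_loss(X, y):
    p = forward(X)[-1]
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

initial_loss = log_loss(X_train, y_train)

lr = 0.5
for epoch in range(2000):
    acts = forward(X_train)
    # For a sigmoid output with cross-entropy loss, the gradient with
    # respect to the output pre-activation is (prediction - label).
    delta = acts[-1] - y_train
    for i in reversed(range(len(W))):
        grad_W = acts[i].T @ delta / len(X_train)
        grad_b = delta.mean(axis=0, keepdims=True)
        delta = (delta @ W[i].T) * acts[i] * (1 - acts[i])  # backprop through sigmoid
        W[i] -= lr * grad_W
        b[i] -= lr * grad_b

print("training loss:  ", log_loss(X_train, y_train))
print("validation loss:", log_loss(X_val, y_val))
```

Because the network has far more capacity than this small dataset warrants, the training loss typically drops well below the validation loss, which is the overfitting gap this lesson sets out to close.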