Representational Learning
Learn about representational learning, signal decomposition, denoising autoencoders, and semantic compression in detail.
Feedforward neural networks implement transformations, or mapping functions, from an input space to a latent space and from there to an output space. The latent space is spanned by the neurons between the input nodes and the output nodes, which are sometimes called hidden neurons. In our programs we can always observe the activity of these nodes, so they are not really hidden. All the weights are learned from the data, so the transformations implemented by the neural network are learned from examples.
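As a concrete illustration, here is a minimal sketch of such a network in PyTorch (the layer sizes and the `FeedforwardNet` name are illustrative, not from the lesson). It returns the latent activations alongside the output, showing that the "hidden" neurons are fully observable:

```python
import torch
import torch.nn as nn

# A feedforward network mapping input space -> latent space -> output space.
class FeedforwardNet(nn.Module):
    def __init__(self, n_in=4, n_latent=8, n_out=3):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_latent)   # input -> latent
        self.output = nn.Linear(n_latent, n_out)  # latent -> output

    def forward(self, x):
        h = torch.tanh(self.hidden(x))  # activity of the "hidden" neurons
        y = self.output(h)
        return y, h  # return the latent activity so we can observe it

net = FeedforwardNet()
x = torch.randn(1, 4)          # a single example from the input space
y, h = net(x)
print("latent activity:", h)   # not hidden at all: we can inspect it directly
print("output:", y)
```

Returning the latent activations from `forward` is just one way to expose them; forward hooks would work equally well.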
However, we can use the architecture to guide these transformations. The latent representations should be learned so that the final classification in the last layer is much easier than it would be from the raw sensory space. The network, and hence the representation it encodes, should also make generalization to previously unseen examples easy and robust. It is useful to pause here and discuss representations.