Enhancing Autoencoder Models
Discover how to create well-posed autoencoders with orthonormal weights and sparse feature covariances.
Autoencoders have proven useful for unsupervised and semi-supervised learning. Earlier, in the autoencoder family, we presented a variety of ways to model autoencoders. Still, there's significant room for new development.
This lesson is intended for researchers pursuing such developments.
Well-posed autoencoders
A mathematically well-posed autoencoder is easier to tune and optimize. Its structure can be defined from its relationship with principal component analysis (PCA).
A linearly activated autoencoder approximates PCA. Conversely, an autoencoder can be viewed as a nonlinear extension of PCA: it extends PCA to a nonlinear space. An autoencoder should therefore ideally inherit the properties of PCA. These properties are:
- Orthonormal weights: The encoder weights should satisfy the following condition:
...
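As a sketch of how orthonormality can be encouraged in practice, one common approach is to add a Frobenius-norm penalty $\lVert W^\top W - I \rVert_F^2$ to the reconstruction loss, which is zero exactly when the encoder's weight columns are orthonormal. The helper below, `orthonormality_penalty`, is an illustrative construction under that assumption, not code from this lesson:

```python
import numpy as np

def orthonormality_penalty(W):
    """Frobenius-norm penalty ||W^T W - I||_F^2 (illustrative helper).

    The penalty is zero if and only if the columns of W are orthonormal.
    Adding it (scaled by a small coefficient) to the reconstruction loss
    nudges the encoder weights toward the PCA-like property above.
    """
    k = W.shape[1]
    gram = W.T @ W  # k x k Gram matrix of the encoder columns
    return float(np.sum((gram - np.eye(k)) ** 2))

# A matrix with orthonormal columns (from a QR decomposition)
# incurs essentially zero penalty, while a random matrix does not.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 4)))
print(round(orthonormality_penalty(Q), 6))  # → 0.0
print(orthonormality_penalty(rng.standard_normal((8, 4))) > 0.0)  # → True
```

In a training loop, this penalty would typically be weighted by a small hyperparameter and added to the autoencoder's reconstruction loss.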