Diverse Autoencoders and Their Applications

Explore various autoencoders, from undercomplete to sparse, and their unique applications in data processing and feature learning.

There are several types of autoencoders. The table below summarizes the properties of the most common autoencoders. The rest of this lesson briefly describes them along with their applications.

Table: Types of autoencoders and their properties

Undercomplete autoencoders

An undercomplete autoencoder has a smaller encoding dimension than the input. A simple example of such an autoencoder is

$$X_{\cdot\times p} \rightarrow \underbrace{f(X_{\cdot\times p}W^{(e)}_{p\times k})}_{\text{encoder}} \rightarrow Z_{\cdot\times k} \rightarrow \underbrace{g(Z_{\cdot\times k}W^{(d)}_{k\times p})}_{\text{decoder}} \rightarrow \hat{X}_{\cdot\times p}.$$

Here, the input $X$ and the encoding $Z$ are $p$- and $k$-dimensional, respectively, and $k < p$.

By learning a smaller representation, an undercomplete autoencoder captures the most salient features of the data. Training amounts to minimizing a loss

$$L\left(x,\; g(f(xW^{(e)})W^{(d)})\right),$$

where $L$ is a loss function, for example mean squared error, that penalizes dissimilarity between $x$ and $\hat{x} = g(f(xW^{(e)})W^{(d)})$. Undercomplete autoencoders are the most common type, perhaps because of their roots in PCA (principal component analysis). A linearly activated (single- or multilayer) undercomplete autoencoder reduces to

$$X_{\cdot\times p} \rightarrow X_{\cdot\times p}W^{(e)}_{p\times k} \rightarrow Z_{\cdot\times k} \rightarrow Z_{\cdot\times k}W^{(d)}_{k\times p} \rightarrow \hat{X}_{\cdot\times p},$$

which is the same as PCA if $W^{(e)}$ ...
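As a toy illustration of the PCA connection (a sketch with synthetic data, not code from this lesson), the following NumPy snippet trains a linear undercomplete autoencoder by gradient descent on the mean squared error and compares its reconstruction error with PCA's optimal rank-$k$ reconstruction; all dimensions and learning-rate choices here are illustrative assumptions:

```python
import numpy as np

# Toy data: n samples of a p-dimensional input, encoded into k < p dimensions.
rng = np.random.default_rng(0)
n, p, k = 200, 5, 2
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)                       # center the data, as PCA assumes

# Encoder weights W^(e) (p x k) and decoder weights W^(d) (k x p),
# small random initialization.
W_e = rng.normal(scale=0.1, size=(p, k))
W_d = rng.normal(scale=0.1, size=(k, p))

lr = 0.05
for _ in range(3000):
    Z = X @ W_e                              # encoding, n x k
    err = Z @ W_d - X                        # reconstruction error X_hat - X
    grad_Wd = (Z.T @ err) / n                # gradient of 0.5 * MSE w.r.t. W^(d)
    grad_We = (X.T @ (err @ W_d.T)) / n      # gradient of 0.5 * MSE w.r.t. W^(e)
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

mse_ae = np.mean((X - (X @ W_e) @ W_d) ** 2)

# PCA's rank-k reconstruction, via the top-k right singular vectors of X.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = X @ Vt[:k].T @ Vt[:k]
mse_pca = np.mean((X - X_pca) ** 2)
```

By the Eckart-Young theorem, PCA's rank-$k$ reconstruction is the best any linear encoder-decoder pair can achieve, so the trained autoencoder's error approaches, but never beats, `mse_pca`.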