...

Advanced Autoencoders for Sequence and Image Data

Discover how to leverage LSTMs and convolutional autoencoders for complex data structures like sequences and images.

A dense autoencoder encodes flat data, but samples sometimes arrive as sequences, time series, or images. Constructs such as LSTM and convolutional layers can be used to build autoencoders for these data structures. Their construction, however, differs only slightly from that of a dense autoencoder.

LSTM autoencoders

An LSTM autoencoder can be used for sequence reconstruction, sequence-to-sequence translation, or feature extraction for other tasks. A simple sparse LSTM autoencoder is constructed in the code below and can be modified based on the objective.

# LSTM Autoencoder Model

import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Dense, Flatten, Reshape
from tensorflow.keras.models import Model

# TIMESTEPS and N_FEATURES are set by the input data
# (window length and number of variables per time step).

# Encoder
inputs = Input(shape=(TIMESTEPS, N_FEATURES),
               name='encoder-input')

x = LSTM(units=N_FEATURES,
         activation='tanh',
         return_sequences=True,
         name='encoder-lstm')(inputs)

# Shape info needed to build Decoder Model
e_shape = tf.keras.backend.int_shape(x)
latent_dim = e_shape[1] * e_shape[2]

# Generate the latent vector
x = Flatten(name='flatten')(x)
latent = Dense(units=latent_dim,
               activation='linear',
               activity_regularizer=tf.keras.regularizers.L1(l1=0.01),
               name='encoded-vector')(x)

# Instantiate Encoder Model
encoder = Model(inputs=inputs,
                outputs=latent,
                name='encoder')
encoder.summary()

# Decoder
latent_inputs = Input(shape=(latent_dim,),
                      name='decoder_input')

x = Reshape((e_shape[1], e_shape[2]),
            name='reshape')(latent_inputs)

x = LSTM(units=N_FEATURES,
         activation='tanh',
         return_sequences=True,
         name='decoder-lstm')(x)

output = Dense(units=N_FEATURES,
               activation='linear',
               name='decoded-sequences')(x)

# Instantiate Decoder Model
decoder = Model(inputs=latent_inputs,
                outputs=output,
                name='decoder')
decoder.summary()

# Instantiate Autoencoder Model using Input and Output
autoencoder = Model(inputs=inputs,
                    outputs=decoder(encoder(inputs)),
                    name='autoencoder')
autoencoder.summary()
Implementing a sparse LSTM autoencoder
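
Once defined, the autoencoder is trained to reconstruct its own input, and the standalone encoder can then serve as a feature extractor for other tasks. The following is a minimal usage sketch, assuming a NumPy array of sequences shaped (n_samples, TIMESTEPS, N_FEATURES); random data is used here purely for illustration.

import numpy as np

# Illustrative dummy data; in practice this is the prepared sequence dataset
sequences = np.random.rand(256, TIMESTEPS, N_FEATURES).astype('float32')

# Train the autoencoder to reconstruct its input (input == target)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(sequences, sequences,
                epochs=20, batch_size=32, validation_split=0.1)

# Reuse the trained encoder to extract latent features for downstream tasks
latent_features = encoder.predict(sequences)  # shape: (n_samples, latent_dim)
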

In this model,

  • An overcomplete autoencoder is modeled.

  • The model is split into two modules:

...