Building Sparse Autoencoders

Learn to build sparse autoencoders for improved feature extraction and classification in anomaly detection tasks.

Sparse autoencoder construction

This lesson shows how a sparse autoencoder can learn useful features that improve downstream classification in rare event detection.

Here we construct the sparse autoencoder described earlier. The model is overcomplete in the sense that the encoding dimension is not smaller than the input dimension; without a sparsity penalty, such a model could trivially learn the identity mapping.

Sparsity regularization is imposed on the encodings by setting activity_regularizer=tf.keras.regularizers.L1(l1=0.01) on the encoder layer in the model construction below.
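To make the penalty concrete: for each batch, Keras's L1 activity regularizer adds l1 times the sum of the absolute encoding activations to the training loss, pushing individual activations toward exactly zero. The following NumPy sketch is purely illustrative (the array and function names are not part of the model code) and computes the same term by hand:

```python
import numpy as np

def l1_activity_penalty(encodings, l1=0.01):
    """L1 activity penalty: l1 * sum(|activations|).

    This is the term Keras adds to the loss each batch when
    activity_regularizer=tf.keras.regularizers.L1(l1=0.01) is set.
    """
    return l1 * np.sum(np.abs(encodings))

# A toy batch of encodings: 2 samples, 4 encoding units each.
batch = np.array([[0.0, 1.0, 0.0, 2.0],
                  [3.0, 0.0, 0.0, 0.0]])

penalty = l1_activity_penalty(batch)  # 0.01 * (1 + 2 + 3) = 0.06
```

Because the penalty grows with every nonzero activation, the cheapest way for the network to reduce it is to switch most encoding units off for any given input.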

# Sparse Autoencoder for Rare Event Detection
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.constraints import UnitNorm

input_dim = df_train_0_x_rescaled.shape[1]

# Encoder: same width as the input, with an L1 activity penalty
# on its outputs to encourage sparse encodings.
encoder = Dense(units=input_dim,
                activation='relu',
                input_shape=(input_dim,),
                use_bias=True,
                kernel_constraint=UnitNorm(axis=0),
                activity_regularizer=tf.keras.regularizers.L1(l1=0.01),
                name='encoder')

# Decoder: linear reconstruction of the input.
decoder = Dense(units=input_dim,
                activation='linear',
                use_bias=True,
                kernel_constraint=UnitNorm(axis=1),
                name='decoder')

sparse_autoencoder = Sequential([encoder, decoder])
sparse_autoencoder.summary()

sparse_autoencoder.compile(optimizer='adam',
                           loss='mean_squared_error',
                           metrics=['accuracy'])

# Train the model to reconstruct its own input: the rescaled
# negatively labeled (normal) training data is both x and y.
history = sparse_autoencoder.fit(x=df_train_0_x_rescaled,
                                 y=df_train_0_x_rescaled,
                                 batch_size=128,
                                 epochs=100,
                                 validation_data=(df_valid_0_x_rescaled,
                                                  df_valid_0_x_rescaled),
                                 verbose=1).history
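After training, a quick diagnostic is to check how sparse the learned encodings actually are, for example the fraction of activation values at (or near) zero. A minimal sketch, assuming encodings is the array produced by running the encoder on some data; here a small synthetic array stands in for real encoder outputs:

```python
import numpy as np

def sparsity_fraction(encodings, tol=1e-6):
    """Fraction of encoding activations that are (near) zero."""
    return float(np.mean(np.abs(encodings) <= tol))

# Synthetic stand-in for encoder outputs on a validation batch;
# with ReLU plus an L1 activity penalty, many entries end up at 0.
encodings = np.array([[0.0, 0.7, 0.0, 0.0],
                      [0.0, 0.0, 1.2, 0.0]])

print(sparsity_fraction(encodings))  # 0.75
```

A higher fraction indicates the L1 penalty is doing its job; if it is close to zero, the l1 coefficient may be too small relative to the reconstruction loss.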

Implementing a sparse autoencoder to derive predictive features

This ...