
Restricted Stateless LSTM Network for Baseline Modeling

Explore how to construct a restricted stateless LSTM network as a baseline model for sequential data prediction. This lesson guides you through input preparation, stacking LSTM layers with specific activations, adding a dense output layer, and compiling and fitting the model, offering hands-on experience with temporal pattern recognition using LSTMs.

It is always advisable to begin with a baseline model. Here, a restricted stateless LSTM network serves as the baseline: every LSTM layer is stateless, and the final LSTM layer has a restricted output, meaning it returns only the output of the last time step rather than the full sequence. In Keras terms, that is:

LSTM(..., stateful=False, return_sequences=False)
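
To make this concrete, the sketch below shows one way such a stacked network could look in Keras; the layer sizes, the placeholder shapes, and the single-unit dense output are illustrative assumptions rather than a prescribed configuration.

Python 3.8
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

# Placeholder shapes, chosen only for illustration.
TIMESTEPS, N_FEATURES = 30, 4

model = Sequential()
model.add(Input(shape=(TIMESTEPS, N_FEATURES)))
# Intermediate stateless LSTM layers return the full sequence,
# so the next LSTM layer still receives 3-D input.
model.add(LSTM(32, stateful=False, return_sequences=True))
# The final LSTM layer is "restricted": it returns only the last output.
model.add(LSTM(32, stateful=False, return_sequences=False))
model.add(Dense(1))

Setting return_sequences=True on every LSTM layer except the last is what makes stacking possible; only the final layer collapses the sequence to a single output vector.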

We’ll now proceed to build the baseline model, outlining each step in the process.

Input layer

An LSTM layer expects three-dimensional input, so the input shape should be:

(batch size, time-steps, features)

A stateless LSTM does not require the batch size to be specified explicitly, so only the time-step and feature dimensions are given, as the code below shows.

Python 3.8
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input

model = Sequential()
# Only (time-steps, features) is given; the batch size is inferred at fit time.
model.add(Input(shape=(TIMESTEPS, N_FEATURES), name='input'))

The above code creates an empty Sequential model and adds an input layer that accepts sequences of TIMESTEPS steps, each with N_FEATURES features; the batch dimension is left unspecified.

The input shape can alternatively be passed to the first LSTM layer through its input_shape argument, in which case a separate Input layer is not needed.
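
As a minimal sketch of that alternative, assuming an illustrative layer size of 32 and the same placeholder shape values as above:

Python 3.8
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

TIMESTEPS, N_FEATURES = 30, 4  # placeholder values for illustration

model = Sequential()
# input_shape on the first layer replaces the separate Input layer;
# the batch size again stays implicit.
model.add(LSTM(32, stateful=False, return_sequences=False,
               input_shape=(TIMESTEPS, N_FEATURES)))
model.add(Dense(1))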