...
Understanding LSTM Activations and Stabilized Gradients
Explore the key role of activations in LSTMs and how they influence the network's ability to process and remember information over time.
Activations in LSTM
The activations for computing the candidate cell state $\tilde{c}_t$ and for emitting the output $h_t$ correspond to the `activation` argument in an LSTM layer in TensorFlow. By default, it is `tanh`. These expressions act as learned features and, therefore, can take any value. With `tanh` activation, they are in $(-1, 1)$. Other suitable activations can also be used for them.
On the other hand, the activations for the input, output, and forget gates correspond to the `recurrent_activation` argument in TensorFlow. These gates act as scales and, therefore, are intended to stay in $(0, 1)$. Their default is, hence, `sigmoid`. For most purposes, it is essential to keep `recurrent_activation` as `sigmoid`.
Note: The `recurrent_activation` should be `sigmoid`. The default `activation` is `tanh` but can be set to other activations, such as `relu`.
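For concreteness, here is a minimal sketch (the layer size of 8 units is an arbitrary choice) of how these two arguments appear on a `tf.keras.layers.LSTM` layer:

```python
import tensorflow as tf

# Defaults: activation="tanh" for the candidate cell state and the
# emitted output, recurrent_activation="sigmoid" for the input,
# forget, and output gates.
default_lstm = tf.keras.layers.LSTM(units=8)

# The feature activation can be changed (e.g., to relu), but the gate
# activation is usually left as sigmoid so the gates stay in (0, 1).
relu_lstm = tf.keras.layers.LSTM(
    units=8,
    activation="relu",               # learned features: any range
    recurrent_activation="sigmoid",  # gates: scaling factors in (0, 1)
)
```

Swapping `activation` changes only the feature expressions; the gates keep their scaling behavior as long as `recurrent_activation` stays `sigmoid`.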
Parameters
Suppose an LSTM layer has $m$ cells, that is, the layer size is equal to $m$. The cell mechanism applies to one cell in an LSTM layer. The parameters involved in the cell are $w_j$, $u_j$, and $b_j$, where $j$ denotes the input, forget, and output gates and the cell state, that is, $j \in \{i, f, o, c\}$.
A cell takes in the prior output of all the other sibling cells in the layer. Given the layer size is $m$, the prior output from the layer's cells is an $m$-vector and, therefore, the recurrent weights $u_j$ are also of the same length $m$.
The weight $w_j$ for the input time step $x_t$ is a $p$-vector, given there are $p$ features, that is, $x_t \in \mathbb{R}^p$. Lastly, the bias $b_j$ on a cell is a scalar.
Combining them for each of $j \in \{i, f, o, c\}$, the total number of parameters in a cell is $4(m + p + 1)$.
In the LSTM layer, there are $m$ cells. Therefore, the total number of parameters in the layer is:
...
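As a quick check of this count, the sketch below (with an arbitrary choice of $p = 6$ features and $m = 16$ cells) compares $4m(m + p + 1)$ with the parameter count TensorFlow reports:

```python
import tensorflow as tf

p = 6   # number of input features (illustrative choice)
m = 16  # layer size, i.e., the number of LSTM cells (illustrative choice)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, p)),  # (time steps, features)
    tf.keras.layers.LSTM(units=m),
])

# Each cell has 4 * (m + p + 1) parameters; the layer has m cells.
expected = 4 * m * (m + p + 1)
print(expected)              # 1472
print(model.count_params())  # 1472, matching the formula
```

The same count appears under the `Param #` column of `model.summary()`.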