Comparing LSTMs to LSTMs with Peephole Connections and GRUs

Now, we’ll compare LSTMs to LSTMs with peepholes and GRUs on the text generation task. This will help us see how well these different models perform in terms of perplexity. Remember that we prefer perplexity over accuracy because accuracy assumes there’s only one correct token given a previous input sequence. However, as we have learned, language is complex, and there can be many different correct ways to continue the same text.
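As a quick refresher, perplexity is the exponential of the average cross-entropy of the model’s next-token predictions. The snippet below is a minimal NumPy sketch of that computation; the probability values are made up purely for illustration and aren’t taken from any of the models in this lesson.

```python
import numpy as np

def perplexity(true_token_probs):
    """Perplexity = exp(average negative log-probability of the true tokens)."""
    nll = -np.log(true_token_probs)   # per-token cross-entropy
    return float(np.exp(nll.mean()))  # exponentiate the mean

# Probabilities the model assigned to the actual next token at each step
# (hypothetical values, for illustration only).
probs = np.array([0.25, 0.05, 0.60, 0.10])
print(perplexity(probs))  # lower is better; 1.0 would mean perfect prediction
```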

Standard LSTM

First, we’ll reiterate the components of a standard LSTM. We won’t repeat the code for standard LSTMs because it’s identical to what we discussed previously. Then, we’ll look at some text generated by an LSTM.

Here, we’ll revisit what a standard LSTM looks like. As we already mentioned, an LSTM consists of the following:

  • Input gate: This decides how much of the current input is written to the cell state.

  • Forget gate: This decides how much of the previous cell state is written to the current cell state.

  • Output gate: This decides how much of the cell state is exposed as the external hidden state (that is, the output).
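To make the roles of these gates concrete, here is a minimal NumPy sketch of a single LSTM cell step. The weight and variable names (W, U, b, c_prev, and so on) are illustrative assumptions, not the lesson’s actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b hold per-gate parameters keyed by 'i', 'f', 'o', 'c'."""
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + b['i'])        # input gate
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])        # forget gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + b['o'])        # output gate
    c_tilde = np.tanh(W['c'] @ x + U['c'] @ h_prev + b['c'])  # candidate cell state
    c = f * c_prev + i * c_tilde  # forget gate scales the old state; input gate scales the candidate
    h = o * np.tanh(c)            # output gate exposes part of the cell state as the hidden state
    return h, c

# Tiny example with random parameters (hypothetical sizes: 4-dim input, 3-dim state).
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((3, 4)) for k in 'ifoc'}
U = {k: rng.standard_normal((3, 3)) for k in 'ifoc'}
b = {k: np.zeros(3) for k in 'ifoc'}
h, c = lstm_step(rng.standard_normal(4), np.zeros(3), np.zeros(3), W, U, b)
```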

In the figure below, we illustrate how these gates, the input, the cell state, and the external hidden state are connected:
