Summary: Applications of LSTMs—Generating Text
Review what we've learned in this lesson.
In this chapter, we looked at the implementation of the LSTM algorithm, along with various other important techniques for improving LSTMs beyond their standard performance. As an exercise, we trained our LSTM on the text of stories by the Grimm brothers and asked it to generate a fresh new story. We discussed how to implement an LSTM model, using code examples extracted from the exercises.
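As a refresher on the generation step, here is a minimal sketch of sampling text from a trained recurrent model. The tiny architecture, the vocabulary size, and the `generate` helper are illustrative stand-ins, not the lesson's exact code, and a real run would of course train the model on the Grimm corpus first.

```python
import numpy as np
import tensorflow as tf

# Hypothetical tiny token-level model; in the lesson, the network was
# trained on the Grimm brothers' stories before being sampled from.
vocab_size = 64  # assumed vocabulary size
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 32),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])

def generate(seed_ids, length, temperature=1.0):
    """Sample `length` new token ids, feeding each prediction back in."""
    ids = list(seed_ids)
    for _ in range(length):
        # Probability distribution over the next token, given the text so far.
        probs = model.predict(np.array([ids]), verbose=0)[0, -1]
        # Temperature rescaling: lower values make sampling more conservative.
        logits = np.log(probs + 1e-9) / temperature
        probs = np.exp(logits) / np.exp(logits).sum()
        ids.append(int(np.random.choice(vocab_size, p=probs)))
    return ids

print(generate(seed_ids=[1, 2, 3], length=10))
```

Feeding each sampled token back in as the next input is what lets the network continue a story from a short seed.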
LSTMs with peepholes and GRUs
Next, we had a technical discussion of how to implement LSTMs with peepholes and GRUs, followed by a performance comparison between a standard LSTM and these variants. We saw that GRUs performed the best, ahead of both LSTMs with peepholes and standard LSTMs.
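To recall what distinguishes the two variants, here is a minimal NumPy sketch of a single step of each cell, with toy dimensions, random stand-in weights, and biases omitted; the names and shapes are illustrative, not taken from the lesson. Peephole connections let the gates read the cell state directly, while the GRU merges the cell and hidden states and uses only two gates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensions; all weights are random stand-ins for learned parameters.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
Wi, Wf, Wc, Wo, Wu, Wr, Wh = (rng.standard_normal((n_hid, n_in + n_hid))
                              for _ in range(7))
pi, pf, po = (rng.standard_normal(n_hid) for _ in range(3))

def peephole_lstm_step(x, h_prev, c_prev):
    """One step of an LSTM whose gates also 'peep' at the cell state."""
    z = np.concatenate([x, h_prev])
    i = sigmoid(Wi @ z + pi * c_prev)      # input gate sees c_{t-1}
    f = sigmoid(Wf @ z + pf * c_prev)      # forget gate sees c_{t-1}
    c = f * c_prev + i * np.tanh(Wc @ z)   # new cell state
    o = sigmoid(Wo @ z + po * c)           # output gate sees c_t
    return o * np.tanh(c), c

def gru_step(x, h_prev):
    """One GRU step: two gates and a single state vector (no cell state)."""
    z = np.concatenate([x, h_prev])
    u = sigmoid(Wu @ z)                    # update gate
    r = sigmoid(Wr @ z)                    # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h_prev]))
    return (1 - u) * h_prev + u * h_tilde

x, h0, c0 = rng.standard_normal(n_in), np.zeros(n_hid), np.zeros(n_hid)
h1, c1 = peephole_lstm_step(x, h0, c0)
print(h1.shape, gru_step(x, h0).shape)
```

The GRU's smaller parameter count (two gates instead of three, one state vector instead of two) is one plausible reason it fared well in the comparison.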
Improving LSTMs with beam search and word embeddings
Then, we discussed some of the improvements possible for enhancing the quality of the outputs generated by an LSTM. The first improvement was beam search: we walked through its implementation step by step, as sketched below. We then looked at how word embeddings can be used to teach our LSTM to output better text.
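As a reminder of the idea, here is a minimal beam search sketch. The `step_fn` callable is a hypothetical stand-in for the trained LSTM's next-word softmax, and the toy transition table exists only to make the snippet runnable.

```python
import numpy as np

def beam_search(step_fn, start_id, beam_width=3, length=5):
    """Keep the `beam_width` most probable partial sequences at every step.

    `step_fn(seq)` returns a probability distribution over the next
    token given the sequence generated so far.
    """
    beams = [([start_id], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(length):
        candidates = []
        for seq, score in beams:
            probs = step_fn(seq)
            # Expand each beam with its `beam_width` best continuations.
            for tok in np.argsort(probs)[-beam_width:]:
                candidates.append((seq + [int(tok)],
                                   score + np.log(probs[tok] + 1e-12)))
        # Keep only the best `beam_width` candidates overall.
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams

# Demo with a fixed toy distribution standing in for an LSTM's softmax output.
vocab = 10
rng = np.random.default_rng(0)
table = rng.dirichlet(np.ones(vocab), size=vocab)
best = beam_search(lambda seq: table[seq[-1]], start_id=0)
print(best[0])
```

Because several candidate phrases stay alive at once, a word that looks mediocre now but leads to a high-probability continuation is not discarded prematurely.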
Conclusion
In conclusion, LSTMs are very powerful machine learning models that can capture both long-term and short-term dependencies.
Moreover, beam search helps produce more realistic-looking textual phrases than predicting one word at a time.