Assignments and Supplemental Reading Materials

In this lesson, you will complete the assignments and read the supplemental reading materials to gain an in-depth understanding of the topics we discussed in this chapter.

Now that you have built a project and completed the quiz, you are ready to move on to the next step: working through the supplemental reading materials and assignments below.

Supplemental reading materials

  1. Please check out the short film Sunspring (2016). It is a movie written by an algorithm (an LSTM, fittingly), and the result is both hilarious and intense. Once you watch it, you will have a sense of what AI can do.

  2. Learning Long-Term Dependencies with Gradient Descent is Difficult by Yoshua Bengio et al. (1994). This paper shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases (a toy demonstration of this effect appears after this list).

  3. On the difficulty of training recurrent neural networks by Razvan Pascanu et al. (2013). This paper improves our understanding of the underlying issues by exploring the two problems that RNNs face, vanishing and exploding gradients, from analytical, geometric, and dynamical systems perspectives. The authors propose a gradient norm clipping strategy to deal with exploding gradients and a soft constraint for the vanishing gradients problem (see the minimal clipping sketch after this list).

  4. Long Short-Term Memory by Sepp Hochreiter and Jürgen Schmidhuber (1997). This paper introduces LSTMs, explaining how they work and how they solve the problems of the standard RNN architecture (a minimal sketch of an LSTM cell follows this list).

  5. Understanding LSTM Networks by Christopher Olah (2015). This is one of the best articles you will find on the internet for understanding LSTMs. The author makes everything as simple as possible, and it is a must-read if you are learning about LSTMs.

  6. Understanding LSTM and its diagrams by Shi Yan (2016). This article builds on Olah's article above and contains diagrams that will help you gain a better understanding of LSTMs.

  7. The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy (2015). This is another great article; it discusses the usefulness of RNNs and some of their applications, and it walks through the mathematics behind RNNs in the form of Python code.

  8. Visualizing and Understanding Recurrent Networks by Andrej Karpathy et al. (2015). This paper visualizes what LSTMs learn and analyzes their common error types on character-level language modeling tasks. We suggest you read it, as it will help you understand our next project, which is based on character-level RNN models.

  9. LSTM: A Search Space Odyssey by Klaus Greff et al. (2015). This paper presents the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. It summarizes the results of 5,400 experimental runs (≈ 15 years of CPU time). It is an excellent paper to read about the applications of LSTMs and the hyperparameters you should choose.
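To make the difficulty described in the second reading concrete: backpropagating through an RNN multiplies one Jacobian per time step, so the gradient's norm tends to shrink toward zero or blow up exponentially as the number of steps grows. The toy NumPy sketch below illustrates this; the matrix `W`, the scales 0.5 and 1.5, and the 50-step horizon are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy illustration of vanishing/exploding gradients in an RNN.
# Backpropagating through T time steps multiplies T Jacobians together;
# here every step's Jacobian is stood in for by the same matrix W.
for scale in (0.5, 1.5):            # spectral radius below / above 1
    W = scale * np.eye(4)           # illustrative recurrent Jacobian
    grad = np.ones(4)               # gradient arriving from the loss
    for _ in range(50):             # 50 steps of backpropagation
        grad = W.T @ grad
    print(f"scale={scale}: gradient norm after 50 steps = "
          f"{np.linalg.norm(grad):.3e}")
```

With a scale below 1 the norm collapses to roughly 1e-15; above 1 it explodes to roughly 1e+9, which is exactly the pair of failure modes the next two readings address.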
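The gradient norm clipping idea from the third reading is simple enough to sketch directly: if the global gradient norm exceeds a threshold, rescale the gradient so its norm equals the threshold. Below is a minimal NumPy version; the function name and the threshold of 5.0 are illustrative choices, not taken from the paper.

```python
import numpy as np

def clip_gradient_norm(grads, threshold):
    """Rescale a list of gradient arrays so their global L2 norm
    is at most `threshold` (the clipping strategy of Pascanu et al.)."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm > threshold:
        scale = threshold / global_norm
        grads = [g * scale for g in grads]
    return grads

# Example: an "exploding" gradient gets rescaled to norm 5.
grads = [np.full((3, 3), 100.0), np.full(3, 100.0)]
clipped = clip_gradient_norm(grads, threshold=5.0)
print(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))  # ≈ 5.0
```

Note that clipping preserves the gradient's direction and only shrinks its magnitude, which is why it tames exploding gradients without changing where the update points.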
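As a companion to the LSTM readings (items 4 through 6), here is a minimal NumPy sketch of one forward step of a standard LSTM cell, using the usual forget, input, and output gates plus a candidate state. The stacked parameter layout and the sizes are illustrative assumptions, not the exact formulation of any one paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One forward step of a standard LSTM cell.

    W, U, and b hold the parameters of the four gates stacked
    in the order: forget, input, output, candidate.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # pre-activations for all four gates
    f = sigmoid(z[0 * n:1 * n])     # forget gate: what to keep of c_prev
    i = sigmoid(z[1 * n:2 * n])     # input gate: what to write
    o = sigmoid(z[2 * n:3 * n])     # output gate: what to expose
    g = np.tanh(z[3 * n:4 * n])     # candidate cell state
    c = f * c_prev + i * g          # new cell state
    h = o * np.tanh(c)              # new hidden state
    return h, c

# Example with illustrative sizes: 8-dim input, 16-dim hidden state.
rng = np.random.default_rng(0)
d, n = 8, 16
W = rng.normal(scale=0.1, size=(4 * n, d))
U = rng.normal(scale=0.1, size=(4 * n, n))
b = np.zeros(4 * n)
h, c = lstm_step(rng.normal(size=d), np.zeros(n), np.zeros(n), W, U, b)
print(h.shape, c.shape)  # (16,) (16,)
```

The additive update `c = f * c_prev + i * g` is the key detail to look for in the readings: because the cell state is carried forward by addition rather than repeated matrix multiplication, gradients flow through it far more easily than in a standard RNN.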
