Types of Neural Networks: Part II
Get familiar with some other types of neural networks.
Recurrent neural networks (RNNs)
There are several tasks that feedforward neural networks handle poorly: working with sequential data ordered in time, tasks that need context from multiple inputs (not just the current one), and tasks that require remembering previous inputs. For these reasons, the main draw of RNNs is their internal memory, which lets them handle the kind of context-dependent work required of conversational AIs such as Apple’s Siri.
Because RNNs handle sequential data well and place a premium on context, they excel at time-series data, DNA and genomics data, speech recognition, and speech-to-text functions. In contrast to the preceding CNN example, which computes a single feedforward pass, RNNs work in loops.
Rather than data flowing straight from the input layer, through the hidden layers, and out through the output layer, an RNN cycles its hidden state back on itself, and this loop is how it retains short-term memory. Data enters through the input layer, loops through the hidden layers (each step feeding the previous step’s state back in), and only then passes to the output layer. It’s important to note that plain RNNs have only short-term memory, which is what motivated long short-term memory (LSTM) networks.
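To make the loop concrete, here is a minimal sketch of an RNN forward pass in plain NumPy. The function and weight names (rnn_forward, W_x, W_h, b) are illustrative assumptions rather than any library’s API; the point is that the hidden state h is fed back in at every time step, and that state is the network’s short-term memory.

```python
import numpy as np

def rnn_forward(inputs, W_x, W_h, b):
    """Run a simple RNN over a sequence.

    inputs: (seq_len, input_size) -- one row per time step
    W_x:    (input_size, hidden_size) input-to-hidden weights
    W_h:    (hidden_size, hidden_size) hidden-to-hidden weights
    b:      (hidden_size,) bias
    """
    h = np.zeros(W_h.shape[0])        # short-term memory starts empty
    states = []
    for x_t in inputs:                # one loop iteration per time step
        # The new state mixes the current input with the previous state.
        h = np.tanh(x_t @ W_x + h @ W_h + b)
        states.append(h)
    return np.stack(states)           # hidden state at every time step

# Tiny example: a 5-step sequence with 3 features per step, 4 hidden units.
rng = np.random.default_rng(0)
states = rnn_forward(rng.normal(size=(5, 3)),
                     rng.normal(size=(3, 4)),
                     rng.normal(size=(4, 4)),
                     np.zeros(4))
print(states.shape)  # (5, 4)
```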
In essence, the RNN actually has two inputs at every step:

The first is the fresh data currently making its way through the network.
The second is the recent past: the hidden state looped back from the previous step.

Both inputs appear explicitly in the sketch below.
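Those two inputs show up directly in deep-learning frameworks. Assuming PyTorch as the framework (the text itself doesn’t name one), the sketch below passes both the fresh sequence and an explicit previous hidden state to torch.nn.RNN; swapping in torch.nn.LSTM, which adds a long-term cell state, is the standard answer to the short-term-memory limitation noted above.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=3, hidden_size=4, batch_first=True)

x = torch.randn(1, 5, 3)   # input one: a batch of 1 sequence, 5 steps, 3 features
h0 = torch.zeros(1, 1, 4)  # input two: the prior hidden state (the "recent past")

# Both inputs go in together; the updated hidden state comes back out,
# ready to be fed into the next call as the new "recent past."
output, hn = rnn(x, h0)
print(output.shape)  # torch.Size([1, 5, 4]) -- hidden state at every step
print(hn.shape)      # torch.Size([1, 1, 4]) -- the updated short-term memory
```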