Neural Network-Related Operations

Learn about useful neural network-related operations.

Now, let’s look at several useful neural network-related operations. The operations we’ll discuss here range from simple element-wise transformations (that is, activations) to computing partial derivatives of a function with respect to a set of parameters. We will also implement a simple neural network as an exercise.

Nonlinear activations used by neural networks

Nonlinear activations are what enable neural networks to perform well at numerous tasks. Typically, a nonlinear activation transformation (that is, an activation layer) follows each layer's output in a neural network, except for the last layer. Nonlinear transformations help a neural network learn the various nonlinear patterns present in data, which is essential for complex real-world problems. Without nonlinear activations between layers, a deep neural network would just be a stack of linear layers, and a stack of linear layers can always be collapsed into a single, bigger linear layer.
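As a concrete illustration, here is a minimal NumPy sketch of two common element-wise activations, sigmoid and ReLU. The helper names are our own for illustration; a deep learning framework would provide its own activation functions.

```python
import numpy as np

def sigmoid(x):
    # Squashes each element into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Zeroes out negative elements; keeps positive ones unchanged.
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 3.0])
print(sigmoid(x))  # every value lies strictly between 0 and 1
print(relu(x))     # negative entries become 0
```

Both transformations are applied independently to each element of the input, which is why they are called element-wise.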

In other words, without nonlinear activations, a neural network with many layers would be no more expressive than a network with a single layer.
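The collapse of stacked linear layers can be verified numerically. This NumPy sketch (with illustrative variable names) composes two linear layers with no activation in between and shows that a single linear layer produces the same output:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4,))            # input vector
W1 = rng.normal(size=(5, 4))         # first layer: weights and bias
b1 = rng.normal(size=(5,))
W2 = rng.normal(size=(3, 5))         # second layer: weights and bias
b2 = rng.normal(size=(3,))

# Two stacked linear layers, with no nonlinearity between them.
two_layer = W2 @ (W1 @ x + b1) + b2

# A single equivalent linear layer: W = W2 W1, b = W2 b1 + b2.
W = W2 @ W1
b = W2 @ b1 + b2
one_layer = W @ x + b

# The two computations agree, so the stack adds no expressive power.
assert np.allclose(two_layer, one_layer)
```

Inserting a nonlinear activation between the two layers breaks this equivalence, which is exactly what gives a deep network its extra expressive power.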
