Matrix Multiplication

Learn the purpose of using matrices in neural networks.

Why do we use matrices?

If we manually did the calculations for a two-layer network with just two nodes in each layer, that would already be tedious. But imagine doing the same for a network with five layers and hundreds of nodes in each layer. Just writing out all the necessary calculations would be a huge task: combining the incoming signals, multiplying each by the right weight, and applying the sigmoid activation function, for every node in every layer. Clearly, that is far too much manual calculation.

How can matrices help? Well, they help us in two ways. First, they allow us to compress all those calculations into a very simple, short form. Second, many programming languages understand how to work with matrices; because the underlying work is repetitive, they can carry it out quickly and efficiently.

In short, matrices allow us to express the work we need to do concisely and easily, and computers can get the calculations done quickly and efficiently.
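To make this concrete, here is a minimal sketch of one layer's worth of calculations expressed in matrix form. The input values and weights are hypothetical, chosen only for illustration; a single matrix multiplication replaces every hand-written weighted sum at once.

```python
import numpy as np

def sigmoid(x):
    # Sigmoid activation applied element-wise to every node's combined signal
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical signals from a 3-node layer (illustrative values only)
inputs = np.array([0.9, 0.1, 0.8])

# Hypothetical 3x3 weight matrix: row i holds the weights into node i
weights = np.array([
    [0.9, 0.3, 0.4],
    [0.2, 0.8, 0.2],
    [0.1, 0.5, 0.6],
])

# One matrix multiplication combines all the weighted signals for every node,
# then the activation function is applied to each node's total
layer_outputs = sigmoid(weights @ inputs)
```

Written by hand, this would be nine multiplications, six additions, and three sigmoid evaluations; the matrix form expresses all of it in one line, and the same line works unchanged for layers with hundreds of nodes.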

What is a matrix?

Now that we know why we’re going to look at matrices, let’s demystify them. A matrix is just a table—a rectangular grid of numbers. That’s it. There’s nothing more complex about a matrix than that. If we’ve used spreadsheets, we’re already comfortable working with numbers arranged in a grid. Some call it a table. We can call it a matrix too. The following illustration shows a table of numbers.

 A   B   C
 3  32   5
 5  74   2
 8  11   8
 2  75   3
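The same grid can be stored directly as a matrix in code. This sketch uses a NumPy array to hold the table above; the column labels A, B, and C simply become column indices 0, 1, and 2.

```python
import numpy as np

# The table of numbers above, stored as a 4x3 matrix
# (columns A, B, C become column indices 0, 1, 2)
table = np.array([
    [3, 32, 5],
    [5, 74, 2],
    [8, 11, 8],
    [2, 75, 3],
])

# shape reports (rows, columns), just like a spreadsheet's dimensions
rows, cols = table.shape

# Individual cells are addressed by row and column index,
# e.g. the second row of column "B"
cell = table[1, 1]  # 74
```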
