Calculate the Inputs for Internal Layers

Explore how to compute inputs for internal layers in neural networks by using matrix multiplication. Understand how weights and inputs combine to form moderated signals and how activation functions like sigmoid transform these signals for the next layer. This lesson guides you through concise calculations that simplify neural network implementation and improve computational efficiency.

Inputs for internal layers

We may come across a kind of matrix multiplication called a dot product or an inner product. There are actually different kinds of multiplication possible for matrices, such as a cross product, but the dot product is the one we want here. What happens if we replace the letters with words that are more meaningful to our neural networks? The second matrix is a $2 \times 1$ matrix, but the multiplication approach is the same:

$$\begin{bmatrix} w_{1,1} & w_{2,1} \\ w_{1,2} & w_{2,2} \end{bmatrix} \begin{bmatrix} \text{input}_1 \\ \text{input}_2 \end{bmatrix} = \begin{bmatrix} (\text{input}_1 \cdot w_{1,1}) + (\text{input}_2 \cdot w_{2,1}) \\ (\text{input}_1 \cdot w_{1,2}) + (\text{input}_2 \cdot w_{2,2}) \end{bmatrix}$$
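As a quick illustration, here is a minimal sketch of the same calculation using NumPy. The weight and input values are hypothetical examples (they are not given in the lesson); the point is only to show the dot product producing the moderated signals and a sigmoid squashing them into the outputs passed to the next layer:

```python
import numpy as np

# Example weight matrix (hypothetical values), laid out as in the equation above:
# row 1 holds w_1,1 and w_2,1, row 2 holds w_1,2 and w_2,2.
W = np.array([[0.9, 0.3],
              [0.2, 0.8]])

# Example input vector (hypothetical values), a 2x1 column of input_1 and input_2.
I = np.array([[1.0],
              [0.5]])

# Moderated signals into the layer: the dot product X = W . I
X = np.dot(W, I)

# Sigmoid activation transforms the signals into the layer's outputs.
O = 1.0 / (1.0 + np.exp(-X))

print(X)  # [[1.05], [0.6 ]]
print(O)  # approximately [[0.741], [0.646]]
```

Note that `np.dot(W, I)` does in one call exactly what the expanded right-hand side of the equation spells out element by element, which is why expressing the layer calculation as a matrix product keeps the implementation concise and computationally efficient.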