...
From Logistic Regression to Neural Networks
Learn about the evolution from logistic regression to neural networks.
Neuron
A neuron, in the context of neural networks and artificial intelligence, is a fundamental computational unit that mimics the behavior of biological neurons found in the human brain. Neurons are the building blocks of artificial neural networks, which are used for various machine learning tasks, including image recognition, natural language processing, and more.
Components of a neuron
Let’s discuss the key components and functions of an artificial neuron (a short code sketch follows the list):
Input: Neurons receive input signals from other neurons.
Weights: Each input is associated with a weight that determines its influence on the neuron’s output. These weights are learnable parameters that are adjusted during the training process to optimize the neuron’s performance.
Summation: The weighted input signals are summed together, often with an additional bias term, to produce a single value. This weighted sum represents the net input to the neuron.
Activation function: The net input is then passed through an activation function. The activation function introduces nonlinearity into the neuron’s computation.
Output: The result of the activation function is the output of the neuron, which can be passed to other neurons in subsequent layers of the neural network.
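Here is a minimal Python sketch of these components working together. The input, weights, and bias values are purely illustrative, and the sigmoid activation is just one possible choice of nonlinearity:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation: squashes the net input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, b, activation=sigmoid):
    """Compute one neuron's output from its components.

    x : input vector (signals from the previous layer)
    w : weight vector (one learnable weight per input)
    b : bias term (a learnable scalar)
    """
    z = np.dot(w, x) + b   # summation: weighted sum of inputs plus bias
    return activation(z)   # activation: introduces nonlinearity

# Illustrative values (not from the lesson): three inputs, arbitrary weights.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
b = 0.2
print(neuron_output(x, w, b))   # a single real number in (0, 1)
```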
Here’s an illustration of the components of a neuron:
Neural network
In a neural network, we combine distinct neurons, each with its own set of parameters and an activation function. The figure below exemplifies a typical neural network, comprising an input layer as the first layer with input labels $x_1, x_2, \ldots, x_n$. Unlike other layers, the input layer doesn’t contain neurons and simply copies the input to the subsequent layers. The last layer is the output layer, with output labels $\hat{y}_1, \ldots, \hat{y}_m$. All layers situated between the input and output layers are referred to as hidden layers. The hidden layers use labels of the form $a_j^{[l]}$, where $j$ denotes the neuron index in layer $l$. Both the hidden layers and the output layer are computational, meaning they’re composed of neurons serving as computational units.
A neural network is composed of multiple units, with each unit resembling a logistic regression unit at its core. However, the key distinction lies in the flexibility of these units, which can employ various nonlinear functions on top of the weighted sum $z = \mathbf{w}^\top \mathbf{x} + b$. This freedom allows the units to go beyond the limitations of the sigmoid function used in logistic regression.
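As a concrete illustration, the sketch below applies a few common nonlinearities to the same weighted sums (the values of $z$ are made up for the example); only the first reproduces the behavior of a logistic regression unit:

```python
import numpy as np

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])   # example weighted sums w·x + b

sigmoid = 1.0 / (1.0 + np.exp(-z))   # logistic regression's activation
tanh    = np.tanh(z)                 # zero-centered alternative
relu    = np.maximum(0.0, z)         # rectified linear unit

print(sigmoid)  # values in (0, 1)
print(tanh)     # values in (-1, 1)
print(relu)     # zero for negative z, identity otherwise
```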
Forward pass
In a neural network, each neuron takes as input a vector consisting of the outputs from all neurons in the previous layer. Consequently, every neuron produces a single real number as output. For a layer $l$ containing $n^{[l]}$ neurons, the output vector of this layer comprises $n^{[l]}$ components, which act as the input for all neurons in the subsequent layer $l+1$. This arrangement ensures that each neuron in layer $l+1$ possesses $n^{[l]} + 1$ parameters ($n^{[l]}$ weights plus a bias term), maintaining uniformity across all neurons in the same layer. Although each neuron has its own set of parameters, they all share the same number of parameters.
Let $\mathbf{w}_j^{[l]}$ represent the parameter vector of the $j$-th neuron in layer $l$; then the parameter matrix can be defined as $W^{[l]} = \left[\mathbf{w}_1^{[l]}, \mathbf{w}_2^{[l]}, \ldots, \mathbf{w}_{n^{[l]}}^{[l]}\right]^\top$. If all neurons in layer $l$ employ the same activation function $g^{[l]}$, and $\mathbf{a}^{[l]}$ denotes the output vector of layer $l$, the relationship between them can be expressed as:

$$\mathbf{a}^{[l]} = g^{[l]}\!\left(W^{[l]} \mathbf{a}^{[l-1]} + \mathbf{b}^{[l]}\right)$$

where $\mathbf{b}^{[l]}$ is the vector of bias terms of layer $l$.
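To tie the notation together, here is a minimal sketch of this layer-wise computation, assuming sigmoid activations in every layer and randomly initialized parameters; the layer sizes are illustrative and not taken from the lesson:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(x, weights, biases, activations):
    """Propagate an input through the network layer by layer.

    weights[l] is the parameter matrix whose rows are the weight
    vectors of the neurons in that layer; biases[l] holds the biases.
    """
    a = x                                    # a^[0]: the input vector
    for W, b, g in zip(weights, biases, activations):
        a = g(W @ a + b)                     # a^[l] = g^[l](W^[l] a^[l-1] + b^[l])
    return a                                 # output vector of the last layer

# Illustrative architecture: 3 inputs -> 4 hidden neurons -> 2 outputs.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases  = [np.zeros(4), np.zeros(2)]
activations = [sigmoid, sigmoid]

x = np.array([0.5, -1.2, 3.0])
print(forward_pass(x, weights, biases, activations))
```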
The input vector is typically denoted as $\mathbf{a}^{[0]} = \mathbf{x}$, and the output vector of the last layer is denoted as ...