Particle Swarm Optimization
Learn the particle swarm optimization algorithm and apply it to solve optimization problems.
While genetic algorithms draw an analogy with evolutionary theory and work with operations like mutation and crossover, particle swarm optimization (PSO) draws an analogy with how the dynamics of a population evolve over time to achieve a common goal.
In PSO, many candidate solutions (particles) “move” around to explore the solution space. Like genetic algorithms, PSO is a population-based algorithm that explores the solution space, but it does so in a different way.
Instead of modifying solutions through mutation and crossover, we “move” each solution according to a criterion designed so that the solutions are expected to converge to the global optimum. Each solution is interpreted as a particle moving through the solution space, and we modify its velocity according to the best solutions found so far.
To better understand PSO, we can use another analogy. Suppose we’re part of a group exploring the terrain in search of a gold mine. Initially, we have no clue where the mine is. After days of exploring, someone in the group finds some gold nuggets. What happens next? The entire group redirects its exploration toward the point where the nuggets were found. Now suppose that we find some gold nuggets ourselves. There are fewer than the first person found, but what if our location is much closer to the actual mine than the previous one? In this case, our attention shifts to areas that are not only in the vicinity of our own discovery but also close to where the group found the most nuggets.
Our exploration is influenced by two things: the best solution found by the entire population and the best solution we’ve found ourselves. This dynamic governs how the swarm evolves and converges toward the optimal solution. Of course, convergence is not guaranteed; we need to design the metaheuristic carefully and run experiments to find the best possible solution.
Now it’s time to define this metaheuristic formally, implement it in Python, and use it to solve some problems.
Defining the algorithm
We’ve said that solutions are particles and that they move according to the best-known solutions, but how do we implement that? First of all, we’re not interested in the particle itself but in its movement and in the value of the target function at the solution the particle represents.
A solution is a vector, but we can also view it as the position of a particle. That particle also has an associated velocity vector. Each particle moves according to its current velocity, and that velocity changes according to the best-known solution found by the entire population and the best solution found by the particle itself.
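As a rough preview, here’s a minimal sketch of how a single particle’s velocity and position are typically updated in standard PSO. The names used here (w for the inertia weight, c1 and c2 for the cognitive and social coefficients, personal_best and global_best for the two best-known solutions) are illustrative assumptions for this sketch, not part of the formal definition that follows.

```python
import numpy as np

# Illustrative sketch of the standard PSO update for one particle.
# w is the inertia weight; c1 and c2 weight the pull toward the particle's
# own best and the swarm's best (names assumed for this example).
def update_particle(position, velocity, personal_best, global_best,
                    w=0.7, c1=1.5, c2=1.5):
    r1 = np.random.rand(len(position))  # random factor for the personal pull
    r2 = np.random.rand(len(position))  # random factor for the social pull
    # New velocity: keep part of the old velocity, then pull toward the
    # particle's best-known solution and the population's best-known solution.
    velocity = (w * velocity
                + c1 * r1 * (personal_best - position)
                + c2 * r2 * (global_best - position))
    # The particle moves according to its updated velocity.
    position = position + velocity
    return position, velocity
```

After every particle moves, the target function is evaluated at the new positions, the personal and global bests are updated, and the process repeats until a stopping criterion is met.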
That said, these are the steps of the general PSO algorithm. We denote the target function as ...