The Importance of Parallelism
Learn about optimizing programs through parallel algorithms, evaluating parallel algorithms, and applying Amdahl's law to parallel programs.
From a programmer’s perspective, it would be very convenient if today’s computer hardware were a 100 GHz single-core CPU rather than a 3 GHz multi-core CPU; we wouldn’t need to care about parallelism at all. Unfortunately, making single-core CPUs ever faster has hit a physical limit. Since the evolution of computer hardware is instead moving toward multi-core CPUs and programmable GPUs, programmers have to use efficient parallel patterns to make the most of the hardware.
Parallel algorithms allow us to optimize our programs by executing multiple individual tasks or subtasks at the exact same time on a multi-core CPU or GPU.
Parallel algorithms
As mentioned in Chapter 11, the terms concurrency and parallelism can be a little hard to distinguish. As a reminder, a program is said to run concurrently if it has multiple individual control flows running during overlapping time periods. A parallel program, on the other hand, executes multiple tasks or subtasks simultaneously (at the exact same time), which requires hardware with multiple cores. We use parallel algorithms to optimize latency or throughput. It makes no sense to parallelize algorithms if we don’t have hardware that can execute multiple tasks simultaneously to achieve better performance. A few simple formulas follow to help you understand which factors to consider when evaluating parallel algorithms.
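As a minimal sketch of what this looks like in practice (assuming a C++17 standard library with the parallel algorithms available; on GCC the parallel policy may additionally require linking a backend such as TBB), the same standard algorithm can be run sequentially or in parallel simply by passing an execution policy:

```cpp
#include <algorithm>
#include <execution>
#include <vector>

int main() {
  std::vector<int> v(1'000'000, 1);

  // Sequential version: a single control flow visits every element in order.
  std::for_each(v.begin(), v.end(), [](int& x) { x *= 2; });

  // Parallel version: the library is allowed to split the work across
  // all available cores, so elements may be processed at the exact same time.
  std::for_each(std::execution::par, v.begin(), v.end(), [](int& x) { x *= 2; });
}
```

Note that the parallel call only pays off if the hardware actually has multiple cores and the per-element work is large enough to outweigh the overhead of distributing it.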
Evaluating parallel algorithms
In this chapter, the speedup is defined as the ratio between a sequential and a parallel version of an algorithm, as follows:

$$\text{Speedup} = \frac{T_1}{T_n}$$

$T_1$ is the time it takes to solve a problem using a sequential algorithm executing on one core, and $T_n$ is the time it takes to solve the same problem using $n$ cores.
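To make the formula concrete, here is a hedged sketch that measures $T_1$ and $T_n$ on the same workload and prints the resulting speedup. The workload (sorting a large vector), the data size, and the use of std::sort with execution policies are illustrative assumptions, not prescribed by the text, and the parallel policy may require linking a backend such as TBB depending on the compiler:

```cpp
#include <algorithm>
#include <chrono>
#include <execution>
#include <iostream>
#include <random>
#include <vector>

int main() {
  // Illustrative workload: sort a large vector of random doubles.
  std::vector<double> data(50'000'000);
  std::mt19937 gen{42};
  std::uniform_real_distribution<double> dist{0.0, 1.0};
  std::generate(data.begin(), data.end(), [&] { return dist(gen); });
  auto copy = data;  // identical input for the parallel run

  using clock = std::chrono::steady_clock;

  auto t0 = clock::now();
  std::sort(std::execution::seq, data.begin(), data.end());  // T1: one core
  auto t1 = clock::now();
  std::sort(std::execution::par, copy.begin(), copy.end());  // Tn: n cores
  auto t2 = clock::now();

  std::chrono::duration<double> seq_time = t1 - t0;
  std::chrono::duration<double> par_time = t2 - t1;
  std::cout << "Speedup = T1 / Tn = "
            << seq_time.count() / par_time.count() << '\n';
}
```

For example, if the sequential run takes 12 seconds and the parallel run on four cores takes 4 seconds, the measured speedup is 12 / 4 = 3.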