Message Passing Interface (MPI)

Learn how MPI helps divide and manage computation among a group of processing nodes (e.g., cores or CPUs).

What is MPI?

The Message Passing Interface (MPI) is a standard that defines the core syntax and semantics of library routines used to write parallel programs in C (and in other languages as well). There are several implementations of MPI, such as Open MPI and MPICH2.

In the context of this tutorial, we can think of MPI, in terms of its complexity, scope, and control, as sitting between programming directly with pthreads and using a high-level API such as OpenMP.

The MPI interface allows us to manage the allocation, communication, and synchronization of a set of processes mapped onto multiple nodes, where a node can be a core within a single CPU, a CPU within a single machine, or a separate machine entirely (as long as the machines are networked together).

One context where MPI particularly shines is in letting you run programs not just on the multiple cores of a single machine, but across clusters of several machines. Even if you don’t have a dedicated cluster, you could still write an MPI program that runs in parallel across any collection of computers, as long as they are networked together. Just make sure to ask permission before you load up your lab-mate’s CPU(s) with your computational tasks!

An example using MPI

The basic design pattern of an MPI-based program is that the same code is sent to all nodes for execution. Here’s a basic MPI-based program that simply writes a message to the screen indicating which node is running it.
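The course’s own listing isn’t reproduced here, but a minimal sketch of such a program in C might look like the following. It uses the standard MPI calls MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Get_processor_name, and MPI_Finalize; the file name hello.c and the process count used below are illustrative choices, not part of the original lesson.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        // Initialize the MPI execution environment.
        MPI_Init(&argc, &argv);

        int rank, size;
        // rank: this process's unique ID (0..size-1); size: total process count.
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Look up the name of the node (host) this process is running on.
        char node_name[MPI_MAX_PROCESSOR_NAME];
        int name_len;
        MPI_Get_processor_name(node_name, &name_len);

        printf("Hello from process %d of %d on node %s\n",
               rank, size, node_name);

        // Shut down the MPI environment before exiting.
        MPI_Finalize();
        return 0;
    }

With an implementation such as Open MPI or MPICH installed, a program like this is typically compiled with the mpicc wrapper (mpicc hello.c -o hello) and launched with mpirun (e.g., mpirun -np 4 ./hello), which starts four copies of the same executable as separate processes, each printing its own rank.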
