What is the MPI_Start function?

MPI library

MPI (Message Passing Interface) is a library that allows you to write parallel programs in C or Fortran77. The library uses commonly available operating system services to create parallel processes and exchange information among these processes.

MPI_Start

MPI_Start initiates a communication operation associated with a persistent request handle, which is created beforehand with a routine such as MPI_Send_init or MPI_Recv_init. The request becomes active once MPI_Start returns; the operation itself completes later, typically through a call such as MPI_Wait or MPI_Test.
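A typical lifecycle for a persistent request is: create it once with MPI_Send_init (or MPI_Recv_init), activate it with MPI_Start each time it is needed, complete it with MPI_Wait (or MPI_Test), and release it with MPI_Request_free. The minimal sketch below illustrates this pattern; the destination rank, tag, and loop count are illustrative assumptions, and a matching receive on the destination process is assumed:

#include <mpi.h>

// Minimal sketch: reuse one persistent send request several times.
// Rank 1 as destination and tag 0 are illustrative assumptions.
void persistent_send_sketch(void)
{
    int value = 0;
    MPI_Request request;

    // Create the persistent request once (this does NOT start any transfer)
    MPI_Send_init(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);

    for(int i = 0; i < 3; i++)
    {
        value = i;                             // fill the buffer
        MPI_Start(&request);                   // activate the request
        MPI_Wait(&request, MPI_STATUS_IGNORE); // complete it; the request becomes inactive again
    }

    // Release the persistent request when it is no longer needed
    MPI_Request_free(&request);
}

Because the request is persistent, the same handle can be started and completed repeatedly without setting up the communication again.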

Syntax

int MPI_Start(MPI_Request *request)

Parameters

  • request is a persistent communication request handle, created beforehand with a routine such as MPI_Send_init or MPI_Recv_init.

Return value

If unsuccessful, the function returns an error code. By default, the error aborts the MPI job.

In case of success, it returns MPI_SUCCESS, the value returned upon successful completion of any MPI routine.
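The default behavior comes from the MPI_ERRORS_ARE_FATAL error handler attached to MPI_COMM_WORLD. If you prefer to inspect the return code yourself, you can attach MPI_ERRORS_RETURN instead. The following is a hedged sketch, assuming a persistent request created elsewhere:

#include <stdio.h>
#include <mpi.h>

// Sketch: make MPI return error codes instead of aborting, then check MPI_Start.
// 'request' is assumed to be a persistent request created earlier (e.g., with MPI_Send_init).
void start_with_error_check(MPI_Request *request)
{
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int err = MPI_Start(request);
    if(err != MPI_SUCCESS)
    {
        char message[MPI_MAX_ERROR_STRING];
        int length;
        MPI_Error_string(err, message, &length);
        fprintf(stderr, "MPI_Start failed: %s\n", message);
    }
}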

Example

The following example illustrates how we can use MPI_Start with two processes where one is a sender and the other is a receiver:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    // Get the number of processes and check that exactly 2 processes are used
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if(size != 2)
    {
        printf("This application is meant to be run with 2 processes.\n");
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    // Get my rank and do the corresponding job
    enum role_ranks { SENDER, RECEIVER };
    int my_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    switch(my_rank)
    {
        case SENDER:
        {
            int data_sent;
            MPI_Request request;
            // Prepare the persistent send request handle
            MPI_Send_init(&data_sent, 1, MPI_INT, RECEIVER, 0, MPI_COMM_WORLD, &request);
            data_sent = 12345;
            // Launch the send
            MPI_Start(&request);
            // Wait for the send to complete, then release the persistent request
            MPI_Wait(&request, MPI_STATUS_IGNORE);
            printf("MPI process %d sends value %d.\n", my_rank, data_sent);
            MPI_Request_free(&request);
            break;
        }
        case RECEIVER:
        {
            int received;
            MPI_Recv(&received, 1, MPI_INT, SENDER, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("MPI process %d received value %d.\n", my_rank, received);
            break;
        }
    }

    MPI_Finalize();
    return EXIT_SUCCESS;
}

In the program above, we perform the following steps:

  • Include the mpi.h header to access the MPI functions.
  • Get the number of processes using the MPI_Comm_size function and check that the application is run with exactly 2 processes, i.e., a sender and a receiver.
  • For the sender, we declare a request handle, prepare the send using the MPI_Send_init function, launch the send using the MPI_Start function, and then complete it with MPI_Wait before releasing the persistent request with MPI_Request_free.
  • The receiver process receives the data sent by the sender using the MPI_Recv function (a persistent-request variant of this branch is sketched below).
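MPI_Start also works with persistent receive requests. As a hedged variant of the receiver branch above, the receiver could prepare its own persistent request with MPI_Recv_init and activate it with MPI_Start; this sketch reuses the SENDER and my_rank names from the example and would slot into the case RECEIVER: block:

// Sketch: receiver branch using a persistent receive request.
// SENDER and my_rank are taken from the example program above.
int received;
MPI_Request request;

// Prepare the persistent receive request
MPI_Recv_init(&received, 1, MPI_INT, SENDER, 0, MPI_COMM_WORLD, &request);

// Activate the request and wait for the message to arrive
MPI_Start(&request);
MPI_Wait(&request, MPI_STATUS_IGNORE);
printf("MPI process %d received value %d.\n", my_rank, received);

// Release the persistent request
MPI_Request_free(&request);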
