The Concept of Send and Receive in MPI


December 9, 2015

The fundamental concept of MPI is sending and receiving. Almost every function in MPI can be implemented with basic send and receive calls.

In this article, we will discuss how to use MPI's blocking send and receive functions.

Overview

MPI's send and receive calls operate in the following manner. First, process A decides it needs to send a message to process B. Process A then packs all of the necessary data into a buffer for process B. These buffers are often referred to as envelopes, since the data is packed into a single message before transmission (much like letters are packed into envelopes before being handed to the post office). After the data is packed into a buffer, the communication device (commonly a network, though it may also be shared memory or another interconnect) is responsible for routing the message to the proper location. The destination is identified by the receiving process's rank.

Even though the message is routed to B, process B still has to acknowledge that it wants to receive A's data. Once it does, the data has been transmitted; process A is notified that the transmission is complete and may go back to work.

Sometimes A has to send many different types of messages to B. Instead of forcing B to take extra measures to differentiate all these messages, MPI allows senders and receivers to attach message IDs, known as tags, to each message. When process B requests only a message with a certain tag number, messages with different tags are buffered by the system until B is ready for them.

Now, let’s see the prototype for sending and receiving data:

Sending

MPI_Send(void* data, int count, MPI_Datatype datatype, int destination, int tag, MPI_Comm communicator)

Receiving

MPI_Recv(void* data, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm communicator, MPI_Status* status)

Although the argument lists are long, they are easy to remember since almost every MPI call uses a similar syntax.

The first argument is the data buffer, technically a pointer to the actual data. The second and third arguments describe the count and type of the elements that reside in the buffer. MPI_Send sends exactly count elements, and MPI_Recv will receive at most count elements. In other words, a send transmits count elements of the given datatype starting at the address data. The fourth and fifth arguments specify the rank of the destination/source process and the tag of the message; the tag can be viewed as a signature. The sixth argument specifies the communicator, and the last argument (for MPI_Recv only) provides information about the received message.

Elementary MPI Datatypes

In the prototypes above, a datatype argument appears. The MPI_Send and MPI_Recv functions use MPI_Datatype values to describe the structure of a message at a higher level. For example, if a process wishes to send one integer to another, it would use a count of one and a datatype of MPI_INT. The other elementary MPI datatypes are listed below with their equivalent C datatypes.

MPI datatype             C equivalent
MPI_SHORT                short int
MPI_INT                  int
MPI_LONG                 long int
MPI_LONG_LONG            long long int
MPI_UNSIGNED_CHAR        unsigned char
MPI_UNSIGNED_SHORT       unsigned short int
MPI_UNSIGNED             unsigned int
MPI_UNSIGNED_LONG        unsigned long int
MPI_UNSIGNED_LONG_LONG   unsigned long long int
MPI_FLOAT                float
MPI_DOUBLE               double
MPI_LONG_DOUBLE          long double
MPI_CHAR                 char
MPI_BYTE                 (raw byte, no C conversion)

For now, we will only make use of these datatypes in the beginner MPI tutorial. Once we have covered enough basics, you will learn how to create your own MPI datatypes for characterizing more complex types of messages.

The Blocking Properties

A blocking call is one in which the participant waits (idles) until its part of the communication is done: MPI_Recv returns only after the message has arrived in the receive buffer, and MPI_Send returns only when the send buffer is safe to reuse. In other words, until the communication completes, the process does not move on to the next instruction.

Source Code

We will demonstrate how the send and receive concept can be implemented in MPI. We use two processes. The first process, rank 0, sends a piece of data to the second process, rank 1. The data transported is an integer, identified by the MPI_INT datatype.

Create a file mpi_send_recv.c and write this:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
   // Initialize the MPI Environment
   MPI_Init(&argc, &argv);

   // Get the number of processes
   int size;
   MPI_Comm_size( MPI_COMM_WORLD, &size );

   // Get the rank of process
   int rank;
   MPI_Comm_rank( MPI_COMM_WORLD, &rank );

   int number;
   if( rank == 0 ) {
      number = 2;
      MPI_Send( &number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
   } else if( rank == 1 ) { // guard the receive so extra ranks do not hang
      MPI_Recv( &number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
      printf("Process 1 received number %d from process 0\n", number);
   }

   // Finalize the MPI environment
   MPI_Finalize();

   return 0;
}

MPI_Comm_size and MPI_Comm_rank are used first to determine the world size and the rank of the process. Process zero then initializes a number to the value of two and sends it to process one. As you can see, process one calls MPI_Recv to receive the number, then prints the received value.

Since we are sending and receiving exactly one integer, each process requests that one MPI_INT be sent/received. Each process also uses a tag number of zero to identify the message. The receiver could also have used the predefined constant MPI_ANY_TAG, since only one type of message is being transmitted; note that wildcards are valid only on the receiving side, so the sender must always specify a concrete tag.

Compile & Run

To compile:

mpicc mpi_send_recv.c -o mpi_send_receive

To Run with two processes:

mpiexec -n 2 mpi_send_receive

Results

Running the example program looks like this:

Process 1 received number 2 from process 0


About Author

xathrya

A man obsessed with low-level technology.
