Message Passing Interface

The Message Passing Interface (MPI) is a standardized library specification for message passing between parallel processes or nodes in a distributed-memory computing environment. It provides a portable and efficient way to develop parallel programs that can run on a wide range of architectures and platforms.

MPI allows developers to explicitly control communication between processes by sending and receiving messages. It provides a set of functions and data types for point-to-point communication, collective operations, and synchronization among processes, making it easier to write parallel programs.

Here’s a simple example in C that demonstrates the use of MPI to perform a collective operation: summing values that are spread across processes, with each process holding its own local portion of the data.

#include <stdio.h>
#include <mpi.h>

#define ARRAY_SIZE 10

int main(int argc, char *argv[]) {
    int rank, size;
    int localArray[ARRAY_SIZE];
    int globalSum = 0;
    int i;

    // Initialize MPI
    MPI_Init(&argc, &argv);

    // Get the rank of the current process
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get the total number of processes
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each process fills its local array with values based on its rank
    for (i = 0; i < ARRAY_SIZE; i++) {
        localArray[i] = rank * ARRAY_SIZE + i;
    }

    // Compute the local sum
    int localSum = 0;
    for (i = 0; i < ARRAY_SIZE; i++) {
        localSum += localArray[i];
    }

    // Perform global sum using MPI_Reduce
    MPI_Reduce(&localSum, &globalSum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    // Print the result from the root process
    if (rank == 0) {
        printf("Global sum: %d\n", globalSum);
    }

    // Finalize MPI
    MPI_Finalize();

    return 0;
}

In this example, each process initializes its local array with consecutive values determined by its rank, then computes the sum of that array. The MPI_Reduce call combines the local sums from all processes with MPI_SUM, depositing the final result on the root process (rank 0). Finally, the root process prints the global sum.

To compile and run this program, you will need an MPI implementation installed on your system, such as OpenMPI or MPICH. The program can be compiled using the following command:

mpicc mpi_example.c -o mpi_example

And it can be executed using the following command:

mpirun -np 4 ./mpi_example

In this case, the program runs with 4 processes (-np 4); you can adjust the number of processes to match your system’s capabilities.
