Cornell Theory Center


Presentation:
MPI Collective Communication I

10/98

This is the presentation layer of a two-part module. For an explanation of the layers and how to navigate within and between them, return to the top page of this module.


Table of Contents

  1. Introduction
  2. MPI Collective Communication Routines
    2.1 Characteristics
    2.2 Barrier Synchronization Routine
    2.3 Data Movement Routines
    2.3.1 Broadcast
    2.3.2 Gather and Scatter
    2.3.3 Gatherv and Scatterv
    2.3.4 Allgather
    2.3.5 Alltoall
    2.4 Global Computation Routines
    2.4.1 Reduce
  3. Performance Issues
    3.1 Example: Broadcast to 8 Processors
    3.2 Example: Scatter to 8 Processors




1. Introduction

Collective Communication:

A collective communication involves every process in the scope of a communicator: all members of the group must call the routine. The collective routines fall into three classes: barrier synchronization, data movement (broadcast, gather, scatter, and their variants), and global computation (the reduction operations).



2. MPI Collective Communication Routines


2.1 Characteristics

Collective routines differ from point-to-point communication in several ways:

- All processes in the communicator must call the routine.
- No message tags are used.
- The amount of data sent must exactly match the amount of data specified by the receiver.
- The collective operations are blocking, but a collective call does not necessarily synchronize the processes; only MPI_BARRIER guarantees synchronization.


2.2 Barrier Synchronization Routine

Purpose: blocks each calling process until all processes in the communicator have reached the call.
Fortran Binding:
MPI_BARRIER (comm, ierr)
C Binding:
int MPI_Barrier (MPI_Comm comm)
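
The module gives no example for MPI_BARRIER, so here is a minimal C sketch of a common use: separating phases of work so that a timing (or a printed message) reflects all processes. The timed work is only a placeholder comment; everything else uses standard MPI calls.

Sample code (C):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* ... each process performs its share of some setup work ... */

    /* No process starts the timer until every process has finished setup */
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();

    /* ... phase of work being timed ... */

    MPI_Barrier(MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("timed phase: %f seconds\n", t1 - t0);

    MPI_Finalize();
    return 0;
}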


2.3 Data Movement Routines


2.3.1 Broadcast

Purpose: sends a message from the process with rank root to all other processes in the communicator.
Fortran Binding:
MPI_BCAST (buffer, count, datatype, root, comm, ierr)
C Binding:
int MPI_Bcast (void* buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)


Broadcast Effect


Example:

include 'mpif.h'
character(12) message
integer rank, root, ierr
data root/0/

call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
if (rank .eq. root) then
   message = 'Hello, world'
end if

call MPI_BCAST(message, 12, MPI_CHARACTER, root,
&              MPI_COMM_WORLD, ierr)

print*, 'node', rank, ':', message


2.3.2 Gather and Scatter

Gather Purpose: each process sends the contents of its send buffer to the root process, which receives the messages and stores them in rank order.
Fortran Binding:
MPI_GATHER (sbuf, scount, stype, rbuf, rcount, rtype, root, comm, ierr)
C Binding:
int MPI_Gather (void* sbuf, int scount, MPI_Datatype stype, void* rbuf, int rcount, MPI_Datatype rtype, int root, MPI_Comm comm)


Scatter Purpose: the root process splits its send buffer into equal pieces and sends the i-th piece to the process with rank i.

Fortran Binding:
MPI_SCATTER (sbuf, scount, stype, rbuf, rcount, rtype, root, comm, ierr)
C Binding:
int MPI_Scatter (void* sbuf, int scount, MPI_Datatype stype, void* rbuf, int rcount, MPI_Datatype rtype, int root, MPI_Comm comm)


Gather & Scatter Effect


Example:

Sample Code

DIMENSION A(25, 100), b(100), cpart(25), ctotal(100)
INTEGER root
DATA root/0/
DO I=1, 25
cpart(I) = 0.
DO K=1, 100
    cpart(I) = cpart(I) + A(I, K)*b(K)
END DO
END DO
call MPI_GATHER(cpart, 25, MPI_REAL, ctotal, 25,
&               MPI_REAL, root, MPI_COMM_WORLD, ierr)
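
The Fortran example above gathers the partial results of a matrix-vector product (it assumes four processes, since 4 x 25 = 100). The reverse distribution is done with MPI_SCATTER. Below is a hedged C sketch; the 100-element array, the data values, and the assumption that the number of processes divides 100 evenly are illustrative, not part of the original module.

Sample code (C):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define NTOTAL 100                     /* illustrative total problem size */

int main(int argc, char *argv[])
{
    int rank, nprocs, chunk, i;
    double *full = NULL, *part;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    chunk = NTOTAL / nprocs;           /* assumes nprocs divides NTOTAL */

    if (rank == 0) {                   /* only the root needs the full array */
        full = (double *) malloc(NTOTAL * sizeof(double));
        for (i = 0; i < NTOTAL; i++)
            full[i] = (double) i;
    }
    part = (double *) malloc(chunk * sizeof(double));

    /* Every process, including the root, receives one chunk-sized piece */
    MPI_Scatter(full, chunk, MPI_DOUBLE, part, chunk, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    printf("rank %d received elements %d through %d\n",
           rank, rank * chunk, (rank + 1) * chunk - 1);

    free(part);
    if (rank == 0)
        free(full);
    MPI_Finalize();
    return 0;
}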



2.3.3 Gatherv and Scatterv

Gatherv Purpose: like MPI_GATHER, but each process may contribute a different amount of data, and the root places each piece at a displacement in its receive buffer given by the displs array.

Fortran Binding:
MPI_GATHERV (sbuf, scount, stype, rbuf, rcount, displs, rtype, root, comm, ierr)
C Binding:
int MPI_Gatherv (void* sbuf, int scount, MPI_Datatype stype, void* rbuf, int* rcount, int* displs, MPI_Datatype rtype, int root, MPI_Comm comm)


Scatterv Purpose: like MPI_SCATTER, but the root may send a different amount of data to each process, taken from arbitrary displacements in its send buffer.

Fortran Binding:
MPI_SCATTERV (sbuf, scount, displs, stype, rbuf, rcount, rtype, root, comm, ierr)
C Binding:
int MPI_Scatterv (void* sbuf, int* scount, int* displs, MPI_Datatype stype, void* rbuf, int rcount, MPI_Datatype rtype, int root, MPI_Comm comm)
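
The module has no example for the varying-count routines, so here is a hedged C sketch of MPI_SCATTERV under an assumed uneven decomposition in which rank i receives i+1 elements; the counts, displacements, and data values are illustrative only.

Sample code (C):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, nprocs, i, mycount, total = 0;
    int *counts = NULL, *displs = NULL;
    double *full = NULL, *part;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    mycount = rank + 1;                /* assumed uneven decomposition */

    if (rank == 0) {                   /* counts and displs matter only at the root */
        counts = (int *) malloc(nprocs * sizeof(int));
        displs = (int *) malloc(nprocs * sizeof(int));
        for (i = 0; i < nprocs; i++) {
            counts[i] = i + 1;         /* how many elements rank i gets        */
            displs[i] = total;         /* where rank i's piece starts in full  */
            total += counts[i];
        }
        full = (double *) malloc(total * sizeof(double));
        for (i = 0; i < total; i++)
            full[i] = (double) i;
    }
    part = (double *) malloc(mycount * sizeof(double));

    MPI_Scatterv(full, counts, displs, MPI_DOUBLE,
                 part, mycount, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    printf("rank %d received %d elements, first = %f\n",
           rank, mycount, part[0]);

    free(part);
    if (rank == 0) {
        free(full);
        free(counts);
        free(displs);
    }
    MPI_Finalize();
    return 0;
}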


2.3.4 Allgather

Purpose: gathers data from all processes, as MPI_GATHER does, but every process in the communicator receives the complete result, not just the root.
Fortran Binding:
MPI_ALLGATHER (sbuf, scount, stype, rbuf, rcount, rtype, comm, ierr)
C Binding:
int MPI_Allgather (void* sbuf, int scount, MPI_Datatype stype, void* rbuf, int rcount, MPI_Datatype rtype, MPI_Comm comm)


Allgather Effect


Example:

Sample Code
DIMENSION A(25, 100), b(100), cpart(25), ctotal(100)

DO I=1, 25
cpart(I) = 0.
DO K=1, 100
    cpart(I) = cpart(I) + A(I, K) * b(K)
END DO
END DO
call MPI_ALLGATHER(cpart, 25, MPI_REAL, ctotal, 25,
&                  MPI_REAL, MPI_COMM_WORLD, ierr)


2.3.5 Alltoall

Purpose: every process sends a distinct piece of data to every other process; process i sends the j-th block of its send buffer to process j, which stores it as the i-th block of its receive buffer. It is equivalent to each process scattering its send buffer across the group.
Fortran Binding:
MPI_ALLTOALL (sbuf, scount, stype, rbuf, rcount, rtype, comm, ierr)
C Binding:
int MPI_Alltoall (void* sbuf, int scount, MPI_Datatype stype, void* rbuf, int rcount, MPI_Datatype rtype, MPI_Comm comm)


Alltoall Effect
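
A hedged C sketch of MPI_ALLTOALL follows; the single-integer messages and their values are illustrative. Element i of each process's send buffer goes to process i, and element i of its receive buffer comes from process i.

Sample code (C):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, nprocs, i;
    int *sendbuf, *recvbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    sendbuf = (int *) malloc(nprocs * sizeof(int));
    recvbuf = (int *) malloc(nprocs * sizeof(int));

    for (i = 0; i < nprocs; i++)
        sendbuf[i] = 100 * rank + i;    /* sendbuf[i] is destined for process i */

    /* Each process sends one int to every process and receives one from each */
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    for (i = 0; i < nprocs; i++)
        printf("rank %d received %d from rank %d\n", rank, recvbuf[i], i);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}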


2.4 Global Computation Routines


2.4.1 Reduce

Purpose: combines the values in the send buffers of all processes, element by element, using the specified operation, and returns the result to the root process.
Fortran Binding:
MPI_REDUCE (sbuf, rbuf, count, stype, op, root, comm, ierr)
C Binding:
int MPI_Reduce ( void* sbuf, void* rbuf, int count, MPI_Datatype stype, MPI_Op op, int root, MPI_Comm comm)


MPI Predefined Reduce Operations
Name        Meaning
----        -------
MPI_MAX     maximum value
MPI_MIN     minimum value
MPI_SUM     sum
MPI_PROD    product
MPI_LAND    logical and
MPI_BAND    bit-wise and
MPI_LOR     logical or
MPI_BOR     bit-wise or
MPI_LXOR    logical xor
MPI_BXOR    bit-wise xor
MPI_MAXLOC  max value and location
MPI_MINLOC  min value and location

Each of these operations makes sense for only certain datatypes. The MPI Standard lists the types accepted for each operation.


Reduce Effect


Example:

Sample Code

INTEGER maxht, globmx, taskid, ierr
call MPI_COMM_RANK(MPI_COMM_WORLD, taskid, ierr)
.
. (calculations which determine maximum height)
.
call MPI_REDUCE (maxht, globmx, 1, MPI_INTEGER,
&                MPI_MAX, 0, MPI_COMM_WORLD, ierr)
IF (taskid .eq. 0) THEN
.
. (Write output)
.
END IF
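
The MPI_MAXLOC and MPI_MINLOC operations reduce value/index pairs, which is useful for finding not only an extreme value but also which process holds it. The following C sketch complements the example above; the pair type MPI_DOUBLE_INT is standard, while the local values are illustrative.

Sample code (C):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    struct {                  /* layout expected by MPI_DOUBLE_INT */
        double value;
        int    index;
    } local, global;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local.value = (double) ((rank * 7) % 5);   /* illustrative local maximum */
    local.index = rank;                        /* tag it with this rank      */

    /* global.value becomes the largest value, global.index the rank holding it */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE_INT, MPI_MAXLOC,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("maximum %f found on rank %d\n", global.value, global.index);

    MPI_Finalize();
    return 0;
}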


User-defined Operations

In addition to the predefined operations, a program can define its own reduction operation with MPI_OP_CREATE, use it in any of the reduction routines, and release it with MPI_OP_FREE. The user supplies a function that combines two input vectors element by element and states whether the operation is commutative; a sketch follows.
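
In this hedged C sketch, the operation itself (element-wise maximum of absolute values) and its name are illustrative; MPI_OP_CREATE, the user-function signature, and MPI_OP_FREE are the standard interface.

Sample code (C):

#include <stdio.h>
#include <math.h>
#include <mpi.h>

/* Combines two vectors element by element: inout[i] = max(|in[i]|, |inout[i]|) */
void maxabs(void *invec, void *inoutvec, int *len, MPI_Datatype *dtype)
{
    int i;
    double *in    = (double *) invec;
    double *inout = (double *) inoutvec;
    for (i = 0; i < *len; i++) {
        double a = fabs(in[i]);
        double b = fabs(inout[i]);
        inout[i] = (a > b) ? a : b;
    }
}

int main(int argc, char *argv[])
{
    int rank;
    double local[2], global[2];
    MPI_Op op;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local[0] = (double) rank;          /* illustrative local data */
    local[1] = -(double) rank;

    MPI_Op_create(maxabs, 1, &op);     /* 1 means the operation is commutative */
    MPI_Reduce(local, global, 2, MPI_DOUBLE, op, 0, MPI_COMM_WORLD);
    MPI_Op_free(&op);

    if (rank == 0)
        printf("result: %f %f\n", global[0], global[1]);

    MPI_Finalize();
    return 0;
}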

Reduce Variations

MPI_ALLREDUCE returns the combined result to every process rather than only to the root; MPI_REDUCE_SCATTER combines the values and then scatters the result across the processes; MPI_SCAN performs a prefix reduction, returning to process i the reduction of the values from processes 0 through i.



3. Performance Issues


3.1 Example: Broadcast to 8 Processors

Simple Approach

The root sends the message to each of the other seven processes in turn. The broadcast takes seven sequential steps, and the root does all of the sending while the other processes sit idle.

Better Approach

Send the message along a tree: in the first step the root sends to one other process; in each following step, every process that already holds the message forwards it to one that does not. Eight processes are reached in log2(8) = 3 steps.
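
The tree can be written directly with point-to-point calls. The C sketch below assumes the root is rank 0; a process whose rank is below the current mask already holds the message and forwards it to the partner whose rank differs by the mask, so p processes are covered in roughly log2(p) steps. The broadcast value is illustrative.

Sample code (C):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, nprocs, mask, value = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank == 0)
        value = 42;                          /* the data being broadcast */

    for (mask = 1; mask < nprocs; mask <<= 1) {
        if (rank < mask) {
            /* I already have the data: forward it if the partner exists */
            if (rank + mask < nprocs)
                MPI_Send(&value, 1, MPI_INT, rank + mask, 0, MPI_COMM_WORLD);
        } else if (rank < 2 * mask) {
            /* My first step: receive the data from rank - mask */
            MPI_Recv(&value, 1, MPI_INT, rank - mask, 0, MPI_COMM_WORLD,
                     &status);
        }
    }

    printf("rank %d has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}

In practice MPI_BCAST is free to use an algorithm of this kind (or better), so the point-to-point version is only a model for reasoning about cost.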


3.2 Example: Scatter to 8 Processors

Simple Approach

The root sends each process its own piece of the data, one piece at a time: seven sends, performed one after another by the root.

Better Approach

Use recursive halving: the root first sends the half of the data destined for the far half of the processes to one process in that half; each process that holds data then repeats the halving within its own half of the group. The scatter completes in 3 steps, at the cost of moving some data through intermediate processes.
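
A hedged C sketch of the recursive halving follows. It assumes the number of processes is a power of two and the root is rank 0; the chunk size and data values are illustrative. Each process finishes with its own CHUNK elements at the front of its buffer.

Sample code (C):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define CHUNK 4               /* illustrative elements per process */

int main(int argc, char *argv[])
{
    int rank, nprocs, mask, i;
    double *buf;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);   /* assumed to be a power of two */

    /* Every rank allocates room for the largest piece it may pass along */
    buf = (double *) malloc(nprocs * CHUNK * sizeof(double));
    if (rank == 0)                            /* root starts with all the data */
        for (i = 0; i < nprocs * CHUNK; i++)
            buf[i] = (double) i;

    for (mask = nprocs / 2; mask >= 1; mask /= 2) {
        if (rank % (2 * mask) == 0) {
            /* I hold 2*mask blocks; hand the upper mask blocks to rank+mask */
            MPI_Send(buf + mask * CHUNK, mask * CHUNK, MPI_DOUBLE,
                     rank + mask, 0, MPI_COMM_WORLD);
        } else if (rank % (2 * mask) == mask) {
            /* My first step: receive the blocks destined for my subgroup */
            MPI_Recv(buf, mask * CHUNK, MPI_DOUBLE,
                     rank - mask, 0, MPI_COMM_WORLD, &status);
        }
    }

    /* buf[0..CHUNK-1] now holds this rank's own piece */
    printf("rank %d: first element %f\n", rank, buf[0]);

    free(buf);
    MPI_Finalize();
    return 0;
}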



References

Book

Using MPI: Portable Parallel Programming with the Message-Passing Interface, by William Gropp, Ewing Lusk, and Anthony Skjellum. Published 10/21/94 by MIT Press, 328 pages.


Lab Exercises

Lab exercises for MPI Collective Communication I accompany this module.


