This is the presentation layer of a two-part module. For an explanation of the layers and how to navigate within and between them, return to the top page of this module.
Collective Communication:
1. Introduction
2. MPI Collective Communication Routines
3. Performance Issues
References
Lab Exercises
Evaluation
Fortran Binding:
MPI_BARRIER(comm, ierr)
C Binding:
int MPI_Barrier(MPI_Comm comm)
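A minimal usage sketch: every process in the communicator must make the call, and none returns until all have entered it (a common way to separate phases of a program before timing them).

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* ... each process does some independent setup work here ... */

    /* No process continues past this point until all have reached it. */
    MPI_Barrier(MPI_COMM_WORLD);

    printf("Process %d passed the barrier\n", rank);
    MPI_Finalize();
    return 0;
}
```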
Fortran Binding:
MPI_BCAST(buffer, count, datatype, root, comm, ierr)
C Binding:
int MPI_Bcast(void* buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
Example: before the broadcast only the root process holds the value; after the call every process in the communicator has a copy.
[Figure: Broadcast Effect]
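A minimal C sketch of this effect (the value 10 and the root rank 0 are illustrative choices):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 10;                 /* only the root has the data initially */

    /* After the call, every process's copy of 'value' equals the root's. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d now has value %d\n", rank, value);
    MPI_Finalize();
    return 0;
}
```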
2.3.2 Gather and Scatter
Gather purpose: collect data from every process in a communicator onto the root process. Scatter performs the reverse operation, distributing distinct pieces of the root's data to every process.
Fortran Binding:
MPI_GATHER(sbuf, scount, stype, rbuf, rcount, rtype, root, comm, ierr)
C Binding:
int MPI_Gather(void* sbuf, int scount, MPI_Datatype stype, void* rbuf, int rcount, MPI_Datatype rtype, int root, MPI_Comm comm)
Fortran Binding:
MPI_SCATTER(sbuf, scount, stype, rbuf, rcount, rtype, root, comm, ierr)
C Binding:
int MPI_Scatter(void* sbuf, int scount, MPI_Datatype stype, void* rbuf, int rcount, MPI_Datatype rtype, int root, MPI_Comm comm)
Example: matrix-vector product c = A*b
A: matrix distributed by rows
b: vector shared by all processes
c: vector updated by each process independently
Core of the sample code: cpart(I) = cpart(I) + A(I,K)*b(K)
[Figure: Gather & Scatter Effect]
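A minimal C sketch of this pattern, assuming the matrix dimension divides evenly among the processes (the 8x8 size and test data are illustrative): the root scatters the rows of A, b is broadcast to everyone, each process accumulates its piece cpart, and the root gathers the pieces of c.

```c
#include <mpi.h>

#define N 8                       /* matrix dimension (illustrative) */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int rows = N / nprocs;        /* assumes nprocs divides N evenly */
    double A[N][N], b[N], c[N];   /* full arrays only meaningful on the root */
    double Apart[N][N], cpart[N]; /* per-process pieces: rows x N and rows   */

    if (rank == 0) {
        for (int i = 0; i < N; i++) {
            b[i] = 1.0;
            for (int j = 0; j < N; j++)
                A[i][j] = i + j;  /* arbitrary test data */
        }
    }

    /* Distribute the rows of A; share all of b. */
    MPI_Scatter(A, rows * N, MPI_DOUBLE, Apart, rows * N, MPI_DOUBLE,
                0, MPI_COMM_WORLD);
    MPI_Bcast(b, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* Each process updates its portion of c independently:
       cpart(I) = cpart(I) + A(I,K)*b(K)                     */
    for (int i = 0; i < rows; i++) {
        cpart[i] = 0.0;
        for (int k = 0; k < N; k++)
            cpart[i] += Apart[i][k] * b[k];
    }

    /* Collect the partial results back onto the root. */
    MPI_Gather(cpart, rows, MPI_DOUBLE, c, rows, MPI_DOUBLE,
               0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```

When the rows do not divide evenly among the processes, the variable-count routines below (MPI_GATHERV and MPI_SCATTERV) handle the uneven pieces.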
Fortran Binding:
MPI_GATHERV(sbuf, scount, stype, rbuf, rcount, displs, rtype, root, comm, ierr)
C Binding:
int MPI_Gatherv(void* sbuf, int scount, MPI_Datatype stype, void* rbuf, int* rcount, int* displs, MPI_Datatype rtype, int root, MPI_Comm comm)
Fortran Binding:
MPI_SCATTERV(sbuf, scount, displs, stype, rbuf, rcount, rtype, root, comm, ierr)
C Binding:
int MPI_Scatterv(void* sbuf, int* scount, int* displs, MPI_Datatype stype, void* rbuf, int rcount, MPI_Datatype rtype, int root, MPI_Comm comm)
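A minimal sketch of MPI_Gatherv with contributions of different sizes (here process i contributes i+1 integers, an illustrative choice); the root builds the count and displacement arrays that say where each contribution lands in the receive buffer.

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each process contributes a different amount: rank+1 integers.
       The fixed buffer assumes a small run (at most 32 processes).   */
    int scount = rank + 1;
    int sbuf[32];
    for (int i = 0; i < scount; i++)
        sbuf[i] = rank;

    /* Only the root needs the per-process counts and displacements. */
    int *rcounts = NULL, *displs = NULL, *rbuf = NULL;
    if (rank == 0) {
        rcounts = malloc(nprocs * sizeof(int));
        displs  = malloc(nprocs * sizeof(int));
        int total = 0;
        for (int i = 0; i < nprocs; i++) {
            rcounts[i] = i + 1;
            displs[i]  = total;     /* where process i's data lands in rbuf */
            total     += rcounts[i];
        }
        rbuf = malloc(total * sizeof(int));
    }

    MPI_Gatherv(sbuf, scount, MPI_INT,
                rbuf, rcounts, displs, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        free(rbuf); free(rcounts); free(displs);
    }
    MPI_Finalize();
    return 0;
}
```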
Fortran Binding:
MPI_ALLGATHER(sbuf, scount, stype, rbuf, rcount, rtype, comm, ierr)
C Binding:
int MPI_Allgather(void* sbuf, int scount, MPI_Datatype stype, void* rbuf, int rcount, MPI_Datatype rtype, MPI_Comm comm)
[Figure: Allgather Effect]
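A minimal sketch: each process contributes one value and every process receives the assembled array, as if each process had been the root of a gather (the contributed values are illustrative).

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int mine = rank * rank;                 /* illustrative local result */
    int *all = malloc(nprocs * sizeof(int));

    /* Every process ends up with the full array of contributions. */
    MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

    free(all);
    MPI_Finalize();
    return 0;
}
```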
Fortran Binding:
MPI_ALLTOALL(sbuf, scount, stype, rbuf, rcount, rtype, comm, ierr)
C Binding:
int MPI_Alltoall(void* sbuf, int scount, MPI_Datatype stype, void* rbuf, int rcount, MPI_Datatype rtype, MPI_Comm comm)
[Figure: Alltoall Effect]
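A minimal sketch with one integer exchanged between every pair of processes (the payload values are illustrative): entry j of the send buffer goes to process j, and entry i of the receive buffer arrives from process i.

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int *sbuf = malloc(nprocs * sizeof(int));
    int *rbuf = malloc(nprocs * sizeof(int));
    for (int j = 0; j < nprocs; j++)
        sbuf[j] = rank * 100 + j;           /* illustrative payload */

    /* Equivalent to every process scattering its sbuf to all others:
       a "transpose" of the data across processes.                    */
    MPI_Alltoall(sbuf, 1, MPI_INT, rbuf, 1, MPI_INT, MPI_COMM_WORLD);

    free(sbuf); free(rbuf);
    MPI_Finalize();
    return 0;
}
```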
Fortran Binding:
MPI_REDUCE(sbuf, rbuf, count, stype, op, root, comm, ierr)
C Binding:
int MPI_Reduce(void* sbuf, void* rbuf, int count, MPI_Datatype stype, MPI_Op op, int root, MPI_Comm comm)
| Name | Meaning |
|---|---|
| MPI_MAX | maximum value |
| MPI_MIN | minimum value |
| MPI_SUM | sum |
| MPI_PROD | product |
| MPI_LAND | logical and |
| MPI_BAND | bit-wise and |
| MPI_LOR | logical or |
| MPI_BOR | bit-wise or |
| MPI_LXOR | logical xor |
| MPI_BXOR | bit-wise xor |
| MPI_MAXLOC | max value and location |
| MPI_MINLOC | min value and location |

Each of these operations makes sense for only certain datatypes; the MPI Standard specifies which datatypes may be used with each operation.
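A minimal sketch of a reduction with MPI_SUM, each process contributing rank+1 (an illustrative value); only the root receives the combined result.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int mine = rank + 1;      /* each process's contribution */
    int total = 0;

    /* Combine the contributions with MPI_SUM; only the root gets the result. */
    MPI_Reduce(&mine, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum over %d processes: %d\n", nprocs, total);  /* nprocs*(nprocs+1)/2 */

    MPI_Finalize();
    return 0;
}
```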
User-defined Operations
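New reduction operations can be registered with MPI_OP_CREATE and then passed to MPI_REDUCE like any predefined operation. A minimal sketch; the maximum-absolute-value operation here is an illustrative choice, not the module's own example.

```c
#include <mpi.h>
#include <math.h>
#include <stdio.h>

/* User reduction function: inoutvec[i] = combine(invec[i], inoutvec[i]). */
void absmax(void *invec, void *inoutvec, int *len, MPI_Datatype *dtype)
{
    double *in = (double *) invec;
    double *inout = (double *) inoutvec;
    for (int i = 0; i < *len; i++)
        if (fabs(in[i]) > fabs(inout[i]))
            inout[i] = in[i];
}

int main(int argc, char **argv)
{
    int rank;
    double mine, result;
    MPI_Op op;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    mine = (rank % 2) ? -(double)rank : (double)rank;  /* illustrative data */

    /* Register the function as a commutative reduction operation ... */
    MPI_Op_create(absmax, 1, &op);

    /* ... and use it exactly like a predefined op such as MPI_MAX. */
    MPI_Reduce(&mine, &result, 1, MPI_DOUBLE, op, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Value of largest magnitude: %g\n", result);

    MPI_Op_free(&op);
    MPI_Finalize();
    return 0;
}
```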
Reduce Variations
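One such variation is MPI_ALLREDUCE, which delivers the combined result to every process instead of only the root. A minimal sketch with illustrative data:

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, mine, total;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    mine = rank + 1;                      /* illustrative contribution */

    /* Like MPI_Reduce, but every process receives the sum (no root argument). */
    MPI_Allreduce(&mine, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```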
3. Performance Issues

3.1 Example: Broadcast to 8 Processors

N = number of processors, p = size of message. In the figures, a solid line marks a data transfer and a dotted line marks a carry-over from the previous transfer.

[Figure: Simple Approach]
Number of steps: N-1
Amount of data transferred: (N-1)*p

[Figure: Better Approach]
Number of steps: log2(N)
Amount of data transferred: (N-1)*p

3.2 Example: Scatter to 8 Processors

N = number of processors, p = size of message. Solid and dotted lines as above.

[Figure: Simple Approach]
Number of steps: N-1
Amount of data transferred: (N-1)*p

[Figure: Better Approach]
Number of steps: log2(N)
Amount of data transferred: log2(N)*N*p/2
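As a worked check of these formulas, take N = 8 and message size p: the simple broadcast takes 7 steps and the better approach only log2(8) = 3 steps, with both transferring 7p of data in total. For scatter, the simple approach takes 7 steps and moves 7p of data, while the better approach finishes in 3 steps but moves log2(8)*8*p/2 = 12p, trading some extra data movement for far fewer steps.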
References

Book: Using MPI: Portable Parallel Programming with the Message-Passing Interface, by William Gropp, Ewing Lusk, and Anthony Skjellum. MIT Press, 1994, 328 pages.

World Wide Web
Lab exercises for MPI Collective Communication I
Please complete this short evaluation form. Thank you!