
Foil 8: Collective Communication

From CPS615: Completion of MPI foilset and Application to Jacobi Iteration in 2D, Delivered Lectures of CPS615 Basic Simulation Track for Computational Science -- 7 November 96, by Geoffrey C. Fox
MPI_BARRIER(comm): global synchronization -- no process in the given communicator proceeds until all have reached the barrier
MPI_BCAST: global broadcast of data from one root process to every process in a communicator (first sketch below)
MPI_GATHER: concatenates data from all processes in a communicator into one root process (first sketch below)
  • MPI_ALLGATHER places the result of the concatenation in all processes
MPI_SCATTER: takes data from one root process and distributes a distinct piece to each process in a communicator (second sketch below)
MPI_ALLTOALL: every process sends a distinct piece of data to every other process
MPI_SENDRECV: exchanges data between two processes -- often used to implement "shifts"  (third sketch below)
  • viewed as pure point to point by some, since it involves only two processes rather than the whole communicator
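
The first sketch below is a minimal C illustration, not taken from the lecture, of MPI_BARRIER, MPI_BCAST and MPI_GATHER together: the root broadcasts a parameter, every process computes a partial result, and the root concatenates the results. The variable names (n, partial, results) are illustrative.

    /* Illustrative sketch: broadcast a parameter, gather one result per process */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, n = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) n = 100;               /* parameter known only at the root */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

        int partial = rank * n;               /* each process's contribution */

        /* Global synchronization: no process passes until all arrive */
        MPI_Barrier(MPI_COMM_WORLD);

        int *results = NULL;
        if (rank == 0)
            results = malloc(size * sizeof(int));

        /* Concatenate one int from every process into results[] on rank 0 */
        MPI_Gather(&partial, 1, MPI_INT, results, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            for (int i = 0; i < size; i++)
                printf("contribution from rank %d: %d\n", i, results[i]);
            free(results);
        }

        MPI_Finalize();
        return 0;
    }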
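
A second sketch, again illustrative rather than from the lecture, shows MPI_SCATTER handing each process one equal-sized piece of an array owned by the root. MPI_ALLTOALL follows the same buffer conventions, but every process both sends and receives a distinct piece from every other process.

    /* Illustrative sketch: rank 0 scatters one element to each process */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *full = NULL;
        if (rank == 0) {                      /* only the root owns the full array */
            full = malloc(size * sizeof(int));
            for (int i = 0; i < size; i++)
                full[i] = 10 * i;
        }

        int chunk;                            /* one element lands on each process */
        MPI_Scatter(full, 1, MPI_INT, &chunk, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("rank %d got %d\n", rank, chunk);

        if (rank == 0) free(full);
        MPI_Finalize();
        return 0;
    }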
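
The third sketch illustrates the "shift" idiom mentioned above: MPI_SENDRECV sends to the right-hand neighbour and receives from the left-hand neighbour in a single deadlock-free call, the exchange pattern used for boundary values in the Jacobi iteration this lecture applies MPI to. The ring topology and variable names are assumptions for illustration.

    /* Illustrative sketch: circular shift of one value around a ring */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;        /* neighbour to send to      */
        int left  = (rank - 1 + size) % size; /* neighbour to receive from */

        double mine = (double)rank;           /* this process's boundary value */
        double from_left;

        /* Send right and receive from the left in one deadlock-free call */
        MPI_Sendrecv(&mine, 1, MPI_DOUBLE, right, 0,
                     &from_left, 1, MPI_DOUBLE, left, 0,
                     MPI_COMM_WORLD, &status);

        printf("rank %d received %.0f from rank %d\n", rank, from_left, left);

        MPI_Finalize();
        return 0;
    }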


