HTML version of Scripted Foils prepared 11 November 1996

Foil 10 Collective Computation

From CPS615: Completion of MPI Foilset and Application to Jacobi Iteration in 2D. Delivered Lectures of the CPS615 Basic Simulation Track for Computational Science -- 7 November 1996, by Geoffrey C. Fox
Secs 315.3
1 One can often perform computation as part of a collective communication
2 MPI_REDUCE performs reduction operation of type chosen from
  • maximum(value or value and location), minimum(value or value and location), sum, product, logical and/or/xor, bit-wise and/or/xor
  • e.g., the operation labelled MPI_MAX stores, in the location result on the processor of rank root, the global maximum of the original values held on each processor, as in
  • call MPI_REDUCE(original,result,1,MPI_REAL,MPI_MAX,root,comm,ierror)
  • One can also supply one's own reduction function
3 MPI_ALLREDUCE is like MPI_REDUCE but stores the result in all processors, not just one
4 MPI_SCAN performs reductions where the result for processor r depends on the data in processors 0 through r (an inclusive prefix reduction)

© Northeast Parallel Architectures Center, Syracuse University, npac@npac.syr.edu


Page produced by wwwfoil on Fri Aug 15 1997