Cornell Theory Center


Presentation:
MPI Collective Communication II

10/98

This is the presentation layer of a two-part module. For an explanation of the layers and how to navigate within and between them, return to the top page of this module.


Table of Contents

  1. Advanced features
  2. Scatter vs. scatterv
    2.1 Scatter
    2.2 Scatterv
  3. MPI_Scatterv syntax
  4. MPI_Gatherv syntax
  5. When might these calls be useful?
    5.1 Example Problem
    5.2 Example Solution




1. Advanced features

MPI_Scatterv, MPI_Gatherv, MPI_Allgatherv, MPI_Alltoallv

What does the "v" stand for?
varying -- the size and relative location of each message may vary

Examples for discussion: MPI_Gatherv and MPI_Scatterv

Advantages
  • more flexibility in writing code
  • less need to copy data into temporary buffers
  • more compact final code
  • vendor's implementation may be optimal
    (if not, may be trading performance for convenience)


    2. Scatter vs. Scatterv

    2.1 Scatter

    Purpose of scatter operation:
  • to send different chunks of data to different processes
  • not the same as broadcast (same chunk goes to all)
  • Scatter requires contiguous source data and a uniform message size
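The data movement above can be sketched in plain Python (no MPI involved; the helper name `scatter_equal` is just for illustration, not an MPI routine):

```python
def scatter_equal(sendbuf, nproc):
    """Split a contiguous buffer into nproc equal, contiguous chunks,
    one per process -- the restriction plain scatter imposes."""
    chunk = len(sendbuf) // nproc          # uniform message size required
    return [sendbuf[i * chunk:(i + 1) * chunk] for i in range(nproc)]

sendbuf = [10, 20, 30, 40, 50, 60]
print(scatter_equal(sendbuf, 3))           # [[10, 20], [30, 40], [50, 60]]
```

Process I receives chunk I; unlike broadcast, each process sees a different piece of the buffer.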


    2.2 Scatterv

    Extra capabilities in scatterv:
  • gaps allowed between messages in source data
    (but individual messages must still be contiguous)
  • irregular message sizes allowed
  • data can be distributed to processes in any order
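The three extra capabilities can be sketched the same way in plain Python (again no MPI; `scatterv` here is an illustrative helper, not the real call):

```python
def scatterv(sendbuf, sendcounts, displs):
    """Chunk for process i = sendbuf[displs[i] : displs[i] + sendcounts[i]]."""
    return [sendbuf[d:d + c] for c, d in zip(sendcounts, displs)]

# Irregular sizes, a gap (element 3 is never sent), and out-of-order
# placement of the chunks within the source buffer:
buf = [0, 1, 2, 3, 4, 5, 6]
print(scatterv(buf, sendcounts=[3, 1, 2], displs=[4, 0, 1]))
# [[4, 5, 6], [0], [1, 2]]
```

Each individual chunk is still a contiguous slice of the buffer; only the collection of chunks may have gaps, varying sizes, and arbitrary order.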


    3. MPI_Scatterv syntax

    INTEGER SENDCOUNTS(0:NPROC-1), DISPLS(0:NPROC-1)
    ...
    CALL MPI_SCATTERV
    ( SENDBUF, SENDCOUNTS, DISPLS, SENDTYPE,
    RECVBUF, RECVCOUNT, RECVTYPE,
    ROOT, COMM, IERROR )

    SENDCOUNTS(I) is the number of items of type SENDTYPE
    to send from process ROOT to process I. Defined on ROOT.

    DISPLS(I) is the displacement from SENDBUF to the
    beginning of the I-th message, in units of SENDTYPE. Defined on ROOT.

    C syntax and further details are given in MPI Collective Communication



    4. MPI_Gatherv syntax

    Gather and gatherv are the exact inverses of scatter and scatterv, respectively

    Simply reverse the direction of the arrows in the previous figures

    INTEGER RECVCOUNTS(0:NPROC-1), DISPLS(0:NPROC-1)
    ...
    CALL MPI_GATHERV
    ( SENDBUF, SENDCOUNT, SENDTYPE,
    RECVBUF, RECVCOUNTS, DISPLS, RECVTYPE,
    ROOT, COMM, IERROR )

    RECVCOUNTS(I) and DISPLS(I) indicate where data
    arriving from process I is to be placed on ROOT, relative to RECVBUF.
    Defined on ROOT.
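The placement rule can be sketched in plain Python (no MPI; `gatherv` is an illustrative helper, not the real call):

```python
def gatherv(chunks, recvcounts, displs, bufsize):
    """Place the chunk from process i at recvbuf[displs[i] : displs[i] + recvcounts[i]]."""
    recvbuf = [None] * bufsize
    for i, chunk in enumerate(chunks):
        assert len(chunk) == recvcounts[i]   # counts must match the arriving data
        recvbuf[displs[i]:displs[i] + recvcounts[i]] = chunk
    return recvbuf

# Three processes contribute 2, 1, and 3 items; ROOT assembles them in order:
print(gatherv([[1, 2], [3], [4, 5, 6]], [2, 1, 3], [0, 2, 3], 6))
# [1, 2, 3, 4, 5, 6]
```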



    5. When might these calls be useful?

    5.1 Example Problem

    Example:
    Process 0 reads the initial data and distributes it to
    other processes, according to the domain decomposition.



    Problem:
    What if the number of points can't be divided evenly?


    5.2 Example Solution

    Solution: Use MPI_Scatterv with arrays as follows...
    (Assume NPTS is not evenly divisible by NPROCS)

    NMIN = NPTS/NPROCS          ! minimum points per process
    NEXTRA = MOD(NPTS,NPROCS)   ! leftover points to distribute
    K = 0
    DO I = 0, NPROCS-1
      IF (I .LT. NEXTRA) THEN
        SENDCOUNTS(I) = NMIN + 1   ! first NEXTRA processes get one extra point
      ELSE
        SENDCOUNTS(I) = NMIN
      END IF
      DISPLS(I) = K                ! chunks are contiguous, in rank order
      K = K + SENDCOUNTS(I)
    END DO
    CALL MPI_SCATTERV(SENDBUF, SENDCOUNTS, DISPLS ...)
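The same count/displacement bookkeeping in plain Python (the helper name `uneven_counts` is hypothetical, not part of MPI), with a concrete check for NPTS=10, NPROCS=4:

```python
def uneven_counts(npts, nprocs):
    """Build SENDCOUNTS and DISPLS for an uneven scatterv distribution:
    the first (npts mod nprocs) processes each get one extra point."""
    nmin, nextra = divmod(npts, nprocs)
    sendcounts, displs, k = [], [], 0
    for i in range(nprocs):
        sendcounts.append(nmin + 1 if i < nextra else nmin)
        displs.append(k)            # chunks are contiguous, in rank order
        k += sendcounts[i]
    return sendcounts, displs

print(uneven_counts(10, 4))   # ([3, 3, 2, 2], [0, 3, 6, 8])
```

Every point is sent exactly once: the counts sum to NPTS, and each displacement is the running total of the counts before it.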



    References

    Message Passing Interface Forum (June 1995). MPI: A Message-Passing Interface Standard.

    CTC's MPI Documentation

    Example programs: a sampling of programs available on the Web that illustrate commands covered in this module.

    Frequently Asked Questions

    Lab exercises for MPI Collective Communication II



    URL http://www.tc.cornell.edu/Edu/Talks/MPI/Collective.II/less.html