Find this at http://www.npac.syr.edu/users/gcf/cps615mpi98/

MPI Message Passing Interface

Given by Geoffrey C. Fox and Nancy McCracken in Computational Science for Simulations, Fall Semester 1998. Foils prepared 19 September 1998

This covers MPI from a user's point of view and is meant to supplement other online resources: the MPI Forum documents, David Walker's tutorial, Ian Foster's book "Designing and Building Parallel Programs", and Gropp, Lusk, and Skjellum's "Using MPI"
The overview is based on a subset of six routines that cover send/receive, environment inquiry (for the rank of a process and the total number of processes), and initialization and finalization, with simple examples; a minimal sketch in C follows this abstract
Processor Groups, Collective Communication and Computation, Topologies, and Derived Datatypes are also discussed
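
As a concrete illustration of the six-routine subset, here is a minimal sketch in C (not taken from the foils themselves; the tag value 99, the 64-byte buffer, and the greeting text are illustrative choices). Every process calls MPI_Init, MPI_Comm_rank, MPI_Comm_size, and MPI_Finalize; each nonzero rank sends a greeting to rank 0 with the blocking MPI_Send, and rank 0 collects the greetings with the blocking MPI_Recv.

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    char msg[64];
    MPI_Status status;

    MPI_Init(&argc, &argv);                /* 1: initialize the MPI environment   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* 2: inquiry: my rank in COMM_WORLD   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* 3: inquiry: total number of procs   */

    if (rank != 0) {
        sprintf(msg, "Hello from process %d of %d", rank, size);
        /* 4: blocking send of the string (plus its terminator) to rank 0 */
        MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 0, 99, MPI_COMM_WORLD);
    } else {
        int src;
        printf("Hello from process 0 of %d\n", size);
        for (src = 1; src < size; src++) {
            /* 5: blocking receive of one greeting from each other rank */
            MPI_Recv(msg, 64, MPI_CHAR, src, 99, MPI_COMM_WORLD, &status);
            printf("%s\n", msg);
        }
    }

    MPI_Finalize();                        /* 6: shut down the MPI environment    */
    return 0;
}

With a typical MPI installation this compiles and runs with commands along the lines of mpicc hello.c -o hello and mpirun -np 4 hello, though compiler-wrapper and launcher names vary by implementation.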


Table of Contents for MPI Message Passing Interface


001 CPS615 Introduction to Computational Science: The Message Passing Interface MPI
002 Abstract of MPI Presentation
003 MPI Overview -- Comparison with HPF -- I
004 MPI Overview -- Comparison with HPF -- II
005 Some Key Features of MPI
006 What is MPI?
007 History of MPI
008 Who Designed MPI?
009 Some Difficulties with MPI
010 Sending/Receiving Messages:  Issues
011 What Gets Sent: The Buffer
012 Generalizing the Buffer in MPI
013 Advantages of Datatypes
014 To Whom It Gets Sent:  Process Identifiers
015 Generalizing the Process Identifier in MPI
016 Why use Process Groups?
017 How It Is Identified:  Message Tags
018 Sample Program using Library
019 Correct Library Execution
020 Incorrect Library Execution
021 What Happened?
022 Solution to the Tag Problem
023 MPI Conventions
024 Standard Constants in MPI
025 The Six Fundamental MPI routines
026 MPI_Init -- Environment Management
027 MPI_Comm_rank -- Environment Inquiry
028 MPI_Comm_size -- Environment Inquiry
029 MPI_Finalize -- Environment Management
030 Hello World in C plus MPI
031 Comments on Parallel Input/Output - I
032 Comments on Parallel Input/Output - II
033 Blocking Send: MPI_Send(C)  or MPI_SEND(Fortran)
034 Example MPI_SEND in Fortran
035 Blocking Receive: MPI_RECV(Fortran)
036 Blocking Receive:  MPI_Recv(C)
037 Fortran example: Receive
038 Hello World: C Example of Send and Receive
039 Hello World, continued
040 Interpretation of Returned Message Status
041 Collective Communication
042 Some Collective Communication Operations
043 Hello World: C Example of Broadcast
044 Collective Computation
045 Examples of Collective Communication/Computation
046 Collective Computation Patterns
047 More Examples of Collective Communication/Computation
048 Data Movement (1)
049 Examples of MPI_ALLTOALL
050 Data Movement (2)
051 List of Collective Routines
052 Example Fortran: Performing a Sum
053 Example C:  Computing Pi
054 Pi Example continued
055 Buffering Issues
056 Avoiding Buffering Costs
057 Combining Blocking and Send Modes
058 Cartesian Topologies
059 Defining a Cartesian Topology
060 MPI_Cart_coords or Who am I?
061 Who are my neighbors?
062 Periodic meshes
063 Motivation for Derived Datatypes in MPI
064 Derived Datatype Basics
065 Simple Example of Derived Datatype
066 Derived Datatypes:  Vectors
067 Example of Vector type
068 Why is this interesting?
069 Use of Derived Types in Jacobi Iteration
070 Derived Datatypes:  Indexed
071 Designing MPI Programs
072 Jacobi Iteration: The Problem
073 Jacobi Iteration:  MPI Program Design
074 Jacobi Iteration:  MPI Program Design
075 Jacobi Iteration: Fortran MPI Program
076 Jacobi Iteration:  create topology
077 Jacobi Iteration:  data structures
078 Jacobi Iteration: send guard values
079 Jacobi Iteration:  update and error
080 The MPI Timer
081 MPI-2
082 I/O included in MPI-2


© Northeast Parallel Architectures Center, Syracuse University, npac@npac.syr.edu
