Cornell Theory Center
Presentation:
Basics of MPI Programming
10/98
Table of Contents
1. Overview
2. MPI programs
3. MPI messages
4. Communicators
5. Summary
References
Lab Exercises
1. Overview
1.1 What Is MPI?
- Message Passing Interface standard
- The first standard, portable message-passing library to offer good performance
- "Standard" by consensus of MPI Forum participants from over 40 organizations
- Finished and published in May 1994; updated in June 1995
- MPI-2 was completed in July 1997; it extends (but does not change) MPI-1
- The documents that define MPI are available on the World Wide Web at Argonne National Lab
1.2 What does MPI offer?
- Standardization - on many levels
- Portability - to existing and new systems
- Performance - comparable to vendors' proprietary libraries
- Richness - extensive functionality, many quality implementations
1.3 MPI at the Cornell Theory Center
1.4 How to Use MPI
- When possible, start with a debugged serial version
- Design parallel algorithm
- Write code, making calls to MPI library
- Compile and run using IBM's Parallel Environment
(covered in the IBM SP Parallel Environment module)
- Run with a few nodes first, then increase the number gradually
2. MPI programs
2.1 Format of MPI routines
- C bindings
- rc = MPI_Xxxxx(parameter, ... )
- rc is the error code; it equals MPI_SUCCESS on success
- Fortran bindings
- call MPI_XXXXX(parameter,..., ierror)
- case insensitive
- ierror is the error code; it equals MPI_SUCCESS on success
- Exception: the timing functions MPI_WTIME and MPI_WTICK return double-precision values directly
- Header file required
- #include "mpi.h" for C programs
- include 'mpif.h' for Fortran programs
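As a small illustration (a sketch, not from the original module), a C program that includes the header and tests the return code might look like this:

    #include <stdio.h>
    #include <stdlib.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        /* Every MPI call in C returns an error code */
        int rc = MPI_Init(&argc, &argv);
        if (rc != MPI_SUCCESS) {
            fprintf(stderr, "MPI_Init failed\n");
            exit(1);
        }
        MPI_Finalize();
        return 0;
    }

Note that MPI's default error handler aborts the job when a call fails, so explicit checks like this are mostly illustrative.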
2.2 MPI routines
- MPI has over 125 functions, but you can do much with just 6 (C prototypes for all six appear after this list):
- Initialize for communications
- MPI_INIT: initializes the MPI environment
- MPI_COMM_SIZE: returns the number of processes
- MPI_COMM_RANK: returns this process's number (rank)
- Communicate to share data between processes
- MPI_SEND: sends a message
- MPI_RECV: receives a message
- Exit in a "clean" fashion when done communicating
- MPI_FINALIZE: terminates the MPI environment
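For reference, these are the C prototypes of the six routines as declared in mpi.h (MPI-1 bindings):

    int MPI_Init(int *argc, char ***argv);
    int MPI_Comm_size(MPI_Comm comm, int *size);
    int MPI_Comm_rank(MPI_Comm comm, int *rank);
    int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm);
    int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
                 int source, int tag, MPI_Comm comm, MPI_Status *status);
    int MPI_Finalize(void);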
2.3 An MPI sample program (Fortran)
Shows 6 basic calls
A C version appears after the Fortran listing below.
      program hello
      include 'mpif.h'
      integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)
      character(12) message

      call MPI_INIT(ierror)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
      tag = 100

c     Process 0 sends the message to every other process
      if (rank .eq. 0) then
         message = 'Hello, world'
         do i = 1, size-1
            call MPI_SEND(message, 12, MPI_CHARACTER, i, tag,
     &                    MPI_COMM_WORLD, ierror)
         enddo
      else
         call MPI_RECV(message, 12, MPI_CHARACTER, 0, tag,
     &                 MPI_COMM_WORLD, status, ierror)
      endif

      print*, 'node', rank, ':', message
      call MPI_FINALIZE(ierror)
      end
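The C version was originally a separate linked page; a reconstruction of the same program in C (a sketch, not the original file) follows:

    #include <stdio.h>
    #include <string.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int rank, size, i, tag = 100;
        char message[13];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            strcpy(message, "Hello, world");
            /* Process 0 sends the message to every other process */
            for (i = 1; i < size; i++)
                MPI_Send(message, 13, MPI_CHAR, i, tag, MPI_COMM_WORLD);
        } else {
            MPI_Recv(message, 13, MPI_CHAR, 0, tag, MPI_COMM_WORLD, &status);
        }
        printf("node %d: %s\n", rank, message);

        MPI_Finalize();
        return 0;
    }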
3. MPI messages
Message = data + envelope
call MPI_SEND(startbuf, count, datatype, dest, tag, comm, ierror)
              \-------- DATA ---------/  \-- ENVELOPE -/
3.1 Data
Arguments
- startbuf: address where the data start
- count: number of elements (not bytes) to send or receive
- datatype: the type of each element, e.g. MPI_INTEGER, MPI_CHARACTER
3.2 Envelope
Arguments
- Destination or source
- a rank in a communicator
- on a receive: must equal the sender's rank, or be MPI_ANY_SOURCE
- Tag
- an integer chosen by the programmer
- on a receive: must equal the tag used on the send, or be MPI_ANY_TAG
- Communicator
- defines the communication "space"
- the receive communicator must equal the send communicator
- Analogy: bill collection agency
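As an illustration (a minimal sketch, not part of the original module), the following C program exercises the envelope arguments: rank 0 sends with an explicit destination and tag, and rank 1 receives with the wildcards and then reads the actual source and tag from the status object:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int rank, data = 42;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Envelope: destination 1, tag 7, communicator MPI_COMM_WORLD */
            MPI_Send(&data, 1, MPI_INT, 1, 7, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Wildcards relax the envelope match on source and tag */
            MPI_Recv(&data, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            /* The status object reveals the actual source and tag */
            printf("got %d from rank %d with tag %d\n",
                   data, status.MPI_SOURCE, status.MPI_TAG);
        }

        MPI_Finalize();
        return 0;
    }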
4. Communicators
4.1 Why have communicators?
- If you are writing a library, can you choose safe tags?
- Example: without communicators, a library's messages and the user's messages can match on the same tags, giving variable (and possibly incorrect) behavior
4.2 Communicators and process groups
- Programmers can define a process group
- And define new communicator(s) for the process group (see the sketch after this list)
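As a sketch of this idea (an illustration using the standard MPI_COMM_SPLIT routine; this example is not from the original module), the following C program splits MPI_COMM_WORLD into two new communicators, one for even ranks and one for odd ranks:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int world_rank, sub_rank, color;
        MPI_Comm subcomm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* Processes that pass the same color join the same new
           communicator; here, even and odd ranks form two groups */
        color = world_rank % 2;

        /* The key argument orders ranks within each new communicator */
        MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

        MPI_Comm_rank(subcomm, &sub_rank);
        printf("world rank %d has rank %d in subcommunicator %d\n",
               world_rank, sub_rank, color);

        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }

Messages sent within one subcommunicator can never be matched by receives in the other, which is exactly the isolation a library needs.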
5. Summary
- MPI program: 6 basic calls
- MPI_INIT
- MPI_COMM_RANK
- MPI_COMM_SIZE
- MPI_SEND
- MPI_RECV
- MPI_FINALIZE
- MPI messages
- data (startbuf, count, datatype)
- envelope (destination/source, tag, communicator)
- Communicators
References
Message Passing Interface Forum (1995). MPI: A Message Passing Interface Standard. June 12, 1995. Available online in HTML and PostScript.
IBM's manual for its implementation of MPI is available online.
CTC's MPI Documentation
Lab exercises for MPI Basics