Given by Geoffrey C. Fox at Delivered Lectures of CPS615 Basic Simulation Track for Computational Science on 31 October 96. Foils prepared 11 November 1996
Summary of Material
This covers MPI from a user's point of view and is meant to be a supplement to other online resources: the MPI Forum, David Walker's Tutorial, Ian Foster's "Designing and Building Parallel Programs", and Gropp, Lusk and Skjellum's "Using MPI" |
An Overview is based on a subset of six routines covering send/receive, environment inquiry (for rank and total number of processors), initialization and finalization, with simple examples |
Processor Groups, Collective Communication and Computation and Derived Datatypes are also discussed |
Geoffrey Fox |
NPAC |
Room 3-131 CST |
111 College Place |
Syracuse NY 13244-4100 |
MPI collected ideas from many previous message passing systems and put them into a "standard" so we could write portable (runs on all current machines) and scalable (runs on future machines we can think of) parallel software |
MPI was agreed in May 1994 after a process that began with a workshop in April 1992 |
MPI plays the same role for message passing systems that HPF does for data parallel languages |
BUT whereas MPI has essentially all one could want -- as message passing is fully understood -- HPF will still evolve, since many data parallel compiler issues remain unsolved |
HPF runs on SIMD and MIMD machines and is high level as it expresses a style of programming or problem architecture |
MPI runs on MIMD machines (in principle it could run on SIMD machines but this would be unnatural and inefficient) -- it expresses a machine architecture |
The traditional software model compiles a high level language down to a machine language, so in this analogy MPI is the universal "machine language" of parallel processing |
Point to Point Message Passing |
Collective Communication -- messages between >2 simultaneous processes |
Support for Process Groups -- messaging in subsets of processors |
Support for communication contexts -- a general specification of message labels, ensuring they are unique to a set of routines as in a precompiled library |
Support for application (virtual) topologies analogous to distribution types in HPF |
Inquiry routines to find out about environment such as number of processors |
The full "kitchen sink" has 129 functions and each has many arguments |
It is not a complete operating environment and does not have the ability to create and spawn processes etc. |
PVM is the previous dominant approach |
MPI lies outside the distributed computing world of HTTP and the Web, ATM protocols and systems like ISIS from Cornell |
However it does look as though MPI is being adopted as the general messaging system by parallel computer vendors |
We find a good example when we consider a typical matrix algorithm (matrix multiplication) |
A(i,j) = Σ_k B(i,k) C(k,j), summed over k -- for each k this pairs the k'th column of B with the k'th row of C |
Consider a square decomposition of 16 by 16 matrices B and C as for Laplace's equation. (Good Choice) |
Each operation involves a subset (group) of 4 processors |
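As a hedged C sketch (not from the original foils), such subgroups could be formed with MPI_Comm_split, assuming 16 processes arranged as a 4 by 4 grid with each grid row becoming a group of 4 processors: |

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, row, row_rank;
    MPI_Comm row_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* assumed: 16 processes forming a 4 by 4 grid */

    row = rank / 4;                        /* grid row of this process (assumed layout) */
    /* processes sharing a grid row form one subgroup (communicator) of 4 */
    MPI_Comm_split(MPI_COMM_WORLD, row, rank, &row_comm);
    MPI_Comm_rank(row_comm, &row_rank);

    printf("World rank %d is rank %d in row group %d\n", rank, row_rank, row);

    MPI_Comm_free(&row_comm);
    MPI_Finalize();
    return 0;
}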
All MPI routines are prefixed by MPI_ |
MPI constants are in upper case, as in the MPI datatype MPI_FLOAT for a floating point number in C |
Specify overall constants with the appropriate include file -- mpi.h for C or mpif.h for Fortran |
C routines are actually integer functions and always return a status (error) code |
Fortran routines are really subroutines and return the status (error) code as an extra argument |
There is a set of predefined constants in the include files for each language and these include: |
MPI_SUCCESS -- successful return code |
MPI_COMM_WORLD (everything) and MPI_COMM_SELF (the current process) are predefined reserved communicators in C and Fortran |
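As a hedged C sketch (not from the original foils) of the return code convention above, the integer result of each C routine can be compared with MPI_SUCCESS: |

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    /* every MPI routine in C returns an integer status code */
    int rc = MPI_Init(&argc, &argv);
    if (rc != MPI_SUCCESS) {               /* note: by default MPI aborts on error anyway */
        fprintf(stderr, "MPI_Init failed with code %d\n", rc);
        exit(1);
    }
    MPI_Finalize();
    return 0;
}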
Fortran elementary datatypes are: |
MPI_INTEGER, MPI_REAL, MPI_DOUBLE_PRECISION, MPI_COMPLEX, MPI_DOUBLE_COMPLEX, MPI_LOGICAL, MPI_CHARACTER, MPI_BYTE, MPI_PACKED |
C elementary datatypes are: |
MPI_CHAR, MPI_SHORT, MPI_INT, MPI_LONG, MPI_UNSIGNED_CHAR, MPI_UNSIGNED_SHORT, MPI_UNSIGNED, MPI_UNSIGNED_LONG, MPI_FLOAT, MPI_DOUBLE, MPI_LONG_DOUBLE, MPI_BYTE, MPI_PACKED |
call MPI_INIT(mpierr) -- initialize |
call MPI_COMM_RANK (comm,rank,mpierr) -- find processor label (rank) in group |
call MPI_COMM_SIZE(comm,size,mpierr) -- find total number of processors |
call MPI_SEND (sndbuf,count,datatype,dest,tag,comm,mpierr) -- send a message |
call MPI_RECV (recvbuf,count,datatype,source,tag,comm,status,mpierr) -- receive a message |
call MPI_FINALIZE(mpierr) -- End Up |
This MUST be called to set up MPI before any other MPI routines may be called |
For C: int MPI_Init(int *argc, char **argv[]) |
For Fortran: call MPI_INIT(mpierr) |
This allows you to identify each processor by a unique integer called the rank which runs from 0 to N-1 where there are N processors |
If we divide the region 0 to 1 by domain decomposition into N parts, the processor with rank r controls the subregion from r/N to (r+1)/N |
For C: int MPI_Comm_rank(MPI_Comm comm, int *rank) |
For Fortran: call MPI_COMM_RANK (comm,rank,mpierr) |
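As a hedged C sketch (not from the original foils) of the domain decomposition above, each process can compute its own subregion of [0,1] from its rank and the total number of processes: |

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    double left, right;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* the processor with rank r controls the subregion [r/N, (r+1)/N) of [0,1] */
    left  = (double) rank / size;
    right = (double) (rank + 1) / size;
    printf("Process %d of %d owns [%f, %f)\n", rank, size, left, right);

    MPI_Finalize();
    return 0;
}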
This returns in the integer size the number of processes in the given communicator comm (remember this specifies the processor group) |
For C: int MPI_Comm_size(MPI_Comm comm,int *size) |
For Fortran: call MPI_COMM_SIZE (comm,size,mpierr) |
Before an MPI application exits, it is courteous to clean up the MPI state, and MPI_FINALIZE does this. No MPI routine may be called in a given process after that process has called MPI_FINALIZE |
For C: int MPI_Finalize() |
For Fortran: call MPI_FINALIZE(mpierr) |
#include <stdio.h>
#include <mpi.h>
int main(int argc, char *argv[])
{   /* minimal example body using the basic MPI routines */
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
Parallel I/O has technical issues -- how best to optimize access to a file whose contents may be stored on N different disks which can deliver data in parallel -- and |
semantic issues -- what does printf in C (and PRINT in Fortran) mean? |
The meaning of printf/PRINT is both undefined and changing |
Today, memory costs have declined and ALL mainstream MIMD distributed memory machines, whether clusters of workstations or integrated systems such as the T3D, Paragon or SP-2, have enough memory on each node to run UNIX |
Thus printf today typically means that the node on which it runs will write the output to the "standard output" file for that node |
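A common idiom (a hedged C sketch, not from the original foils) is to guard ordinary output so that only one process writes, making the meaning unambiguous: |

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* only the rank-0 process writes to standard output */
    if (rank == 0)
        printf("This line is printed exactly once\n");

    MPI_Finalize();
    return 0;
}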
New MPI-IO initiative will link I/O to MPI in a standard fashion |
Data is propagated between processors via messages, which can be divided into packets, but at the MPI level we see only logically complete messages |
The building block is Point to Point Communication with one processor sending information and one other receiving it |
Collective communication involves more than one message |
Collective Communication can ALWAYS be implemented in terms of elementary point to point communications but is provided for convenience and because the system can often implement it more efficiently |
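As a hedged C illustration of this point (not from the original foils), a broadcast from process 0 can be written with elementary sends and receives, although the collective MPI_BCAST would normally be used instead: |

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, dest, value = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        value = 42;                        /* data to broadcast (arbitrary example value) */
        /* naive broadcast: process 0 sends to every other process in turn */
        for (dest = 1; dest < size; dest++)
            MPI_Send(&value, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
    } else {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    }

    /* the equivalent collective call, usually simpler and faster:  */
    /* MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);            */

    printf("Process %d has value %d\n", rank, value);
    MPI_Finalize();
    return 0;
}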
Communication is between two processors and the receiving process must expect a message, although it can be uncertain as to the message type and sending process |
Information required to specify a message includes the data buffer (start address, count and datatype), the source or destination rank, the message tag and the communicator |
Two types of communication operations are applicable to send and receive -- blocking and non-blocking (see the sketch below) |
In addition there are four types of send operation -- standard, buffered, synchronous and ready |
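As a hedged C sketch (not from the original foils) of the non-blocking variants just mentioned, a non-blocking transfer is posted with MPI_ISEND/MPI_IRECV and completed with MPI_WAIT; the blocking MPI_SEND and MPI_RECV calls are detailed next: |

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, outgoing = 7, incoming = 0;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* assumed: run with exactly 2 processes */

    if (rank == 0) {
        /* post a non-blocking send, could do other work, then wait for completion */
        MPI_Isend(&outgoing, 1, MPI_INT, 1, 99, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, &status);
    } else if (rank == 1) {
        /* post a non-blocking receive and wait for the message to arrive */
        MPI_Irecv(&incoming, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, &status);
        printf("Process 1 received %d\n", incoming);
    }

    MPI_Finalize();
    return 0;
}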
Fortran example: |
call MPI_SEND (sndbuf,count,datatype,dest,tag,comm,mpierr) |
where sndbuf is the address of the data to send, count the number of items, datatype their MPI datatype, dest the rank of the destination process, tag the user-chosen message label, comm the communicator and mpierr the returned status (error) code |
call MPI_RECV (recvbuf,count,datatype,source,tag,comm,status,mpierr) -- the Fortran form, where recvbuf and count give the receive buffer and its maximum length, datatype is the expected MPI datatype, source and tag select which messages to accept (the wildcards MPI_ANY_SOURCE and MPI_ANY_TAG match any sender or label), comm is the communicator, status the returned status object and mpierr the returned status (error) code |
Note that return_status is used after completion of the receive to find the actual received length (buffer_len is a MAXIMUM), the actual source processor rank and the actual message tag |
In C the syntax is |
int error_code = MPI_Recv(void *start_of_buffer, int buffer_len, MPI_Datatype datatype, int source_rank, int tag, MPI_Comm communicator, MPI_Status *return_status) |
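As a hedged, self-contained C sketch (not from the original foils; the message size, tag and buffer names are chosen arbitrarily) of the blocking send/receive pair and of examining the returned status for the actual length, source and tag: |

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, i, received_count;
    float data[10], buffer[100];           /* receive buffer larger than the message */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* assumed: run with exactly 2 processes */

    if (rank == 0) {
        for (i = 0; i < 10; i++)
            data[i] = (float) i;
        /* send 10 floats to process 1 with tag 17 */
        MPI_Send(data, 10, MPI_FLOAT, 1, 17, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive into a 100-element buffer from any source, with any tag */
        MPI_Recv(buffer, 100, MPI_FLOAT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        /* the status object gives the actual length, source and tag */
        MPI_Get_count(&status, MPI_FLOAT, &received_count);
        printf("Got %d floats from rank %d with tag %d\n",
               received_count, status.MPI_SOURCE, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}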