Data is propagated between processors via messages, which may be divided into packets at lower levels of the network, but at the MPI level we see only logically single, complete messages
|
The building block is Point to Point Communication, with one processor sending information and one other receiving it (sketched below)
|
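As an illustrative sketch only (not from the original slides), the following C program shows the simplest point to point exchange: rank 0 sends a single integer to rank 1 with MPI_Send, and rank 1 receives it with MPI_Recv. The value 42 and the message tag 0 are arbitrary choices for the example.

/* Minimal point to point sketch: rank 0 sends one integer to rank 1.
 * Run with at least two processes, e.g. mpirun -np 2 ./a.out          */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                   /* data to propagate */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

At the application level each MPI_Send matches one MPI_Recv for a single logical message, regardless of how the network layer packetizes it.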
Collective communication involves more than one message (see the sketch after this list)
-
Broadcast from one processor to a group of other processors (called a multicast on the Internet when restricted to a group)
-
Synchronization or barrier
-
Exchange of Information between processors
-
Reduction Operation such as a Global Sum
|
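As an assumed sketch (not taken from the slides), the program below exercises three of the collectives listed above using the standard calls MPI_Bcast, MPI_Barrier, and MPI_Reduce with MPI_SUM for the global sum. Every rank in the communicator must make the same collective call.

/* Collective sketch: broadcast from rank 0, a barrier, and a global sum. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, n = 0, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)
        n = 100;                            /* value known only on the root */

    /* Broadcast: rank 0 sends n to every processor in the group            */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Synchronization: no rank proceeds until every rank has arrived       */
    MPI_Barrier(MPI_COMM_WORLD);

    /* Reduction: global sum of each rank's contribution, result on rank 0  */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("broadcast value %d, global sum of ranks %d\n", n, sum);

    MPI_Finalize();
    return 0;
}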
Collective Communication can ALWAYS be implemented in terms of elementary point to point communications but is provided
-
for user convenience and
-
often the best algorithm is hardware dependent, so the "official" MPI collective routines ought to be faster than a portable implementation built from point to point primitive routines (a naive point to point broadcast is sketched after this list).
|
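To illustrate the point above, here is a hypothetical helper (naive_bcast, a name assumed for this sketch) that builds a broadcast purely from point to point calls: the root sends the value to each of the other P-1 ranks in turn, which costs O(P) sequential messages, whereas MPI_Bcast is free to use a tree algorithm or hardware multicast.

/* Portable but naive broadcast built only from point to point calls.      */
#include <mpi.h>

void naive_bcast(int *value, int root, MPI_Comm comm)
{
    int rank, size, p;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank == root) {
        /* Root sends the value to every other rank, one message at a time */
        for (p = 0; p < size; p++)
            if (p != root)
                MPI_Send(value, 1, MPI_INT, p, 0, comm);
    } else {
        /* Every other rank receives exactly one message from the root     */
        MPI_Recv(value, 1, MPI_INT, root, 0, comm, MPI_STATUS_IGNORE);
    }
}

Replacing this loop with the single call MPI_Bcast(value, 1, MPI_INT, root, comm) gives the same result while letting the library pick the best algorithm for the hardware.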