Main Feature: Every processor in the system is
directly connected to its own memory and caches. No processor can
directly access another processor's memory.
Each node has a network interface (NI).
All communication and synchronization between processors happens
via messages passed through the NI.
Since this approach uses messages for communication and
synchronization, it is often called a message-passing
architecture.
This architecture belongs to the MIMD model (Multiple Instruction
Stream, Multiple Data Stream).
A schematic view of the distributed memory approach is shown in
the figure below, where each processor has local memory and processors
(each denoted by P) communicate through an interconnection
network. Each processor also has its own cache (denoted by $), which,
as we will see, introduces interesting coherence problems in a
shared memory environment.
Message passing is a parallel programming style typically used on
distributed-memory machines of the above type. Processors communicate
by exchanging messages. Message passing is the most popular parallel
programming paradigm, and the most widely used message-passing
libraries are MPI and PVM.
A sketch of user-level send/receive message passing abstraction is
shown below:
A data transfer from one local address space to another occurs when a
send to a particular process is matched with a receive posted by that
process.