Principles of Parallel Computing and Limitations

Principles of Parallel Computing

A central challenge: parallel programming is harder than sequential programming.


On Parallelism

Definition (due to Almasi and Gottlieb, 1989): A parallel computer is a "collection of processing elements that communicate and cooperate to solve large problems fast."

These processing elements need not form a single large, expensive parallel machine; they can also be a cluster of personal computers or workstations communicating and cooperating to tackle a specific computational problem or application.
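
To make the definition concrete, here is a minimal sketch (not from the original notes; it assumes POSIX threads and compilation with -lpthread) in which a handful of threads act as processing elements that cooperate to sum a large array, each handling one slice:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

static double data[N];
static double partial[NTHREADS];

/* Each "processing element" sums its own slice of the array. */
static void *sum_slice(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS);
    long hi = (id == NTHREADS - 1) ? N : lo + N / NTHREADS;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[id] = s;
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < N; i++)
        data[i] = 1.0;
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sum_slice, (void *)t);
    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);   /* cooperate: wait, then combine */
        total += partial[t];
    }
    printf("total = %.0f\n", total);  /* expect 1000000 */
    return 0;
}

The threads compute independently; the "communicate and cooperate" part of the definition shows up as the final join-and-combine step in main.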

There are many reasons for looking into parallelism; our focus here is on the multiprocessor level of parallelism.


What limits the performance of a parallel program?
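
One classic answer is the fraction of the program that must run serially: Amdahl's law bounds the speedup on P processors at 1 / (s + (1 - s) / P), where s is the serial fraction. A small sketch (the 10% serial fraction is an assumed value, chosen only for illustration):

#include <stdio.h>

/* Amdahl's law: with serial fraction s, speedup on p processors is
 * at most 1 / (s + (1 - s) / p), no matter how many processors we add. */
int main(void) {
    double s = 0.1;  /* assumed serial fraction: 10% */
    for (int p = 1; p <= 1024; p *= 2)
        printf("p = %4d  speedup = %6.2f\n", p, 1.0 / (s + (1.0 - s) / p));
    return 0;
}

Even at 1024 processors the speedup stays below 10; in practice, communication, synchronization, and load imbalance lower it further.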


Schematic View of a Generic Multiprocessor

The following diagram shows a collection of complete computers, each containing one or more processors (P), each processor with its own cache ($), and a memory (M), all communicating through a general-purpose, high-performance, scalable interconnect. As shown, each node also contains a communication assist (CA) that handles communication across the network.
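
As a minimal sketch of this organization (not from the original notes; it assumes POSIX fork and pipes), two processes play the role of nodes with private memory, and a pipe stands in for the interconnect that the CA would drive:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    long sum = 0;
    if (pipe(fd) != 0) { perror("pipe"); return 1; }
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {                       /* child "node": private memory */
        close(fd[0]);
        for (long i = 1; i <= 1000000; i++)
            sum += i;
        if (write(fd[1], &sum, sizeof sum) < 0)  /* send over "interconnect" */
            _exit(1);
        _exit(0);
    }
    close(fd[1]);                         /* parent "node" only reads */
    if (read(fd[0], &sum, sizeof sum) < 0) { perror("read"); return 1; }
    printf("received sum = %ld\n", sum);
    waitpid(pid, NULL, 0);
    return 0;
}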

Another schematic view of a parallel processor is shown below:

Each processor has its own first-level cache as well as a second-level cache.
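
Because each processor caches data privately, the layout of shared data matters for performance. Here is a hedged sketch (not from the original notes; it assumes POSIX threads and a typical 64-byte cache line) of avoiding "false sharing": padding keeps each thread's counter on its own cache line, so the two private caches never contend for the same line:

#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000L

/* Pad each counter to an assumed 64-byte cache line; without the pad,
 * the two counters share a line and the private caches ping-pong it
 * back and forth ("false sharing"), typically slowing the loop down. */
struct counter { long n; char pad[64 - sizeof(long)]; };
static struct counter c[2];

static void *bump(void *arg) {
    long id = (long)arg;
    for (long i = 0; i < ITERS; i++)
        c[id].n++;
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, bump, (void *)0);
    pthread_create(&t1, NULL, bump, (void *)1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("%ld %ld\n", c[0].n, c[1].n);
    return 0;
}

Timing this loop with and without the pad field is a quick way to see the effect of per-processor caches on shared data.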