Why Parallel Architecture?
What Is Parallel Architecture, and Why?
Definition (due to Almasi and Gottlieb, 1989): A parallel computer is a
"collection of processing elements that communicate and cooperate to
solve large problems fast."
These processing elements need not be packaged as one large and
expensive parallel machine; they can also be a cluster of personal
computers or workstations, or a mix of both, communicating and
cooperating to tackle a specific computational problem or application.
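To make the definition concrete, here is a minimal C sketch (an
illustration added for these notes, not from Almasi and Gottlieb) in
which four POSIX threads play the role of processing elements: each
computes a partial sum over its slice of a shared array, communicates
its result through shared memory, and the main thread combines the pieces.

/* Minimal sketch: four "processing elements" (POSIX threads) cooperate
 * through shared memory to sum a large array faster than one could alone. */
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static double data[N];
static double partial[NTHREADS];    /* one result slot per processing element */

static void *worker(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS);
    long hi = (id + 1) * (N / NTHREADS);
    double sum = 0.0;
    for (long i = lo; i < hi; i++)
        sum += data[i];
    partial[id] = sum;              /* communicate the partial result */
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < N; i++)
        data[i] = 1.0;
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    double total = 0.0;
    for (long i = 0; i < NTHREADS; i++) {
        pthread_join(tid[i], NULL); /* cooperate: wait, then combine */
        total += partial[i];
    }
    printf("total = %f\n", total);  /* expect 1000000.0 */
    return 0;
}

The same division of labor scales from threads on one chip to machines
in a cluster; only the communication mechanism changes.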
A brief answer to the question of why parallelism:
- It provides an alternative to ever-faster clock rates as a route to
higher performance.
- It applies at all levels of system design.
- It is an interesting perspective from which to view computer
architecture.
- It is increasingly central to information processing.
- It is exploited at many levels, from instruction-level parallelism
within a single processor up to multiprocessors and clusters.
As mentioned above, our focus here is on the multiprocessor level of
parallelism.
The Role of Parallelism
- Application trends: Continuous need for computing cycles for two
kinds of computing:
- Scientific Computing: CFD, Weather Simulation, Ocean Current
Simulation, Simulating Galaxy Evolution, and other problems ...
- General-purpose Computing: Graphics, Databases, etc.
- Technology Trends: While the number of transistors per chip is
growing rapidly, clock rates are expected to rise only slowly.
- Architecture Trends: Instruction-level parallelism is limited, so
coarse-grained parallelism, as in multiprocessors, is now the most
viable approach (see the sketch after this list).
- Economics: The same commodity microprocessors that power PCs and
workstations make multiprocessors cost-effective to build.
- Current Trends:
- Current microprocessors have built-in support for multiprocessing.
- Servers, workstations, and PCs are networked to form multiprocessors.
- Future Trends: Microprocessors will continue to be combined to form
multiprocessors.
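The architecture-trends point can be made concrete with a small C
sketch (an illustration added for these notes, using OpenMP as one
possible mechanism): inside each iteration, a chain of dependent
operations leaves the processor little instruction-level parallelism to
extract, but the iterations are independent of one another, so
coarse-grained parallelism across processors still applies.

/* Sketch: ILP within one iteration is limited by the x = x*a + b
 * dependence chain, but the outer iterations are independent, so they
 * can be split across processors at coarse grain. */
#include <stdio.h>

#define N     100000
#define STEPS 100

static double result[N];

int main(void) {
    /* Outer iterations are independent: a multiprocessor can divide
     * them among its processing elements (here via OpenMP). */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        double x = (double)i;
        /* Each step depends on the previous one, so a single processor
         * cannot overlap these instructions no matter how wide it is. */
        for (int s = 0; s < STEPS; s++)
            x = x * 1.000001 + 0.5;
        result[i] = x;
    }
    printf("result[0] = %f\n", result[0]);
    return 0;
}

Compiled with OpenMP enabled (e.g., gcc -fopenmp), the independent
outer iterations run in parallel across processors; without it, the
pragma is ignored and the program runs serially with identical results.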