
Parallel Computing

Parallel computers have two different models for accessing data (a single shared memory, or memory distributed among the processors), and two different models for accessing instructions (SIMD, in which all processors execute the same instruction in lockstep, or MIMD, in which each processor executes its own instruction stream) [2, 5].

Over the last 10 years we have learned that parallel computing works: the majority of computationally intensive applications perform well on parallel computers by taking advantage of the simple idea of ``data parallelism'' (or domain decomposition), which obtains concurrency by applying the same algorithm to different sections of the data set concurrently [6]. Data parallel applications scale to larger numbers of processors as the amount of data grows.
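As an illustration of this idea, here is a minimal data-parallel sketch in C with MPI (not taken from the cited surveys; the problem, its size N, and the equal-block decomposition are assumptions of this example). Each process applies the same operation, a partial sum, to its own section of the index range, and the partial results are combined with a global reduction:

    /* Minimal data-parallel sketch (hypothetical example): each process
     * sums its own block of the domain, then the partial sums are
     * combined with a global reduction. Build with mpicc, run with mpirun. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000L  /* total problem size (assumed divisible enough) */

    int main(int argc, char **argv)
    {
        int rank, nproc;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);

        /* Domain decomposition: this process owns indices [lo, hi). */
        long lo = (long)rank * N / nproc;
        long hi = (long)(rank + 1) * N / nproc;

        /* The same algorithm, applied to a different section of the data. */
        double local = 0.0;
        for (long i = lo; i < hi; i++)
            local += (double)i;

        double total;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %.0f\n", total);

        MPI_Finalize();
        return 0;
    }

Because each process touches only its own block, doubling both the data and the number of processors leaves the per-process work unchanged; this is the sense in which data parallel applications scale.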

Another type of parallelism is ``functional parallelism'', where different processors (or even different computers) perform different functions, or different parts of the algorithm. Here the speed-ups obtained are usually more modest, and the method is often not scalable; it is nevertheless important, particularly in multidisciplinary applications.
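For contrast, a minimal sketch of functional parallelism in the same C/MPI setting (the two stage functions and their exchange pattern are hypothetical, introduced only for this example): two processes each run a different part of the algorithm and then exchange results. Since there are only as many functional units as there are distinct functions, adding processors beyond that number does not help, which is why this form of parallelism is usually not scalable.

    /* Minimal functional-parallelism sketch (hypothetical example):
     * different processes perform different parts of the algorithm,
     * here two independent stages whose results are exchanged. */
    #include <mpi.h>
    #include <stdio.h>

    static double physics_stage(void)   { return 1.0; }  /* placeholder work */
    static double chemistry_stage(void) { return 2.0; }  /* placeholder work */

    int main(int argc, char **argv)
    {
        int rank, nproc;
        double mine, theirs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);
        if (nproc != 2)                   /* exactly one process per function */
            MPI_Abort(MPI_COMM_WORLD, 1);

        /* Each process runs a different function of the overall algorithm. */
        mine = (rank == 0) ? physics_stage() : chemistry_stage();

        /* Exchange results between the two functional units. */
        MPI_Sendrecv(&mine, 1, MPI_DOUBLE, 1 - rank, 0,
                     &theirs, 1, MPI_DOUBLE, 1 - rank, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d: mine = %.1f, theirs = %.1f\n", rank, mine, theirs);
        MPI_Finalize();
        return 0;
    }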

Surveys of problems in computational science [11] have shown that the vast majority of applications can be run effectively on MIMD parallel computers, and a smaller fraction on SIMD machines (probably fewer for commercial, rather than academic, problems). Currently there are many different parallel architectures, but only one - the distributed memory MIMD multicomputer - is a general, high-performance architecture known to scale from one to very many processors.



Geoffrey Fox, Northeast Parallel Architectures Center at Syracuse University, gcf@npac.syr.edu