Basic HTML version of Foils prepared 13 February 2000

Foil 47: Pragmatic Computational Science, January 2000 (I)

From Methodology of Computational Science, CPS615 Computational Science -- Spring Semester 2000, by Geoffrey C. Fox


Here is a recipe for developing HPCC (parallel) applications as of January 2000:
Use MPI for data-parallel distributed-memory programs; the alternatives today are HPF, HPC++, or parallelizing compilers
  • Neither HPF nor HPC++ has a clear long-term future for implementations -- the ideas are sound
  • MPI will run on PC clusters as well as custom parallel machines -- parallelizing compilers will not work on distributed-memory machines
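The MPI recommendation amounts to an SPMD message-passing pattern: each rank owns a piece of the data, computes a local result, and the partial results are combined by explicit communication. A real code would use MPI_Send/MPI_Recv/MPI_Reduce from C or Fortran; as a hedged illustration only, the same pattern is sketched below with Python's standard multiprocessing module (the function names, the strided decomposition, and the rank count are invented for this sketch, not part of the original foil).

```python
# Sketch of the SPMD message-passing pattern that MPI formalizes:
# each "rank" owns a block of the data, computes a local result, and
# "rank 0" gathers and reduces the partials over explicit channels.
# Illustration only -- a production HPCC code would use MPI itself.
from multiprocessing import Process, Pipe

def worker(rank, data_block, conn):
    # Local computation on this rank's block of the global data
    local_sum = sum(data_block)
    conn.send((rank, local_sum))   # analogue of an MPI send to rank 0
    conn.close()

def parallel_sum(data, nranks=4):
    # Strided block decomposition of the global array across ranks
    blocks = [data[r::nranks] for r in range(nranks)]
    pipes, procs = [], []
    for r in range(nranks):
        parent_end, child_end = Pipe()
        p = Process(target=worker, args=(r, blocks[r], child_end))
        p.start()
        pipes.append(parent_end)
        procs.append(p)
    # "Rank 0" combines the partial sums (analogue of MPI_Reduce)
    total = sum(conn.recv()[1] for conn in pipes)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum(list(range(100))))  # -> 4950
```

Note that the data never lives in one shared address space: all combining happens through the explicit send/receive, which is exactly why this style also runs on PC clusters.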
Use OpenMP or MPI on shared-memory (including distributed-shared-memory) machines
  • If it delivers high enough performance, OpenMP is obviously the easiest
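OpenMP is easiest because in shared memory every thread can read and update the same array directly, with no messages: the programmer typically just marks a loop (e.g. `#pragma omp parallel for` in C) and the runtime divides the iterations. As a hedged sketch of that shared-memory idea only (the function and the interleaved iteration split are my own, not OpenMP's API), in Python:

```python
# Shared-memory loop parallelism in the OpenMP style: threads divide
# the iterations of one loop and update the single shared array in
# place -- no explicit communication, unlike the message-passing case.
from threading import Thread

def scale_in_place(data, alpha, nthreads=4):
    def worker(t):
        # Thread t handles iterations t, t+nthreads, t+2*nthreads, ...
        # Disjoint index sets, so no two threads touch the same element.
        for i in range(t, len(data), nthreads):
            data[i] *= alpha
    threads = [Thread(target=worker, args=(t,)) for t in range(nthreads)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

v = [1.0, 2.0, 3.0, 4.0]
scale_in_place(v, 2.0)
print(v)  # -> [2.0, 4.0, 6.0, 8.0]
```

The contrast with the distributed-memory sketch is the point: here the decomposition is of loop iterations over data everyone can see, which is why this style only works when the hardware (or a distributed-shared-memory layer) provides a shared address space.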
Pleasingly parallel problems can use either MPI or web/metacomputing technology
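A pleasingly parallel problem consists of tasks that need no communication with each other, so any task-farming mechanism works -- MPI, a batch queue, or the web/metacomputing machinery of the day. A hedged sketch of the task-farm shape (the task function and inputs are invented for illustration):

```python
# Task farming for a pleasingly parallel problem: each task depends
# only on its own input, so tasks can be handed to any free processor
# with no inter-task communication at all.
from multiprocessing import Pool

def independent_task(x):
    # Stand-in for a real independent work unit (e.g. one simulation
    # run or one parameter setting); it touches only its own input.
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(independent_task, range(8))
    print(results)  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the tasks are independent, the farming layer is interchangeable: the same structure maps onto MPI ranks, a cluster scheduler, or geographically distributed web resources.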



© Northeast Parallel Architectures Center, Syracuse University, npac@npac.syr.edu


Page produced by wwwfoil on Thu Mar 16 2000