Basic HTML version of Foils prepared September 30, 1998

Foil 21 Hybrid Parallel Cluster Computing Model

From A Coarse Grain Performance Estimator -- PetaSIM and a Hybrid Distributed Object Model for the IPG, NASA Workshop on Performance Engineered Information Systems, Sept 28-29, 1998, by Geoffrey C. Fox, Yuhong Wen, Wojtek Furmanski, Tom Haupt


Backend Parallel Computing Nodes run classic HPCC: SPMD programs communicating over MPI, with MPI becoming Globus when distributed resources are used.
Middle Control Tier: JWORB runs on all nodes.
Each node in the figure hosts both tiers: an SPMD Program with MPI in the backend, and JWORB with RTI in the middle tier.
The separation of control and data transfer is used to support RTI(IIOP) on the control layer and MPI on the fast transport layer simultaneously.
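
To illustrate this separation, the following minimal sketch assumes C with MPI for the fast transport layer. Bulk data moves between neighbouring SPMD processes over MPI, while control events pass through send_control_event(), a hypothetical placeholder defined here for illustration only; in a real deployment that call would issue an IIOP request to the JWORB/RTI layer running on the same node.

/*
 * Sketch of the two-layer node model: MPI carries the fast data transport,
 * while the control layer (JWORB/RTI over IIOP) is represented only by a
 * hypothetical placeholder that logs what would be handed to it.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for passing a control event to the local JWORB/RTI
 * layer; a real system would make an IIOP request here instead of printing. */
static void send_control_event(int rank, const char *event)
{
    printf("node %d: control-layer event -> %s\n", rank, event);
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Control layer: announce that this SPMD process has joined the run. */
    send_control_event(rank, "SPMD program started");

    /* Data layer: bulk exchange stays on MPI (fast transport layer).
     * Each process passes a block of doubles to its right-hand neighbour. */
    enum { N = 1024 };
    double *out = malloc(N * sizeof *out);
    double *in  = malloc(N * sizeof *in);
    for (int i = 0; i < N; i++)
        out[i] = rank + i * 1e-3;

    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    MPI_Sendrecv(out, N, MPI_DOUBLE, right, 0,
                 in,  N, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Control layer again: report completion, e.g. for monitoring or steering. */
    send_control_event(rank, "data exchange complete");

    free(out);
    free(in);
    MPI_Finalize();
    return 0;
}

With a standard MPI installation this builds and runs as, for example, mpicc hybrid_node.c -o hybrid_node && mpirun -np 4 ./hybrid_node (file name hypothetical).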



© Northeast Parallel Architectures Center, Syracuse University, npac@npac.syr.edu
