
High-Performance Commodity Communication

Consider first two views of a parallel computer. In both, various nodes and the host are depicted as separate entities. These represent logically distinct functions, but the physical implementation need not reflect the distinct services. In particular, two or more capabilities can be implemented on the same sequential or shared-memory multiprocessor system.

Figure 1.5 presents a simple multitier view that uses commodity protocols (HTTP, RMI, COM, or the pictured IIOP) to access the parallel computer as a single entity. This entity (object) delivers high performance by running classic HPCC technologies (such as HPF, PVM, or the pictured MPI) in the third tier. Many groups have successfully implemented this approach [27] to provide parallel computing systems with important commodity services based on Java and JavaScript client interfaces. Nevertheless, the approach addresses the parallel computer only as a single object; it is, in effect, the "host-node" model of parallel programming [31], and it does not exploit the distributed computing support that commodity technologies offer for parallel programming.
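This single-object view can be sketched as a facade that hides the nodes entirely. The following Java sketch is illustrative only; all names (ParallelMachine, submit, NODES) are hypothetical and stand in for whatever interface a commodity protocol such as RMI or IIOP would actually expose.

```java
// Hypothetical sketch: the whole parallel machine behind one facade,
// as in the host-node model of Figure 1.5. All names are illustrative.
import java.util.ArrayList;
import java.util.List;

public class ParallelMachine {
    private static final int NODES = 4;  // back-end nodes hidden from the client

    // The single entry point a commodity protocol (HTTP, RMI, IIOP) would
    // expose. Internally the work is fanned out across the nodes, standing
    // in for an MPI-style run in the third tier.
    public static List<String> submit(String job) {
        List<String> results = new ArrayList<>();
        for (int rank = 0; rank < NODES; rank++) {
            results.add(job + "@node" + rank);  // each node's partial result
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(submit("integrate"));
    }
}
```

The client sees only `submit`; the node count and internal messaging are invisible, which is precisely why the distributed-computing capabilities of the commodity layer go unused in this model.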

 
Figure 1.5: A parallel computer viewed as a single CORBA object in a classic host-node computing model. Logically, the host is in the middle tier and the nodes in the lower tier. The physical architecture could differ from the logical architecture. 

Figure 1.6 depicts the parallel computer as a distributed system with a fast network and integrated architecture. Each node of the parallel computer runs a CORBA ORB (or, perhaps more precisely, a stripped-down ORBlet), Web server, or equivalent commodity server. In this model, commodity protocols can operate both internally and externally to the parallel machine. The result is a powerful environment where one can uniformly address the full range of commodity and high-performance services. Other tools can now be applied to parallel as well as distributed computing.

 
Figure 1.6: Each node of a parallel computer instantiated as a CORBA object. The "host" is logically a separate CORBA object but could be instantiated on the same computer as one or more of the nodes. With a protocol bridge, one could address all objects through CORBA, with local parallel computing nodes invoking MPI for speed and remote accesses using CORBA where its functionality (access to many services) is valuable.  

One should, of course, be concerned that the flexibility of this second view comes at the cost of communication performance. Indeed, most commodity messaging protocols (e.g., RMI, IIOP, and HTTP) have unacceptable performance for most parallel computing applications. However, good performance can be obtained by using a suitable binding of MPI or another high-speed communication library to the commodity protocols.

In Figure 1.7, we illustrate such an approach to high performance, which uses a separation between messaging interface and implementation. The bridge shown in this figure allows a given invocation syntax to support several messaging services with different performance-functionality tradeoffs. In principle, each service can be accessed by any applicable protocol. For instance, a Web server or database can be accessed by HTTP or CORBA; a network server or distributed computing resource can support HTTP, CORBA, or MPI.
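The separation between messaging interface and implementation can be sketched as a small Java bridge. This is a minimal illustration, not an actual API: the names (MessageBridge, MessageService, HttpService, MpiService) are hypothetical, and the two services merely simulate a commodity transport and a fast MPI-style path.

```java
// Hypothetical sketch of the bridge in Figure 1.7: one invocation syntax,
// several messaging services behind it. All names are illustrative.
import java.util.Map;

public class MessageBridge {
    // Common interface: the caller sees only this, never the transport.
    interface MessageService {
        String send(String destination, String payload);
    }

    // Commodity-protocol implementation (stands in for HTTP or IIOP).
    static class HttpService implements MessageService {
        public String send(String dest, String payload) {
            return "http://" + dest + " <- " + payload;  // simulated transfer
        }
    }

    // High-performance implementation (stands in for an MPI binding).
    static class MpiService implements MessageService {
        public String send(String dest, String payload) {
            return "mpi:" + dest + " <- " + payload;  // simulated fast path
        }
    }

    // The bridge selects an implementation for the requested
    // performance-functionality tradeoff; the call syntax never changes.
    static MessageService select(String preference) {
        Map<String, MessageService> services = Map.of(
            "commodity", new HttpService(),
            "fast", new MpiService());
        return services.getOrDefault(preference, new HttpService());
    }

    public static void main(String[] args) {
        System.out.println(select("fast").send("node3", "halo"));
    }
}
```

Because every service implements the same interface, a Web server reachable by HTTP and a compute node reachable by MPI are invoked identically; only the binding chosen by the bridge differs.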

Note that MPI and CORBA can be linked in one of two ways: (1) the MPI function call can call a CORBA stub, or (2) a CORBA invocation can be trapped and replaced by an optimized MPI implementation. Current investigations of a Java MPI linkage have raised questions about extending MPI to handle more general object data types. One could, for instance, extend the MPI communicator field to indicate a preferred protocol implementation, as is done in Nexus [33]. Other research issues focus on efficient object serialization needed for a high-performance implementation of the concept in Figure 1.7.
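The idea of extending the communicator with a preferred-protocol field can be sketched as follows. This is a hedged illustration of the concept (cf. Nexus [33]), not real MPI or CORBA code; the classes and the returned strings are hypothetical placeholders for the two dispatch paths.

```java
// Hypothetical sketch: an MPI-style communicator extended with a
// preferred-protocol field, so one send() entry point can route either to
// an optimized MPI path or through a CORBA stub. Illustrative only.
public class ProtocolCommunicator {
    enum Protocol { MPI, CORBA }

    static class Communicator {
        final int rank;
        final Protocol preferred;  // extra field selecting the implementation
        Communicator(int rank, Protocol preferred) {
            this.rank = rank;
            this.preferred = preferred;
        }
    }

    // One invocation syntax; the communicator's field picks the transport.
    static String send(Communicator comm, int dest, String payload) {
        switch (comm.preferred) {
            case MPI:   return "MPI_Send -> " + dest + ": " + payload;
            case CORBA: return "IIOP invoke -> " + dest + ": " + payload;
            default:    throw new IllegalStateException();
        }
    }

    public static void main(String[] args) {
        Communicator local  = new Communicator(0, Protocol.MPI);
        Communicator remote = new Communicator(0, Protocol.CORBA);
        System.out.println(send(local, 1, "halo"));
        System.out.println(send(remote, 1, "halo"));
    }
}
```

The same dispatch point is where the two linkage options in the text would live: case (1) lets the MPI call fall through to a CORBA stub, while case (2) traps a CORBA invocation and substitutes the optimized MPI implementation.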

 
Figure 1.7: A message optimization bridge allows MPI (or equivalently Globus or PVM) and commodity technologies to coexist with a seamless user interface.  



Theresa Canzian
Fri Mar 13 01:17:33 EST 1998