previous next up
Previous: C-3.4. Computation library Next: C-3.6. A three years Up: C-3. Outline of working

C-3.5. Communication manager

There are two kinds of parallelism in this interpreter-based parallel runtime system support, task parallelism and data parallelism, each suited to a specific class of parallel processing applications. When the scheduler module determines that task parallelism exists in the interpretive frames, the communication manager module provides the support for global computation. This is the basis of task migration in the scheduler module: every task migrated to another computer, and every result gathered back, passes through this communication manager.
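The migrate-and-gather path described above can be sketched as a send-task/collect-result loop. In this sketch a thread and two queues stand in for a remote machine and its network links; `migrate_tasks` and the task tuple format are illustrative assumptions, not the actual runtime's interface:

```python
import threading
import queue

def _worker(task_q, result_q):
    # "Remote" side: receive migrated tasks, run them, send results back.
    for idx, func, args in iter(task_q.get, None):
        result_q.put((idx, func(*args)))

def migrate_tasks(tasks):
    # Illustrative communication manager: the queues play the role of
    # the network links that carry tasks out and results back.
    task_q, result_q = queue.Queue(), queue.Queue()
    w = threading.Thread(target=_worker, args=(task_q, result_q))
    w.start()
    for idx, (func, args) in enumerate(tasks):
        task_q.put((idx, func, args))       # migrate each task out
    results = [result_q.get() for _ in tasks]
    task_q.put(None)                        # tell the worker to stop
    w.join()
    results.sort()                          # restore task order
    return [value for _, value in results]
```

In a real deployment the queues would be replaced by sockets or MPI point-to-point calls, but the control flow (ship task, await result, reassemble in order) stays the same.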

For task-parallel applications, the communication manager module provides a mechanism for migrating a task to another machine over the network and then gathering the results back. PVM and MPI provide the basic functions for scientific computing through a message-passing mechanism. In our communication manager we address the problem of supporting global Internet scientific computation. Besides the standard MPI message-passing functions, we should provide the following aspects of communication:

The situation differs slightly for data parallelism in applications. To support data-parallel computing with portable and efficient high-level algorithms, we will rely on a rich set of communication primitives that can be implemented efficiently on a variety of parallel machines and that cover the majority of the data communication needed in high-performance codes. By using communication primitives, algorithm designers and implementors, typically experts in fields other than computer science (e.g., physics, molecular biology, earth science, oceanography, meteorology, biochemistry, genetics), need not learn or understand the details of the underlying (native) communication protocols, such as message passing or direct memory access (DMA) transfers, and their artifacts such as cache management; instead they can focus on formulating high-performance codes in their own domain of expertise. In addition, as machine architectures evolve and obsolete machines are replaced by new, state-of-the-art computing platforms, high-level coding techniques will greatly benefit scientists, since programs will not need to be rewritten, nor will major tuning efforts be necessary on the new platforms.
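As a toy illustration of such a primitive hiding the native protocol, a high-level reduction can be written once against an abstract point-to-point transport, so application code never sees whether message passing or DMA sits underneath. The class and function names here are invented for this sketch, not the proposed library's API; a single-process simulation drives the per-rank steps round by round:

```python
class LocalTransport:
    # Trivial in-process stand-in for a real network transport.
    # A real implementation would wrap message passing or DMA.
    def __init__(self):
        self.mailbox = {}
    def send(self, dest, value):
        self.mailbox.setdefault(dest, []).append(value)
    def recv(self, rank):
        return self.mailbox[rank].pop(0)

def tree_reduce(values, op, transport):
    # Binomial-tree reduction: in each round, the upper half of every
    # pair sends its partial result down and drops out; rank 0 ends
    # up holding the reduction of all values.
    size = len(values)
    partial = dict(enumerate(values))
    step = 1
    while step < size:
        for rank in range(step, size, 2 * step):      # senders this round
            transport.send(rank - step, partial.pop(rank))
        for rank in range(0, size, 2 * step):         # receivers this round
            if rank + step < size:
                partial[rank] = op(partial[rank], transport.recv(rank))
        step *= 2
    return partial[0]
```

A scientist calls `tree_reduce` (or, in a real library, something like MPI's `MPI_Reduce`) and never touches the mailbox mechanics; swapping `LocalTransport` for a network-backed one changes no application code.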

The precompiled parallel runtime support library functions will be based mostly on this kind of data-parallel communication primitive. The main communication issues will be:

Besides the standard features of communication system support, we will also address the problem of communication support for an interpreter-based system. The buffer management and some other implementation techniques may differ, since the interpreter system has somewhat of a "real-time" requirement.
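One concrete way that "real-time" requirement could show up in buffer management is that the interpreter must never block indefinitely on a full or empty communication buffer. The bounded-queue-with-timeout design below is our own illustrative assumption, not the proposal's specification:

```python
import queue

class BoundedBuffer:
    # Sketch: a bounded message buffer with timeouts, so the
    # interpreter can resume executing frames instead of stalling
    # on communication. Class name and policy are illustrative.
    def __init__(self, capacity, timeout):
        self._q = queue.Queue(maxsize=capacity)
        self._timeout = timeout

    def put(self, msg):
        # Returns False instead of stalling when the buffer
        # stays full past the deadline.
        try:
            self._q.put(msg, timeout=self._timeout)
            return True
        except queue.Full:
            return False

    def get(self):
        # Returns None when no message arrives in time.
        try:
            return self._q.get(timeout=self._timeout)
        except queue.Empty:
            return None
```

On a timeout the interpreter can reschedule other frames and retry the transfer later, which is exactly the behavior a batch-oriented communication library need not provide.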



Xiaoming Li
Tue Feb 11 08:54:45 EST 1997