Parallel Compiler Runtime Consortium

Partners
NPAC
Maryland
Indiana
Rochester
Texas, Austin
CSC
Rice
Florida

Recent Accomplishments

Syracuse University have delivered the NPAC kernel runtime, a C++ class library implemented on top of the standard Message Passing Interface (MPI). The library includes classes implementing the underlying Distributed Array Descriptor (DAD). This descriptor defines how a logical multi-dimensional array is divided over the local memories of cooperating processors. The DAD has an extensible format that supports a superset of the High Performance Fortran (HPF) options for distributing arrays; in particular it supports all HPF distribution formats and alignment options, and all Fortran 90 (F90) regular array sections.

The library provides address translation functions for these general arrays, simplifying conversion between global and local subscripts and enumeration of locally held array elements. It also provides an extensive set of collective data movement routines and collective array arithmetic routines, all of which operate uniformly on arbitrary arrays describable by DADs. Supported communication patterns include HPF/F90 intrinsics such as CSHIFT and TRANSPOSE; regular-section copy operations, which copy elements between shape-conforming array sections regardless of source and destination mapping; a function that updates the ghost areas of a distributed array; and various gather and scatter operations allowing irregular patterns of access. The library also provides essentially all F90 arithmetic transformational functions on distributed arrays, plus various additional HPF library functions. The kernel library has interface code making it callable from Fortran 90, Fortran 77, C++ and Java.
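The address-translation problem the library solves can be illustrated with a small sketch. The helper class below is hypothetical (it is not the NPAC DAD interface); it computes the owning process and local subscript of a global index under the two basic HPF distribution formats, BLOCK and CYCLIC:

```java
// Illustrative sketch of HPF-style address translation. Hypothetical
// helper code, not the actual NPAC kernel runtime API.
public class AddressTranslation {
    // BLOCK distribution: n elements over p processes in contiguous
    // blocks. Returns {owner, localIndex} for a global index.
    static int[] blockToLocal(int global, int n, int p) {
        int block = (n + p - 1) / p;            // block size, rounded up
        return new int[] { global / block, global % block };
    }

    // CYCLIC distribution: element i lives on process i mod p.
    static int[] cyclicToLocal(int global, int p) {
        return new int[] { global % p, global / p };
    }

    public static void main(String[] args) {
        // 10 elements over 4 processes, BLOCK: blocks of size 3, so
        // global index 7 is element 1 on process 2.
        int[] b = blockToLocal(7, 10, 4);
        System.out.println("BLOCK: owner=" + b[0] + " local=" + b[1]);
        // CYCLIC over 4 processes: global index 7 is element 1 on process 3.
        int[] c = cyclicToLocal(7, 4);
        System.out.println("CYCLIC: owner=" + c[0] + " local=" + c[1]);
    }
}
```

The real DAD generalizes this to multi-dimensional arrays, alignment, and regular sections, but the global-to-local mapping above is the core of the bookkeeping.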

In collaboration with Peking University and the Harbin Institute of Technology, Syracuse developed a research HPF compiler. The compiler generates code that uses the NPAC library to manage array access and communication. The front end of this compiler has been placed in the public domain; development of the complete translation system continues.

Directly calling the common runtime from SPMD application code (as opposed to having it called from compiler-generated code) provides a powerful and flexible parallel programming paradigm, but a superficially clumsy and inefficient one, because standard programming languages have no specific syntax for manipulating distributed arrays. Motivated by this observation, Syracuse is developing a translator for a dialect of Java called HPJava: a language specifically designed for SPMD programming, with distributed arrays added as language primitives. By design, HPJava can be preprocessed straightforwardly to standard Java with calls to the kernel runtime, distinguishing it from HPF, whose translation generally requires complex compiler analysis. Syracuse have also developed a direct interface to MPI from Java.
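The clumsiness of hand-written SPMD code can be seen in a small sketch (all names here are hypothetical, and no real runtime is called): each process must compute its own block bounds and translate between global and local subscripts explicitly, bookkeeping that distributed-array syntax in a language like HPJava is designed to hide:

```java
// Hand-written SPMD style: one process updates its local segment of a
// block-distributed one-dimensional array. Illustrative only.
public class SpmdStyle {
    static double[] localUpdate(int rank, int nProcs, int n) {
        int block = (n + nProcs - 1) / nProcs;  // block size, rounded up
        int lo = rank * block;                  // first global index owned
        int hi = Math.min(lo + block, n);       // one past last index owned
        double[] local = new double[hi - lo];
        for (int g = lo; g < hi; g++) {
            // Global subscript g, local subscript g - lo: the programmer
            // must keep these straight by hand.
            local[g - lo] = 2.0 * g;
        }
        return local;
    }

    public static void main(String[] args) {
        // Process 1 of 4, array of 10 elements: owns globals 3..5.
        double[] seg = localUpdate(1, 4, 10);
        System.out.println(java.util.Arrays.toString(seg)); // [6.0, 8.0, 10.0]
    }
}
```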

An important contribution from Maryland focuses on techniques for transferring data between distributed data structures owned by different data-parallel runtime libraries. These techniques have been implemented in the Maryland Meta-Chaos library, which is currently capable of transferring data between (distributed) arrays created using High Performance Fortran, the Maryland CHAOS and Multiblock PARTI libraries, and the Indiana pC++ runtime library, Tulip. For example, the library allows an irregularly distributed array created via the CHAOS library to be copied into an HPF distributed array (with an arbitrary HPF distribution) that has been created using the HPF compiler and runtime library. The only requirement Meta-Chaos places on a library, so that it can access the library's data, is that the library provide several inquiry functions.
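The inquiry-function idea can be sketched in a single-process model (hypothetical interfaces, not the real Meta-Chaos API): each runtime exposes only a mapping from a global index to an (owner, offset) pair, and a copy schedule between two differently distributed arrays is derived from the two mappings alone:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Schematic model of inquiry-based interoperability: a data-movement
// schedule is built purely from each library's placement inquiries.
public class MetaChaosSketch {
    interface Layout {
        int owner(int global);   // which process holds this element
        int offset(int global);  // where it sits in that process's memory
    }

    // For each element, record {srcOwner, srcOffset, dstOwner, dstOffset}.
    static List<int[]> schedule(Layout src, Layout dst, int n) {
        List<int[]> moves = new ArrayList<>();
        for (int g = 0; g < n; g++) {
            moves.add(new int[] { src.owner(g), src.offset(g),
                                  dst.owner(g), dst.offset(g) });
        }
        return moves;
    }

    public static void main(String[] args) {
        final int n = 6, p = 2;
        Layout block = new Layout() {        // BLOCK over 2 processes
            public int owner(int g)  { return g / (n / p); }
            public int offset(int g) { return g % (n / p); }
        };
        Layout cyclic = new Layout() {       // CYCLIC over 2 processes
            public int owner(int g)  { return g % p; }
            public int offset(int g) { return g / p; }
        };
        for (int[] m : schedule(block, cyclic, n))
            System.out.println(Arrays.toString(m));
    }
}
```

A real implementation would aggregate these per-element moves into bulk messages, but the point is that neither library needs to understand the other's descriptor format.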

Maryland have also been investigating mobility for Java programs in the context of Sumatra, an extension of the Java programming environment that provides a flexible substrate for adaptive mobile programs.

Indiana University initially emphasized support for the HPC++ runtime in heterogeneous environments. The standard runtime system for HPC++, called Tulip, is designed for homogeneous parallel processing systems with support for shared memory; the goal for the PCRC project was to make HPC++ portable and interoperable with other systems. Recently Indiana have developed Java compilation tools to support parallelism in Java and interoperability between Java and HPC++. In particular they have developed a Java preprocessor (JAVAR) that allows Java programs written with loops and certain types of recursion to be transformed into multi-threaded code that executes in parallel on shared-memory multiprocessors. They have also developed a parallel CORBA ORB called PARDIS, which allows SPMD object implementations to communicate with other similar applications using parallel communication channels.
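The kind of transformation a tool like JAVAR performs can be sketched by hand: a sequential loop is rewritten so that its iteration space is divided among threads, with a join acting as a barrier before the results are used. The chunking scheme below (a strided split) is an illustrative choice, not necessarily what JAVAR emits:

```java
// Hand-written version of a loop transformation in the spirit of JAVAR:
// the iterations of an elementwise loop run on several threads.
public class ParallelLoop {
    static double[] square(double[] a, int nThreads) {
        double[] out = new double[a.length];
        Thread[] workers = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int tid = t;
            workers[t] = new Thread(() -> {
                // Each thread handles a strided slice of the iterations,
                // so no two threads write the same element.
                for (int i = tid; i < a.length; i += nThreads)
                    out[i] = a[i] * a[i];
            });
            workers[t].start();
        }
        for (Thread w : workers) {           // barrier: wait for all workers
            try { w.join(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return out;
    }

    public static void main(String[] args) {
        double[] r = square(new double[] {1, 2, 3, 4}, 2);
        System.out.println(java.util.Arrays.toString(r)); // [1.0, 4.0, 9.0, 16.0]
    }
}
```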

The University of Texas have concentrated on "productizing" the Hierarchical Dynamic Distributed Array/Directed Acyclic Grid Hierarchy (HDDA/DAGH) infrastructure for parallel implementations of adaptive mesh computations, and on preparing it for integration with the PCRC runtime systems. The HDDA implements a hierarchical dynamic distributed array: "hierarchical" means that each element of an array instance can itself be an array; "dynamic" means that the number of levels in the hierarchy of arrays can be varied during execution of the program; "distributed" means that the array may be partitioned across multiple address spaces transparently to the C or Fortran programs that operate on it. DAGH is a layer of programming abstractions built upon the HDDA implementing a hierarchy of grids. It supports implementation of a considerable span of adaptive mesh refinement (AMR) and multigrid solution methods, including vertex-centered, cell-centered and edge-centered grids. Specific accomplishments include a May 1997 release of HDDA/DAGH with additional capabilities, including face-centered and vertex-centered grids.
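The "hierarchical" and "dynamic" properties can be modeled in a few lines (a toy sketch, ignoring the distribution aspect entirely; none of these names come from HDDA): each cell of a grid may hold a finer grid, and levels are added at run time as refinement proceeds:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a hierarchical dynamic array: cells may be refined into
// child grids at run time. Purely illustrative; the real HDDA also
// partitions the structure across address spaces.
public class GridHierarchy {
    double[] cells;
    Map<Integer, GridHierarchy> refined = new HashMap<>();

    GridHierarchy(int n) { cells = new double[n]; }

    // Dynamically refine cell i by a given factor, adding a level.
    GridHierarchy refine(int i, int factor) {
        GridHierarchy child = new GridHierarchy(factor);
        refined.put(i, child);
        return child;
    }

    // Number of levels currently present below and including this grid.
    int depth() {
        int d = 1;
        for (GridHierarchy g : refined.values())
            d = Math.max(d, 1 + g.depth());
        return d;
    }

    public static void main(String[] args) {
        GridHierarchy root = new GridHierarchy(8);
        root.refine(3, 4).refine(1, 4);   // two extra levels under cell 3
        System.out.println("levels = " + root.depth()); // levels = 3
    }
}
```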

The University of Rochester made progress towards their goal of building a parallelizing compiler and runtime system for distributed shared-memory machines. They studied the design of the new compiler runtime systems in the context of data transformations and optimizations for distributed shared-memory machines. Their work covers several areas: locality optimizations, scheduling for networks of workstations, and parallel and distributed data mining. They are also building a Java compiler.

The University of Florida was closely involved with the NPAC work, in particular in the HPF application suite and in development of advanced algorithms for HPF library functions. Recently they developed an MPI version of their algorithms for the important combining-scatter family of operations. Syracuse provided interface code making these routines callable using the generic DAD array descriptors employed by the NPAC kernel library.
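A combining scatter differs from a plain scatter in that values directed at the same target index are combined by an operation rather than overwriting one another. The sketch below (hypothetical signature, not the Florida or NPAC interface) uses addition as the combining operation:

```java
// Sketch of a combining scatter: values[k] is scattered to
// target[index[k]], and duplicate target indices are combined by
// summation instead of last-writer-wins.
public class CombiningScatter {
    static double[] scatterAdd(double[] target, int[] index, double[] values) {
        double[] out = target.clone();
        for (int k = 0; k < index.length; k++)
            out[index[k]] += values[k];    // combine duplicates by addition
        return out;
    }

    public static void main(String[] args) {
        // Indices 2 appears twice, so element 2 receives 5 + 7 = 12.
        double[] r = scatterAdd(new double[4],
                                new int[] {0, 2, 2, 3},
                                new double[] {1, 5, 7, 2});
        System.out.println(java.util.Arrays.toString(r)); // [1.0, 0.0, 12.0, 2.0]
    }
}
```

In the distributed setting the interesting work is routing the (index, value) pairs to the owning processes and combining them there, which is where the MPI algorithms come in.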

Cooperating Systems contributed primarily in a support role, providing software test, evaluation, and documentation support to other consortium members (particularly NPAC). Additionally, CSC has played a leading role in two primary PCRC contract tasks: derivation of a Common Code and Data Descriptor for Arrays in HPC Languages, and compilation of the Common Compiler and Data Movement Interface Specification.


Bryan Carpenter (dbc@csit.fsu.edu). Last updated May 2000.