AVS is an advanced software development toolkit, created by Stardent Computer, Inc. and now marketed and further developed by AVS, Inc. The system was initially focused on advanced scientific visualization tasks, often involving large datasets and long sequences of transforms (filtering, scaling, rendering, etc.) required to convert raw data into graphics objects, images, or animations. AVS employs a modular, object-based dataflow model to handle such tasks. A set of base communication objects is identified, such as numbers, fields (n-dimensional arrays), or colormaps, and a collection of computational modules is provided that transform sets of input objects into sets of output objects. AVS itself offers an extensive library of ready-to-use modules, and it also specifies the protocol for developing user modules and including them in the system. Each module is represented as an individual UNIX process which reads objects from its input ports and writes objects to its output ports; a simplified sketch of this process model is given below. Specific applications are assembled from reusable modules with the aid of visual network editing tools: modules are visualized as nodes, ports as terminals, and communication objects as links of an interactively constructed dataflow graph. Each AVS application consists of the AVS kernel and a set of modules specified by the dataflow network. The AVS kernel interprets the network specification and instantiates the module processes as well as the lower level inter-module communication links, based on the RPC/XDR UNIX networking protocol.

Initially, AVS techniques were applied mostly to off-line data analysis, in which raw datasets were first generated on supercomputers and then visualized using the dataflow network editing tools. Starting from AVS 4.0, the system supports remote module execution, so that the various nodes of the computational graph can be installed on different machines on the network. In consequence, AVS can now naturally be employed to generate large datasets efficiently as well, and thus to integrate the computational and visualization components of real-time simulations running in the high performance distributed computing mode. However, in the course of this analysis we also identified several deficiencies of the system when projected onto the high performance distributed computing domain; these are discussed below.

We will develop an adaptive execution environment to load balance and manage the resources required for executing applications developed with AVS-like tools. Our approach to implementing such an environment is based on two main components: the Resource Manager (RM) and the Execution Manager (EM). The RM is responsible for allocating the resources needed to execute a given distributed application; it manages all the information required to run the application on a heterogeneous collection of computers, including the location of its executable codes. The EM is responsible for loading the application modules onto the specified computers such that application performance is optimized and the system load is well balanced. We will integrate this resource manager with adaptive partitioners aimed at the decomposition of loosely synchronous problems onto homogeneous parallel machines. These partitioners offer a variety of methods, including physical optimization techniques such as simulated annealing, as well as problem-specific heuristics. The system integrates these techniques with expert systems to tackle metaproblems decomposed onto metacomputers.
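The sketch referenced above is a minimal C illustration of the modular dataflow idea, not the actual AVS module API: a "scale" filter implemented as a stand-alone UNIX process that reads a stream of raw floats on its input port (stdin), multiplies each value by a factor, and writes the results to its output port (stdout). The binary protocol and the choice of stdin/stdout as ports are simplifications for illustration only.

    /*
     * Minimal sketch of the module model described above, NOT the actual
     * AVS module API: a "scale" filter as a stand-alone UNIX process.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        float value;
        float factor = (argc > 1) ? (float)atof(argv[1]) : 1.0f;

        /* Consume objects from the input port until the upstream
           module closes the connection. */
        while (fread(&value, sizeof value, 1, stdin) == 1) {
            value *= factor;                          /* the module's transform  */
            fwrite(&value, sizeof value, 1, stdout);  /* emit on the output port */
        }
        return 0;
    }

In this reduced picture the role of the AVS kernel is played by the shell, e.g. generate | scale 2.0 | render, with UNIX pipes standing in for the RPC/XDR links that the kernel would normally instantiate between module processes (generate and render being hypothetical neighbors in such a pipeline).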
The computing resources of a high performance distributed computing environment range from high performance workstations to parallel machines and supercomputers, interconnected by high speed networks (e.g., the NYNET distributed computing environment). In such an environment, most of the enormous computing capacity remains untapped because of the inability to share computing resources efficiently. A higher degree of performance can be achieved if the computers share the network load in a manner that better utilizes the available resources. The focus of this project is to develop a metacomputer over NYNET in which the computations of a metaproblem, a collection of complex, compute intensive subproblems each requiring a different parallel programming paradigm, can be decomposed and run on the NYNET high performance computers at NPAC. SIMD and MIMD supercomputers from different vendors, with different parallel hardware and software capabilities, interconnected by high speed networks of different capacities (e.g., ATM, HIPPI, FDDI, and Ethernet), will become the individual heterogeneous processing nodes of the metacomputer. Each node of this metacomputer performs one or a group of computing tasks of a large application which best suits its capacity, in terms of both the parallel software programming environment and the hardware architecture. For example, in one large application we may use the Intel iPSC/860 and IBM SP2 to perform the most computationally intensive components, the two DECmpp-12000s for parallel image processing such as 3D rendering and compression, the CM5 for well-vectorized components and parallel I/O, and the nCUBE with its parallel Oracle RDBMS for database processing and querying.

AVS has an attractive user interface but at least two serious problems. First, it has a centralized control which serializes inter-module communication. Thus, if we use AVS to control two or more partitions on the SP2, the current software will use the common Ethernet, not the high performance switch, to communicate between modules. Similar remarks apply to any parallel machine or network with multiple paths between modules, where non-trivial optimization of inter-module traffic is needed. The SP2 is a particularly interesting target, as it is designed to support both MPP and multiple-workstation images (modes); however, IBM has no clear plan to implement this vision. A second problem with the current AVS is the difficulty of implementing dynamic linkages between modules.

We will retain the strongest features of AVS, including the ability to use existing AVS modules, but augment the system to correct both of these deficiencies. To circumvent the first problem, we will integrate AVS with message passing packages like PVM, P4, or MPL, so that modules can exchange data directly over the fastest available path rather than being funneled through the centralized kernel; a sketch of this approach follows below. We understand how to support Fortran-M (ANL/Caltech) as part of this environment, and we will investigate integrating other systems that support additional programming paradigms and forms of parallelism. In particular, we will look carefully at the related C++ system supporting a more dynamic (than Fortran-M) tasking environment.
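To indicate the flavor of the integration mentioned above, the following sketch shows one module handing a field object directly to a downstream module through PVM 3 calls, so that the transfer can use whatever route PVM selects (e.g., the SP2 switch) rather than the path through the centralized kernel. This is an illustration under stated assumptions, not our final design: the module name "consumer", the message tag MSG_FIELD, and the field size N are hypothetical, and a real integration must preserve the AVS port semantics on top of such messaging.

    /*
     * Sketch: direct module-to-module transfer of one field object via
     * PVM 3, bypassing the centralized kernel on the data path.  The
     * "consumer" executable, the tag MSG_FIELD, and the field size N
     * are hypothetical placeholders.
     */
    #include <stdio.h>
    #include "pvm3.h"

    #define MSG_FIELD 42     /* hypothetical tag for one field object */
    #define N 1024           /* hypothetical field size               */

    int main(void)
    {
        float field[N];
        int i, consumer_tid;

        for (i = 0; i < N; i++)   /* stand-in for real computed data */
            field[i] = (float)i;

        pvm_mytid();              /* enroll this module in the virtual machine */

        /* Spawn the downstream module; with PvmTaskDefault, PVM chooses
           the host, a decision that could later be delegated to the
           Resource Manager described earlier. */
        if (pvm_spawn("consumer", (char **)0, PvmTaskDefault, "",
                      1, &consumer_tid) != 1) {
            fprintf(stderr, "failed to spawn consumer module\n");
            pvm_exit();
            return 1;
        }

        /* Pack and send the field straight to the consumer; PvmDataDefault
           uses XDR encoding, consistent with the RPC/XDR links AVS itself
           builds between modules. */
        pvm_initsend(PvmDataDefault);
        pvm_pkfloat(field, N, 1);
        pvm_send(consumer_tid, MSG_FIELD);

        pvm_exit();
        return 0;
    }

The matching consumer would call pvm_recv(-1, MSG_FIELD) followed by pvm_upkfloat to obtain the field on its input side; the point of the sketch is only that the bulk data never passes through the kernel process or the shared Ethernet it would otherwise select.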