CONTINUING VISION

As we move into Year 3 of the CEWES MSRC PET effort, the continuing vision in the eight technical support areas is as follows:

-----------------------------------------------------------------------------

CFD: Computational Fluid Dynamics CTA

-----------------------------------------------------------------------------

CSM: Computational Structural Mechanics CTA

------------------------------------------------------------------------------

CWO: Climate/Weather/Ocean Modeling CTA

Continuing effort in support of CWO will focus on the following:

* Continue to study the physics of wave-current interactions.

* Examine the dynamical interactions among waves, currents, and sediment transport.

* Couple WAM-p and CH3D-p, then test and apply the coupled model to Lake Michigan.

* Parallelize the SED module.

* Couple WAM, CH3D, and SED in parallel form.

* Implement a mesoscale atmospheric modeling system (MM5) at the CEWES MSRC.

------------------------------------------------------------------------------

EQM: Environmental Quality Modeling CTA

Continuing effort in support of EQM will focus on the following:

1. Coupling of Hydrodynamics and Water Quality Models

One of the major concerns of EQM CEWES MSRC users is the coupling of simulators. For example, at CEWES there are at least three different hydrodynamics codes which could in principle be coupled to the water quality model CE-QUAL-ICM. A long-term goal is to take the three-dimensional flow field from any hydrodynamic model and project it onto an arbitrary water quality model grid. Realizing such a goal is a major effort.

Some of the difficulties encountered in such an arbitrary coupling are that the grids used for hydrodynamics and water quality could be very different; in particular, one may be triangle-based and the other quadrilateral-based. Furthermore, the velocities interpolated onto the water quality grid may not be "conservative," which would lead to severe mass balance errors in the water quality model.

As a start toward realizing the goal above, we plan to investigate the coupling of ADCIRC and CE-QUAL-ICM. The difficulties mentioned above are present here: ADCIRC is based on triangles, and ICM is based on quadrilaterals. Moreover, the velocities produced by ADCIRC are not conservative; i.e., they do not satisfy the primitive continuity equations element-by-element. Thus, the coupling between the two codes will involve converting ADCIRC geometry files into ICM input files, converting ADCIRC velocity output into ICM input format, and applying a projection method to correct the ADCIRC velocities so that they are mass-conservative. Such projection methods have already been developed at UT, but have not been applied specifically to this problem.
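The role of the projection step can be illustrated with a toy calculation. The sketch below (plain Python; the 1-D row of cells, variable names, and perturbation are invented for illustration and are not the UT projection method itself) perturbs interior face velocities, mimicking non-conservative interpolation, and then corrects them so that discrete continuity holds exactly in every cell:

```python
# Illustrative only: a 1-D "projection" that restores element-by-element
# mass conservation. Cells c = 0..N-1, face velocities u[0..N]; the
# discrete continuity requirement is u[c+1] - u[c] = 0 in every cell.

N = 6                        # number of cells (hypothetical)
u = [1.0] * (N + 1)          # a conservative field: constant velocity
for i in range(1, N):        # perturb interior faces only, as a stand-in
    u[i] += 0.1 * (-1) ** i  # for non-conservative grid interpolation

div = [u[c + 1] - u[c] for c in range(N)]   # per-cell mass-balance error
print("max |div| before:", max(abs(d) for d in div))

# Projection: find face corrections g with g[0] = g[N] = 0 so that the
# corrected field is divergence-free.  In 1-D the correction is simply
# the running sum of the cell divergences.
g = [0.0]
for c in range(N):
    g.append(g[-1] + div[c])
u_corr = [u[i] - g[i] for i in range(N + 1)]

div_corr = [u_corr[c + 1] - u_corr[c] for c in range(N)]
print("max |div| after: ", max(abs(d) for d in div_corr))
```

In two or three dimensions the correction is no longer a running sum; it requires solving an elliptic system for a correction potential, which is where the UT projection machinery comes in.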

During the coming year we plan to develop and test the coupling of these two codes in a two-dimensional setting, in both serial and multi-processor modes. CEWES personnel will be involved in the testing and validation processes. If the two-dimensional coupling is successful, we will then proceed to full three-dimensional models (pending the testing of three-dimensional capability in ADCIRC).

In order to make ICM functional on more arbitrary grids, the algorithms within ICM need to be extended to triangular elements (prisms in 3D). Currently, the higher-order transport scheme employed in ICM works on rectangular grids, but cannot be directly extended to triangles. Therefore, another long-term goal is to examine the possibility of a higher-order scheme for ICM that is appropriate for triangular or other arbitrarily shaped elements.

2. Web-based Launching of Parallel Groundwater Simulations

The focus of this effort is to continue the development of web-based tools which will serve as prototypes for launching parallel simulations from remote environments. The parallel simulations of interest arise in subsurface flow and transport, but the tools to be developed will be of general use. ParSSim (Parallel Aquifer and Reservoir Simulator), a parallel, three-dimensional flow and transport simulator for modeling contamination and remediation of soils and aquifers, will be used in this demonstration. The code was developed at the University of Texas and contains many of the features of current state-of-the-art groundwater codes. It is fully parallelized using domain decomposition and MPI and is operational on the IBM SP2 and Cray T3E platforms. A client Java applet with a GUI (graphical user interface) will be developed that will allow remote users to access the ParSSim code and data domains on CEWES MSRC servers. The results of the computation are then saved on the CEWES MSRC local disks and also selectively sent back to the requesting Java applet. The Java applet can be instantiated from any web browser. We will first develop such tools appropriate for executing ParSSim in a single computing environment. If this proves successful, we will further investigate more complex programming tools such as Globus, which provides the infrastructure needed to execute in a metacomputing environment.
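The request/response pattern behind such remote launching can be sketched in miniature. The example below uses Python sockets in place of a Java applet, and the "RUN"/"JOB" message format, the job id, and the port handling are all invented for illustration; the real tools would sit in front of CEWES MSRC queueing and authentication:

```python
# Toy "launch server": a client submits a job description over TCP and
# receives a job id back.  Protocol and messages are hypothetical.
import socket
import threading

def launch_server(srv):
    # accept one connection, read the job request, reply with a job id
    conn, _ = srv.accept()
    request = conn.recv(1024).decode()
    conn.sendall(("JOB 42 QUEUED: " + request).encode())
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # ephemeral port on localhost
srv.listen(1)
t = threading.Thread(target=launch_server, args=(srv,))
t.start()

cli = socket.socket()
cli.connect(("127.0.0.1", srv.getsockname()[1]))
cli.sendall(b"RUN parssim nprocs=8")
cli.shutdown(socket.SHUT_WR)        # tell the server we are done sending

chunks = []
while True:                         # read until the server closes
    data = cli.recv(1024)
    if not data:
        break
    chunks.append(data)
reply = b"".join(chunks).decode()
cli.close()
t.join()
srv.close()
print(reply)
```

The same client/server split carries over directly to an applet front end: the applet plays the client role, while the server side brokers access to the simulation and its data.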

3. Parallel Visualization Capability for Hydrodynamic Flow and Water Quality Models

Visualization capability which allows the user to view solutions as they are being generated on a parallel computing platform can greatly increase CEWES MSRC user productivity through ease of debugging, the ability to quickly modify input parameters, etc. VisTool, a client/server-based parallel visualization tool developed by Prof Chandra Bajaj of UT Austin, is publicly available and provides the necessary software infrastructure. The VisTool visualization libraries (isocontouring, volume rendering, streamlines, error-bounded mesh decimation, etc.) are callable from Fortran and C codes, and the server side has been demonstrated on Intel Paragon and Cray parallel machines. The client side relies on OpenGL and VTK library calls to render the application data generated by the simulation programs. Any OpenGL-capable machine (e.g., an SGI, or a PC with OpenGL) can be used as the client machine. Communications between client and server are performed through PVM-like communication calls. VisTool supports structured, unstructured, and mixed grids, and allows both scalar and vector data to be visualized. 2D cutting planes, 1D line probes, streamlines, tracers, ribbons, and surface tufts are available, and MPEG-format animations or PostScript files can be generated for presentations. We will develop the appropriate interface routines necessary to visualize flow and transport solution output generated, for example, by the ParSSim code.
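As a flavor of the kind of computation behind a streamline or tracer module, the following sketch (not VisTool's actual API; the rotational velocity field, seed point, and step sizes are assumed purely for illustration) integrates a single streamline through a steady 2-D flow with forward Euler steps:

```python
# Illustrative streamline integration: advance a particle through a
# steady 2-D velocity field with forward Euler steps.

def velocity(x, y):
    # hypothetical field: solid-body rotation about the origin
    return -y, x

def streamline(x, y, dt=0.01, steps=100):
    # returns the list of points visited, starting at the seed (x, y)
    path = [(x, y)]
    for _ in range(steps):
        u, v = velocity(x, y)
        x, y = x + dt * u, y + dt * v
        path.append((x, y))
    return path

path = streamline(1.0, 0.0)
print(len(path), "points; last point:", path[-1])
```

A production library would use a higher-order integrator and interpolate velocities from the simulation grid rather than evaluating an analytic field, but the structure is the same.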

------------------------------------------------------------------------------

FMS: Forces Modeling and Simulation/C4 CTA

Trends in both the military modeling and simulation (M&S) community and in the commodity distributed computing community point to the increasing convergence in the next few years of the DMSO-mandated base M&S technologies and commodity approaches involving Java, CORBA, and related tools. To highlight this convergence, NPAC researchers are currently implementing HLA's Run-Time Infrastructure component in Java and CORBA. This commodity-based "Object Web RTI" system will then be capable of supporting distributed execution of HLA-compliant simulation applications, while at the same time offering the possibility of taking advantage of all the capabilities of the rapidly advancing field of commodity web technologies. In parallel, the FMS support team is also investigating the Comprehensive Mine Simulator (CMS), from Steve Bishop's group at the US Army's Night Vision Directorate, Ft. Belvoir, Virginia. CMS is currently capable of handling 30,000-50,000 mines on a single-processor workstation, but clearly requires an HPC system to reach the target of 1,000,000 mines.

In the near future, there will be further convergence of M&S technologies with commodity distributed computing; there is already serious discussion in the field of turning HLA into a CORBA service, for example. With this convergence, and as more FMS applications move to the HLA standard, the two aspects of the PET program's approach to the FMS field will also converge: as applications such as CMS become HLA-compliant, they can be linked into larger distributed simulations, using Object Web RTI technology to connect multiple HPCC systems together.

-----------------------------------------------------------------------------

SV: Scientific Visualization

In an effort to serve the overall user community in the long term, a strategic 5-year SV plan was developed. This SV strategic plan provides a vision for a Visual High Performance Computing environment designed to support and enhance productivity for CEWES MSRC researchers. It also provides a framework in which to organize and prioritize activities within the CEWES MSRC PET visualization program. The SV strategic plan examines three components:

(a) anticipated changes in user needs over the 5-year lifetime of PET,

(b) expected evolution of certain technologies over this time period, and

(c) a plan for PET efforts that will take advantage of emerging technologies to address changing user needs.

This 5-year SV strategic plan is available at http://www.ncsa.uiuc.edu/Vis/PET/Strategy.

Continuing effort in support of SV will focus on the following:

1. VisGen

The CEWES MSRC PET team's work on the VisGen tools will continue in Year 3. Extending this tool with functionality needed by Dr Cerco's EQM group at CEWES, and bringing it to the maturity needed in a production-level tool, is a high priority. Adapting the tool to also run on the CEWES MSRC ImmersaDesk will provide a cross-platform visual analysis tool, allowing the researcher to work on the platform best suited for the type of analysis needed. The ImmersaDesk version of the VisGen tool will also incorporate a speech interface and, eventually, audio output to augment or reinforce the visual representation.

2. Interactive Computation

The CEWES MSRC PET team for SV will also embark on an exploration of software designed to support interactive computation. Many researchers have expressed the need to monitor their simulations as they execute. This would allow a researcher to abort a run that appears to be on an unfruitful path, perhaps because of badly stated input conditions. In some cases, it might be appropriate for researchers to modify parameters of a running simulation. A variety of software, available from academia and government labs, exists to support these capabilities. The CEWES MSRC PET SV team will apply these strategies to various user codes in order to characterize the current support systems for computational monitoring and steering. In particular, we will apply these strategies to CTH and CE-QUAL-ICM, furthering our support for these research communities.
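The steering pattern described above can be sketched as a solver that polls a control channel on each step. The interface below is invented for illustration (it does not reproduce any of the packages under survey); an in-process queue stands in for the connection to a monitoring GUI:

```python
# Toy computational steering: the solver checks a control channel each
# step, so a monitor can change a parameter or abort a run mid-flight.
import queue

control = queue.Queue()          # stands in for a socket from a monitor GUI

def run(steps, dt):
    x = 0.0
    for step in range(steps):
        try:
            cmd, value = control.get_nowait()
            if cmd == "abort":
                return step, x   # stop an unfruitful run early
            if cmd == "set_dt":
                dt = value       # steer a parameter of the running solver
        except queue.Empty:
            pass                 # no command this step; keep computing
        x += dt
    return steps, x

# The "monitor" queues two commands before the run starts:
control.put(("set_dt", 0.5))     # seen on step 0
control.put(("abort", None))     # seen on step 1
steps_done, x = run(100, 0.1)
print("stopped after", steps_done, "steps, x =", x)
```

In a real deployment the polling cost must be kept negligible relative to a time step, which is one of the criteria on which the candidate packages will be compared.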

3. Visualization Workshop

Now that we are fully staffed, the CEWES MSRC PET SV team will extend its user contact activities in Year 3. We are contacting a new group of users, in Vicksburg and elsewhere, to initiate a relationship and assess their needs. We plan to organize a visualization workshop to share information about possible solutions with a significant number of CEWES MSRC users. Additionally, if areas are identified where existing solutions are inadequate, we can use that information to plan future CEWES MSRC PET activities.

4. Jackson State

The CEWES MSRC SV PET team will continue to work with our colleagues at Jackson State University, particularly Ms Milti Leonard and Mr Edgar Powell. In this relationship, we will continue to find ways to utilize their current skills and provide opportunities for skill building. Finally, we will continue our end-user training efforts, and will extend our offerings in web-based delivery of information on visualization tools.

------------------------------------------------------------------------------

SPPT: Scalable Parallel Programming Tools

The SPP Tools team's plans for the future extend our current projects, always designed with an eye toward improving the overall computing environment at the CEWES MSRC. As before, these can be divided into four categories:

(a) Working directly with users.

(b) Supplying essential software.

(c) Training in the use of that software.

(d) Tracking and transferring technology.

A more detailed strategy for the Scalable Parallel Programming Tools technology area is available at http://www.crpc.rice.edu/DoDmod/Working-Papers/Tools.html. We now discuss the team's tactical approach.


Continuing effort in support of SPP Tools will focus on the following:

1. Working Directly with Users

The focus of our user interactions remains Dr Clay Breshears, the Rice on-site SPP Tools lead. He is the first point of contact for any user, whether from DoD, contractors, or PET partners, for tools-related questions. Since most parallel programming relies heavily on tools (including libraries, runtime systems, and compilers), he is a natural point of contact for many general questions about migrating codes to, or developing codes on, parallel machines. Important sources of user contacts for Breshears include the CEWES MSRC "help" system, his collaborations with the Code Migration Team, and his involvement in teaching courses. Breshears' work with users is augmented by visits from the other SPP Tools team members. Dr Chuck Koelbel, Dr Ehtesham Hayder, and Shirley Browne each visit the CEWES MSRC approximately four times per year, and plan to increase those visits based on new focused efforts.

In addition to continuing the user collaborations mentioned above, we are planning a number of new activities with users in Year 3. Perhaps most notable among these is Hayder's work on the HELIX code. This is a turbulent flow code referred to the Rice team members by the Code Migration Team. Although parallelization was working fairly well, the code suffered from below-par single-processor performance. Hayder, in consultation with other Rice University researchers, is analyzing the code for memory hierarchy pathologies and other potential problems. The Code Migration Team reports that many other codes they examine have similar inefficiencies; while it is much too early to speculate whether the causes are similar, it is clear that we will have many targets of opportunity for applying the compiler optimizations pioneered at Rice.
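A generic example of such a memory hierarchy pathology, and of the loop-interchange fix for it, is sketched below; the HELIX code itself is not reproduced here, so the array, its size, and the loops are purely illustrative:

```python
# Illustrative memory hierarchy pathology: traversing a 2-D array against
# its storage order.  The access pattern, not the arithmetic, is the bug.
import time

n = 400
a = [[float(i + j) for j in range(n)] for i in range(n)]

def sum_row_major(a):
    # visits elements in storage order: good locality
    s = 0.0
    for row in a:
        for x in row:
            s += x
    return s

def sum_col_major(a):
    # strides across rows on every access: poor locality
    s = 0.0
    for j in range(len(a[0])):
        for i in range(len(a)):
            s += a[i][j]
    return s

t0 = time.perf_counter(); s1 = sum_row_major(a); t1 = time.perf_counter()
s2 = sum_col_major(a);    t2 = time.perf_counter()
print(f"row-major {t1 - t0:.4f}s, col-major {t2 - t1:.4f}s")
```

Both loops compute the same sum; in a compiled Fortran or C code of realistic size the column-against-storage version can be several times slower purely from cache misses, which is exactly the class of problem the Rice loop transformations target.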

2. Supplying Essential Software

Although code migration projects are helpful to the individual users of those codes, real leverage to build up an HPCMP-wide programming environment comes from supplying more generally applicable software. We will continue working closely with MSRC staff to install and evaluate potentially useful new tools. We have targeted several tools for introduction at CEWES MSRC in the next year: MPI-IO, the first portable parallel I/O interface; PETSc, a high-quality scientific library that has been used in many CFD applications; CAPTools, a semi-automatic parallelization tool that could significantly aid code migration efforts; and OpenMP, an emerging standard in shared-memory programming. In addition, we are tracking upgrades to several existing packages.

One of the most exciting Tools projects proposed for Year 3 is the implementation of MPI-2 by a group led by Tony Skjellum at Mississippi State. MPI (Message Passing Interface) is an extremely successful standard for two-sided communication between processes, for example the executable programs on two nodes of a distributed-memory machine; in effect, it is the foundation for portable parallel programming on most machines. MPI-2 extends this with I/O operations, one-sided communication, and dynamic process management. The MSU group has proposed an aggressive 2-year project to supply highly efficient implementations on two machines of interest to the CEWES MSRC, with the resulting software to be phased in according to a strict schedule. If this focused effort is funded, it will be a great step forward for the MPI community, both within and outside DoD.

Although larger problems than ever before can be solved on today's scalable parallel computers, DoD users need to solve even larger problems and to couple independently developed portions of a problem running on different computer systems. This motivates an investigation of intercommunication and metacomputing technologies such as MPI-Connect, NetSolve, and SNIPE, being developed at UTK, as well as the Globus and Legion systems. Specific requests by application areas for MPI intercommunication between all MSRC platforms warrant the effort to port MPI-Connect to these platforms, as well as to develop additional capabilities, such as parallel I/O for virtual file sharing.

We also plan to enlarge the populations of existing software repositories and add new, domain-specific repositories. Shirley Browne of UTK is spearheading this effort. The importance of these repositories is that they give users a single source for obtaining high-quality code, rather than continually reinventing the wheel (or, worse yet, failing to reinvent it and struggling with square wheels). This advantage has led the National Computational Science Alliance to adopt Repository in a Box (RIB) for its deployment mission.

3. Training

We will continue to expand the training courses offered, both by repeating popular courses (e.g. Parallel Performance Tools) and developing new ones. Some specific training plans for the next year include:

(a) Involvement in the JSU Summer Institute (Koelbel, Breshears)

(b) Experiences Porting Scientific Codes (Hayder)

(c) OpenMP (Goff)

Some of these are dependent on focused efforts that, as of this writing, have not been formally approved.

4. Tracking and Transferring Technology

We will continue our involvement in standards efforts in the tools area, including recently formed groups such as the Paradyn/DAIS group. Attempts to use trace-based, post-mortem performance analysis tools "out of the box" with large-scale DoD applications have demonstrated their limitations. A common platform-independent infrastructure for runtime attachment and monitoring would benefit not only performance analysis, but also interactive debugging and data visualization, and would ease the task of tool writers. Such an infrastructure is being developed by the Paradyn research group at the University of Wisconsin and the University of Maryland, and by an IBM parallel tools team led by Douglas Pase. The infrastructure is a client-server system called the Dynamic Application Instrumentation System (DAIS), built on top of the low-level dyninst instrumentation library used by Paradyn. During the next year, we plan to continue to participate in the dyninst/DAIS standardization effort and to help focus that effort by driving it with end-user tools of importance to DoD users.
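As a toy analogue of runtime attachment, the sketch below uses Python's built-in trace hook to count calls in a running program without editing its source; this illustrates only the monitoring idea, not the dyninst/DAIS interfaces themselves:

```python
# Illustrative runtime monitoring: attach a trace hook to count function
# calls in unmodified code, then detach.  dyninst does the analogous
# thing at the machine-code level on a running process.
import sys
from collections import Counter

calls = Counter()

def tracer(frame, event, arg):
    # invoked by the interpreter on every function call
    if event == "call":
        calls[frame.f_code.co_name] += 1
    return None

def work():
    # stands in for the unmodified application being monitored
    return sum(i * i for i in range(10))

sys.settrace(tracer)             # "attach" the instrumentation
for _ in range(3):
    work()
sys.settrace(None)               # "detach" again
print("work() was called", calls["work"], "times")
```

The key property shared with DAIS is that the instrumented program needed no source changes or recompilation; the hook was attached and removed at run time.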

We will continue to be active in the national HPC community, both to bring new, promising technologies into the CEWES MSRC and to present our progress to our peers. A key part of this, as mentioned above, is attendance at conferences and professional meetings, with extensive trip reports to update DoD on the new technologies found there. Foremost among these is the annual PTOOLS meeting. PTOOLS is a consortium of users and developers who work together to build new, useful parallel tools; one of their projects was to start the High Performance Debugging Forum (HPDF). We also plan trips to SC'98, SIAM Parallel Processing, and other parallel processing meetings with a heavy software emphasis during the year. Finally, Rice University is hosting the 1998 DoD High Performance Computing Users Group conference, where we will surely see many advances in the field.

------------------------------------------------------------------------------

C/C: Collaboration/Communication

Continuing C/C core support efforts at the CEWES MSRC PET will focus on increased user outreach and further development of the website infrastructure to improve information dissemination. Focused efforts are planned to provide web-based training support and an intranet environment to support team collaboration. Planned outreach activities include interaction with the CEWES MSRC user community through user surveys and face-to-face meetings. Information on C/C activities and technologies will be provided through updates to the C/C webpages and seminars on technologies as appropriate. A sense of community among the webmasters shall also be fostered through additional face-to-face meetings as well as online meetings.

Website development efforts will involve the redesign of the CEWES MSRC PET web site framework to improve the overall uniformity and usability of the sites across MSRCs. New web technologies will be implemented where appropriate to provide state-of-the-art capabilities to users and content providers. Guidance and training to PET web content providers will be developed to ensure consistency across sites. Also, as required, tools such as an MSRC PET graphics repository will be developed to assist in the development of PET webpages.

Training support will be provided by assisting CTA leads in selecting and utilizing new asynchronous training technologies to enable them to deliver web-based training courses to the HPC community. These self-paced courses can be taken on an anytime, anyplace, any-pace basis as determined by the user. This effort involves selection of state-of-the-art asynchronous training tools for use in developing multimedia short courses, and assistance in developing courses utilizing the recommended tools. This assistance will include development of course materials to demonstrate capability, training on how to use the tools, and guidelines on course development using web-based training tools.

Collaboration support will be provided through the development of a fully integrated environment, implemented incrementally, with capability added as collaborative tools that meet user needs are identified. Initial capabilities will include a fully functional calendar and scheduling capability, an improved threaded discussion capability which meets specific CEWES MSRC requirements, and meeting support including whiteboard, chat, and audio conferencing. Long-term capabilities will include evaluation and improvement of the current videoconferencing capability at the CEWES MSRC, implementation of a database backend to support user customization of the collaborative environment, and development of a web-based front end to support easy and intuitive access to all collaborative capabilities provided. Collaborative activities within the CEWES MSRC PET user community will be identified as test areas for deployment of these tools.

------------------------------------------------------------------------------