STC Faculty Members:
Geoffrey Fox, Project Director
For a decade, CRPC research thrust areas have applied emerging technologies in practical proving grounds such as Grand Challenge projects, industrial applications, educational applications, and projects with other institutional or agency partners. CRPC involvement in Grand Challenges has run its course, but involvement in applications is increasing through the NSF PACI partnerships, DARPA, the ASCI Centers of Excellence, the DOD modernization program, a DOE/AFOSR/NSF/Boeing collaboration, and other government agencies and companies. Further, the NSF's high-performance connection grants are expediting the application of new technologies by researchers at several CRPC sites. These efforts will continue after the CRPC closes.

On a national scale, CRPC Director Ken Kennedy's co-chairmanship of the Presidential Information Technology Advisory Committee (PITAC) has supported the future development of important national applications initiatives. The final PITAC Report to the President, released in February 1999, concluded that the government was under-investing in long-term IT research relative to its importance to the nation. The committee, composed of leaders from industry and academia, concluded that the private sector was unlikely to invest in the long-term, fundamental research needed to sustain the Information Revolution. Largely as a result of these recommendations, President Clinton and Vice President Gore proposed a $366 million (28%) increase in the government's investment in information technology research, known as IT2 (Information Technology for the Twenty-first Century).

Potential breakthroughs that may be possible as a result of IT2 include:

(1) computers that can speak, listen, and understand human language; are much easier to use; and accurately translate between languages in real time;

(2) "intelligent agents" that can roam the Internet on our behalf, retrieving and summarizing the information we seek from a vast ocean of data;

(3) a wide range of scientific and technological discoveries made possible by simulations running on supercomputers accessible to researchers all over the country;

(4) networks that can grow to connect not only tens of millions of computers, but hundreds of billions of devices;

(5) computers that are thousands of times faster than today's supercomputers, or are based on fundamentally different technology, such as biological or quantum computing; and

(6) new ways of developing complex software that are more reliable, easier to maintain, and more dependable for running the phone system, the electric power grid, financial markets, and other core elements of our infrastructure.

(The PITAC Final Report can be found at http://www.ccic.gov/ac/report/. For more information on IT2, see http://www.whitehouse.gov/WH/EOP/OSTP/html/it2.html.)

Last year, the CRPC started a project encompassing sites at Argonne National Laboratory, Syracuse University, the University of Tennessee, and the University of Texas. This project involved building an ImmersaDesk visualization front-end for two applications (PARSSIM and IPARS) from the University of Texas. PARSSIM is an aquifer and reservoir simulator for incompressible, single-phase flow and reactive transport of subsurface fluids through a heterogeneous porous medium of somewhat irregular geometry. It can also simulate the decay of radioactive tracers or contaminants in the subsurface, linear adsorption, wells, and bioremediation. IPARS is the Integrated Parallel Accurate Reservoir Simulator. The ImmersaDesk visualization of the PARSSIM and IPARS applications is being integrated with the NetSolve and WebFlow technologies developed at Tennessee and Syracuse to provide a powerful problem-solving environment.

CRPC researchers are involved in Accelerated Strategic Computing Initiative (ASCI) projects that range from exploration of exploding stars and the size of the universe to simulation of advanced rockets and materials. They are working on Level 1 projects at the California Institute of Technology (Caltech), the University of Chicago, and the University of Illinois. In addition, teams at Rice University and Los Alamos National Laboratory (LANL) are involved in projects that support ASCI research in many areas.

Through DOE HPCC, DOE 2000, and ASCI funding, Los Alamos National Laboratory researchers have built the POOMA (Parallel Object-Oriented Methods and Applications) Framework, a C++ framework that provides high-level objects such as arrays, fields, meshes, and particle lists for high-performance simulations on parallel architectures. (See http://www.acl.lanl.gov/Pooma.)
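
The flavor of POOMA's high-level objects can be suggested with a rough analogue: whole-array ("data-parallel") expressions instead of element-by-element loops. The sketch below uses NumPy purely as a stand-in; POOMA itself is C++, and its Array and Field classes evaluate such expressions in parallel:

```python
import numpy as np

# Rough single-process analogue of the POOMA programming style: a field is
# updated with one whole-array expression rather than explicit loops. NumPy
# stands in for POOMA's C++ Array/Field objects purely for illustration.
nx = 32
field = np.zeros(nx)
field[nx // 2] = 1.0                      # unit spike in the middle

# Ten Jacobi-style smoothing sweeps, each written as one array expression,
# the way one would write them with POOMA arrays and index objects.
for _ in range(10):
    field[1:-1] = 0.25 * field[:-2] + 0.5 * field[1:-1] + 0.25 * field[2:]
```

The interior update conserves the total of the field while spreading the spike, which is easy to check and mirrors how conservative stencil sweeps are expressed in such frameworks.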

Argonne National Laboratory (ANL) CRPC researchers are collaborating with the University of Chicago Center for Astrophysical Thermonuclear Flashes to study the physics of exploding stars and the nuclear detonations that occur when matter in space is crushed by gravity onto the surfaces of extremely dense stars. Exploding stars, or supernovae, emit 10 billion times more power than the sun and shine as brightly as an entire galaxy of stars. Learning more about supernovae will help answer important questions about the universe, including its true size.

Also at LANL, the Delphi Project, led by CRPC Site Director Andrew White, builds upon and complements accomplishments made through ASCI. The project is helping to define a new era of high-performance computing that focuses on predictive simulation of complex systems to solve time-critical problems in the national interest. One of these applications aims to achieve predictive models of the Global Climate System. Other applications include forecasting natural hazards, managing disasters, and protecting, maintaining, and improving infrastructures. (See http://www.lanl.gov/delphi/.)

CRPC researchers at Caltech, through the Center for Simulation of Dynamic Response of Materials, are constructing a virtual shock-physics facility in which the full three-dimensional response of a variety of target materials can be computed. This involves a wide range of compressive, tensional, and shear loadings, including those produced by the detonation of energetic materials. (See http://www.cacr.caltech.edu/ASAP/.)

At the University of Illinois at Urbana-Champaign, researchers at the CRPC affiliated site provide rocket-system designers with integrated simulation tools for evaluating various design options. Complex simulations of subscale physics are also being carried out using many of the same software components developed for the system simulation.

Comprehensive simulation will provide a much safer and less expensive way to investigate technical issues in rocket design than traditional methods based on experimental trial and error. Improved understanding of the behavior of solid propellant rockets will also have direct benefits for closely related technologies, such as gas generators used for automobile air bags and fire suppression, as well as many other technological design problems that involve complex components and require similar levels of system integration. The computational capabilities developed to support this effort will be applicable to a wide variety of important problems in computational science and engineering, including fluid dynamics, combustion, and failure of materials. (See http://www.csar.uiuc.edu/F_info/AboutCSAR.htm.)

At Rice University, CRPC researchers target two research thrusts in ASCI projects: compilers and tools for improving the memory-hierarchy performance of complex application codes, and support for portable shared-memory programming on clusters of workstations and shared-memory multiprocessors. The objective of the compilers and tools project is to conduct research on programming-environment technology that facilitates optimization of ASCI applications for a range of current and future teraflop-scale architectures. The research is motivated by the challenges of obtaining very high sustained performance on highly parallel systems with complex multi-level memory hierarchies. The specific focus is on software technology to support both automatic and interactive optimization of codes written using the explicitly parallel programming models used by ASCI scientists. The compilers and tools will leverage results from previous projects at Rice, particularly ParaScope and the D System. For more information on the ASCI/ASAP program, see http://www.llnl.gov/asci-alliances.
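
One concrete example of the memory-hierarchy optimizations such compiler tools target is loop tiling (cache blocking). The sketch below shows only the shape of the transformation, on a matrix multiply; the payoff appears in compiled code, where each tile of the operands stays resident in cache while it is reused:

```python
# Loop tiling (cache blocking): the loop nest is restructured so that each
# small block of a, b, and c is reused while it is still in cache. Python
# shows the transformation's structure only, not its performance effect.
def matmul_tiled(a, b, n, tile=4):
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                # work on one tile of c using one tile each of a and b
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        aik = a[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            c[i][j] += aik * b[k][j]
    return c

n = 6
a = [[float(i == j) for j in range(n)] for i in range(n)]    # identity
b = [[float(i * n + j) for j in range(n)] for i in range(n)]
c = matmul_tiled(a, b, n)
```

Because the inner arithmetic is unchanged, the tiled version computes exactly the same product as the untiled triple loop.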

National Partnership for Advanced Computational Infrastructure (NPACI) support has enabled the University of Tennessee to apply CRPC's NetSolve (http://www.cs.utk.edu/netsolve/) in the MCell simulation program developed at the Salk Institute. This program allows simulation of microscopic cellular processes, such as how neurotransmitters diffuse and activate receptors in synapses between different cells. The NetSolve system allows users to distribute processing workload and access computational resources across a network. MCell, with the help of NetSolve, can now distribute its heavy workload among several machines simultaneously. (See http://www.npaci.edu/online/v2.16/ under "Research" for more details.)
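
The pattern NetSolve provides to MCell, farming independent simulation tasks out to available compute servers and gathering the results, can be sketched locally. In this toy version a Python thread pool stands in for NetSolve's networked servers, and the task function is a hypothetical stand-in for one MCell run (the NetSolve client API itself is different):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one MCell parameter-sweep run: each task is
# independent, so a pool of workers (NetSolve's servers, in the real system)
# can process them simultaneously.
def mcell_task(seed):
    x = seed
    for _ in range(1000):                      # pretend this is a long run
        x = (1103515245 * x + 12345) % (2 ** 31)
    return seed, x % 100

seeds = list(range(8))
with ThreadPoolExecutor(max_workers=4) as pool:
    # farm the tasks out and collect {seed: result} as they complete
    results = dict(pool.map(mcell_task, seeds))
```

Because the runs share no state, the speedup is limited only by the number of workers, which is exactly why MCell's workload maps so well onto NetSolve.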

Alliance support enables CRPC researchers at Syracuse and ANL to use their HPCC commodity metacomputing approach to provide a high-level interface to Globus, applied to Quantum Simulation Grand Challenges; this is the first general-purpose commodity front end to HPCC technology. This work continues with CRPC Associate Dennis Gannon and CRPC researchers at Syracuse applying a general architecture for Web-based problem-solving environments to a chemical engineering application. (See http://www.npac.syr.edu/users/haupt/WebFlow/demo.html.)

CRPC researchers at Caltech have been advancing DARPA-funded Parallel C3I (Command, Control, Communication, and Intelligence) applications, DARPA-funded aircraft control systems applications, and DOE ASCI/ASAP-funded materials science simulation applications using commodity hardware and software. Using multithreaded libraries and preprocessors, they have shown that traditional supercomputing applications can be greatly simplified, without sacrificing performance, by taking full advantage of shared memory and lightweight threads. Important developments include the demonstration of the applicability of commodity multiprocessors to large-scale simulations that were previously feasible only on expensive supercomputers. The researchers are now expanding their programming system to support C++ and possibly Java implementations. (See http://threads.cs.caltech.edu/threads/.)

The DOD has sponsored the recently completed Rice/University of Tennessee project ScaLAPACK, a library for dense linear algebra on distributed-memory machines. ScaLAPACK has been used to achieve a significant performance improvement in the Global and Basin-Scale Ocean Modeling and Prediction Challenge Project described at http://www.hpcm.dren.net/Htdocs/Challenge/FY98/index.html. OCEANS is an ocean circulation model that employs second-order finite differences with an explicit leapfrog time-stepping scheme on a logically rectangular grid. The OCEANS preprocessor constructs and inverts large systems of equations that represent boundary conditions for the OCEANS program. Use of ScaLAPACK routines resulted in a factor of three to six speedup over the previous version of the code.
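
The leapfrog time stepping OCEANS uses can be illustrated on a toy problem. The sketch below (an assumed model problem, not the OCEANS code) advects a sine wave on a periodic 1D grid: each new value depends on the value two time levels back and a centered space difference at the middle level, making the scheme second order in both space and time:

```python
import math

# Toy 1D illustration of explicit leapfrog time stepping on the advection
# equation u_t + a*u_x = 0 with periodic boundaries. "courant" is the
# Courant number a*dt/dx, which must stay <= 1 for stability.
n, courant, steps = 64, 0.5, 40
u_old = [math.sin(2 * math.pi * i / n) for i in range(n)]            # level 0
# the first step is taken exactly (a shift) so leapfrog has two levels
u = [math.sin(2 * math.pi * (i - courant) / n) for i in range(n)]    # level 1

for _ in range(steps):
    u_new = [u_old[i] - courant * (u[(i + 1) % n] - u[(i - 1) % n])
             for i in range(n)]
    u_old, u = u, u_new                      # leapfrog over two time levels
```

After these steps the wave has moved courant * (steps + 1) = 20.5 grid points; comparing against the shifted exact solution exposes only the scheme's small dispersive phase error.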

CRPC researchers at Tennessee are also collaborating with DOD application scientists on a solution using iterative methods that is expected to achieve an efficient parallel implementation of the DOD HPC CHSSI CEA-3 Spectral Domain LO Component Design Code. The application requires solution of a sparse, complex, symmetric linear system. Although library software does not currently exist for this case, sufficient research has been done on the complex symmetric problem to allow implementation of effective methods. This project is expected to result in an efficient implementation of the CHSSI CEA-3 code that will require considerably less memory than the current implementation. The project is also expected to produce sparse linear system solver methods and software for the complex symmetric case that will benefit CEA and other DOD users who need to solve similar problems. (See http://www.hpcm.dren.net/Htdocs/CTAs/cea.html for more information about CEA-3. See http://www.cs.utk.edu/~eijkhout/parpre.html for more information about parallel preconditioners for iterative methods.)
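
For complex symmetric systems (A equal to its transpose but not Hermitian), one effective iterative method from the literature is COCG, conjugate-orthogonal CG: ordinary CG with the unconjugated bilinear product r^T r in place of the Hermitian inner product. Whether COCG is the method this project ships is not stated above, so the dense, serial sketch below (with an assumed 3x3 test system) is purely illustrative; the real code would be sparse, preconditioned, and parallel:

```python
# COCG sketch for complex symmetric A (A == A^T, not Hermitian): identical
# to CG except every inner product is the unconjugated bilinear form.
def cocg(A, b, tol=1e-10, maxit=100):
    n = len(b)
    x = [0j] * n
    r = b[:]
    p = r[:]
    rho = sum(ri * ri for ri in r)                 # r^T r, no conjugation
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rho / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if max(abs(ri) for ri in r) < tol:
            break
        rho, rho_old = sum(ri * ri for ri in r), rho
        beta = rho / rho_old
        p = [r[i] + beta * p[i] for i in range(n)]
    return x

# Small complex symmetric test system (note A equals its transpose).
A = [[4 + 1j, 1 + 2j, 0j],
     [1 + 2j, 5 + 1j, 1j],
     [0j,     1j,     3 + 2j]]
x_true = [1 + 0j, 2j, -1 + 1j]
b = [sum(A[i][j] * x_true[j] for j in range(3)) for i in range(3)]
x = cocg(A, b)
```

Exploiting symmetry this way halves the work and memory relative to treating the system as general non-Hermitian, which is the point of building solvers for this special case.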

CRPC members at Texas have been involved in the parallel migration of the hydrodynamics code ADCIRC and the reactive transport code CE-QUAL-ICM as part of an environmental quality modeling effort for the DOD Modernization program at the ERDC (Engineer Research and Development Center) MSRC. This work has enabled the ERDC to run 10-year simulations.

CRPC researchers at Syracuse have developed application environments for Landscape Management at the ERDC and a rich Chemistry Problem Solving Environment at the ASC.

Funds for the following applications efforts at Rice come from DOE, Boeing, the NSF, and the Air Force Office of Scientific Research (AFOSR):

The CRPC is working with a group in Boeing Commercial Airplane Group (BCAG) to reduce the cycle time for designing nozzles, the inside parts of engine housings. The current length of a design cycle is two weeks. CRPC researchers expect to reduce that to approximately one day.

The CRPC is working with Boeing on designing planforms, the shape of the wing as viewed from above. This effort addresses difficult design problems involving multiple objectives and nonlinear constraints. Tests against standard baselines produced solutions that met all constraints while showing economic benefits. In addition, these tests demonstrated actual integration of the analyses from the different disciplines, as well as speed advantages.

The CRPC is working with Boeing on problems involved in designing helicopter rotor blades. Wake simulation is the hardest part of the problem, and the effects at the tip of the rotor blade are the most difficult to simulate: a single wake simulation takes about four hours on 16 fat nodes of an SP2. The CRPC and Boeing collaborators built a new model that reduces the cost to minutes and the bandwidth of the coupling from hundreds to tens. The resulting software was the first to find the best-known simulation-based design.

In the spring, we delivered to our Boeing colleagues a beta release of the model-management framework implementation as part of the C++ software package FOCUS (Framework for Optimization with Constraints Using Surrogates). A more flexible version became available in August and is now being used at Boeing. The implementation has progressed to the point that a version is freely available on the Internet at http://www.caam.rice.edu/~dougm/focus.html.
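
The idea behind surrogate-based optimization of the kind FOCUS manages can be sketched simply: fit a cheap model to a few expensive samples, evaluate the true objective at the model's minimizer, and refit. The objective and update rule below are hypothetical illustrations of the concept, not the FOCUS algorithm or its API:

```python
# Toy surrogate-management loop: a quadratic through three samples stands in
# for the cheap model, and "expensive" stands in for a costly simulation.
def expensive(x):
    return (x - 1.7) ** 2 + 0.5          # pretend each call costs hours

def quad_min(pts):
    # Vertex of the parabola interpolating three (x, f) samples.
    (x0, f0), (x1, f1), (x2, f2) = pts
    num = (x1**2 - x2**2) * f0 + (x2**2 - x0**2) * f1 + (x0**2 - x1**2) * f2
    den = 2 * ((x1 - x2) * f0 + (x2 - x0) * f1 + (x0 - x1) * f2)
    return num / den

pts = [(x, expensive(x)) for x in (0.0, 1.0, 3.0)]
for _ in range(5):
    x_new = quad_min(pts)                # minimize the cheap surrogate
    if any(abs(x_new - x) < 1e-12 for x, _ in pts):
        break                            # surrogate has converged
    pts.append((x_new, expensive(x_new)))  # one true evaluation per cycle
    pts = sorted(pts, key=lambda p: p[1])[:3]  # refit on the best samples
best_x, best_f = min(pts, key=lambda p: p[1])
```

The expensive function is called only a handful of times, which is the whole appeal when each evaluation is an hours-long simulation.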

CRPC-developed Parallel Direct Search (PDS) continues to be used at Boeing for Just-in-Time manufacturing of aircraft parts.
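
PDS belongs to the family of derivative-free direct search methods: poll a pattern of trial points around the current iterate, accept any improvement, and shrink the step when none is found; PDS evaluates the poll points in parallel. The serial compass-search sketch below, with a hypothetical objective, illustrates the idea only and is not the PDS library:

```python
# Serial compass search: the simplest member of the direct-search family
# that PDS implements in parallel. No derivatives are ever computed.
def compass_search(f, x, step=1.0, tol=1e-6, max_iter=1000):
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):              # poll +/- step in each axis
            for delta in (step, -step):
                trial = x[:]
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5                      # no poll point improved: refine
            if step < tol:
                break
    return x, fx

obj = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2   # hypothetical objective
x, fx = compass_search(obj, [0.0, 0.0])
```

Because the poll points are independent, a parallel version such as PDS can evaluate them all simultaneously, which suits objectives defined by manufacturing or simulation codes.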

Caltech-, LANL-, and Syracuse-based CRPC researchers are designing a computational infrastructure to enable development and testing of models for earthquakes. This provides a unique application of Parallel Fast Multipole methods originally developed for astrophysics. Syracuse has prototyped an interesting environment to support collaborative analysis of sensor data by scientific researchers after an earthquake. (See http://www.npac.syr.edu/projects/gem/.)

NASA Langley is funding a Rice-based CRPC effort to enhance the ADIFOR automatic differentiation software. The resulting ADIFOR 3.0 software has been applied to the CFL3D (Computational Fluid Dynamics 3-Dimensional) code, which is maintained and distributed by the Aerodynamics and Acoustic Methods Branch at NASA Langley. CFL3D is a Navier-Stokes flow solver for structured grids. The sensitivity-enhanced version of CFL3D is being used at Boeing and NASA. ADIFOR 3.0 has also been applied to the Euler2d hydrocode from Los Alamos, with funding from the Los Alamos Computer Science Institute.
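
ADIFOR works by transforming Fortran source so that derivative computations propagate alongside the original ones. The chain-rule bookkeeping it automates can be illustrated with a small forward-mode sketch; operator overloading here stands in for ADIFOR's source transformation, so this is the concept only, not ADIFOR's mechanism:

```python
import math

# Forward-mode automatic differentiation with dual numbers: each value
# carries its derivative, and arithmetic applies the chain rule exactly.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)   # product rule
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

# Value and derivative of f(x) = x*sin(x) + 2x at x = 1.5, in one pass.
x = Dual(1.5, 1.0)           # seed dx/dx = 1
y = x * x.sin() + 2 * x      # y.val is f(1.5), y.dot is f'(1.5)
```

Unlike finite differences, the derivative is exact to machine precision, which is what makes AD-generated sensitivities like CFL3D's trustworthy for design.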

At Rice, researchers are studying and applying new global optimization techniques for computational problems in X-ray crystallography, with support from the NSF.
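
As one simple illustration of a global-optimization strategy (the crystallography work above uses more sophisticated techniques than this sketch): run cheap local descents from many starting points on a multimodal objective and keep the best minimum found. The objective below is a hypothetical stand-in:

```python
import math

# Multistart global optimization, sketched: many local minima exist, so a
# single local descent can get stuck; restarting from a grid of points and
# keeping the best result recovers the global minimum with high probability.
def objective(x):
    return (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x)   # many local minima

def local_descent(x, step=0.1, iters=200):
    fx = objective(x)
    for _ in range(iters):
        for trial in (x + step, x - step):            # simple coordinate poll
            ft = objective(trial)
            if ft < fx:
                x, fx = trial, ft
        step *= 0.95                                  # slowly refine the step
    return x, fx

starts = [-5.0 + 0.5 * i for i in range(21)]          # deterministic restarts
best_x, best_f = min((local_descent(s) for s in starts), key=lambda p: p[1])
```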

With funding from the Pacific Northwest National Laboratory, CRPC researchers at Syracuse University are developing new methodology, algorithms, and tools for parallel computational chemistry. The project has achieved new records for high-quality RI-MP2 calculations and demonstrated the parallel scalability of the method. Additionally, work with the Global Array Toolkit and the Parallel Compiler Runtime Consortium libraries has led to improvements in the performance and interoperability of those tools. (See http://www.npac.syr.edu/users/bernhold/comp_chem/index.html.)


File translated from TEX by TTH, version 2.33.
On 29 Jun 2000, 20:33.