Given by Don Leskiw at Trip to China (Beijing, Harbin), June 23 - July 5, 1995. Foils prepared July 2, 1995
Abstract
What is NPAC? |
HPCC |
The HPCCI and NII, Grand and National Challenges |
The major part of the talk consists of images and text illustrating the Federal HPCC Program, downloaded from the 1996 Bluebook Web site |
Table of Contents
What is NPAC? |
HPCC |
Technical Topics (Opportunities for Collaboration) |
Northeast Parallel Architectures Center of Syracuse University |
Directed by Geoffrey C. Fox |
Over 25 Professional and Support Staff plus Computing Facilities |
New York State $1,700K 36.8% |
Industry Projects $430K 9.3% |
Federal Government $2,290K 49.5% |
Syracuse University $200K 4.3% |
Computer Science and Computer Engineering |
Physics Department |
Mechanical and Aerospace Engineering |
Environmental Engineering |
Chemistry (and other Science and Engineering Departments) |
School of Information Studies (IST) |
School of Education |
Newhouse School of Public Communications and University Electronic Media Services Group |
Maxwell School of Citizenship and Public Affairs |
Management School |
Applied Parallel Technologies, $2M from NIST |
Sonnet, from ARPA |
Syracuse Research Corporation, from Air Force |
Multi disciplinary Analysis and Design Industrial Consortium (MADIC), $1M from NASA |
Portland Group (licenses NPAC Fortran 90D technology), from ARPA |
The Ultra Corporation, $2M from US DoD |
Abrams/Gentile Entertainment, Inc. |
Applied Parallel Technologies |
Booz, Allen & Hamilton, Inc. |
Center for Research on Parallel Computation (CRPC) |
Central New York Technology Development Organization |
Columbia University |
Communigration, Inc. |
Computer Applications and Software Engineering (CASE) Center |
MasPar Computer Corporation |
Microelectronics and Computer Technology Consortium (MCC) |
Mohawk Valley Applied Technology Commission |
New York City Partnership, Inc. |
New York Photonics Development Corporation |
Niagara Mohawk Power Corporation |
Northeast Parallel Architectures Center (NPAC) |
NYNEX |
NYSERnet |
Oracle Corporation |
Par Technology Corporation |
Core Technologies R&D |
Computing and Infrastructure Facilities O&M |
Computational Science Research |
Computational Science Education |
Computer Science |
HPCC Technology Transfer and Commercialization |
What is NPAC? |
HPCC |
Technical Topics (Opportunities for Collaboration) |
What Is HPCC? |
What Is It Used For? |
Some History (Fox at Caltech/JPL) |
The Federally Funded HPCC and NII Initiatives |
Where Is It Going? (The Business Outlook) |
What Is "High Performance"
|
All Computing Is Constantly Improving
|
High Energy Physics |
Semiconductor Industry, VLSI Design |
Graphics and Virtual Reality |
Weather and Ocean Modeling |
Visualization |
Oil Industry |
Automobile Industry |
Chemicals and Pharmaceuticals Industry |
Financial Applications |
Business Applications |
Airline Industry |
Before 1980: ILLIAC IV, ICL DAP, MPP |
Early 1980s: HEP, Cray X-MP/22, NYU UltraComputer (and IBM RP3) |
1983: The Birth of the Hypercube: |
1983 - First 64-node Mark I Hypercube operational at CIT as a collaboration between Seitz & Fox (CrOS) |
1984 - JPL joins campus collaboration; designs and builds 32-node Mark II Hypercube |
1985 - 128-node Mark II operational |
1986 - Mark III operational (~10x performance of Mark II) |
1987 - Strategic Defense Initiative applications and simulations (Mercury and Centaur OS) |
1988 - 128-node > 1 gigaflop computer (Mark IIIfp) |
Mid-1980s: Sequent and Encore |
Late 1980s |
Early 1990s |
What is NPAC? |
HPCC |
Technical Topics (Opportunities) |
High Performance Computing Act of 1991 |
Computational performance of one trillion operations per second on a wide range of important applications |
Development of associated system software, tools, and improved algorithms |
A national research network capable of one billion bits per second |
Sufficient production of PhDs in computational science and engineering |
1992: Grand Challenges |
1993: Grand Challenges |
1994: Toward a National Information Infrastructure |
1995: Technology for the National Information Infrastructure |
1996: Foundation for America's Information Future |
High Performance Communications |
High Performance Computing Systems |
Advanced Software Technologies |
Technologies for Information Infrastructure |
High Performance Computing Research Facilities |
Grand Challenge Applications |
National Challenge Applications |
Basic Research and Human Resources |
Internetworking R&D |
Gigabit-Speed Networking |
Wireless Technologies |
Network Integrated Computing |
Enhanced Internet Connectivity |
Microsystems |
Embedded Systems |
Networks of Workstations (NOW) |
Rapid Prototyping Facility |
Specialized Very High Performance Architectures |
Mass Storage |
A wafer containing multiple PIM (Processor-in-Memory) chips, each with 128 Kb of memory and 64 processors. A quarter million (0.25M) of these processors with memory have passed initial testing in a single Cray-3 quadrant as part of the Cray-3/SSS (Super Scalable System), a joint venture between NSA and Cray Computer Corporation. |
Components of a prototype superconductive crossbar switch being developed at NSA. Data are transferred (via ribbon cable) from room temperature to cryogenic temperatures and back to room temperature at 2.5 Gb/s. A full 128-by-128 configuration is intended for use as a switch for massively parallel computer memory data transfers. |
Software Systems |
Scalable I/O |
Programming Languages and Compilers |
Computational Techniques |
Performance Measurement |
Snapshot of the dynamic patterns of read behavior in a parallel version of software to calculate electron-molecule cross-sections using a 128-processor Intel Paragon at Caltech's Concurrent Supercomputing Consortium. The axes are file open duration, file seek duration, and file read duration. The locations of the octahedra are the current values of each processor's performance metric. History ribbons show the last N positions for three selected octahedra (red is most recent). The Pablo software was used to produce this image. |
New techniques adaptively refine, de-refine, and partition meshes to accurately model rapidly changing solutions such as those that arise in simulating layered high temperature superconductors. |
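The refine/de-refine decision described above is typically driven by a local error indicator computed per cell. A toy 1-D sketch of that decision in Python (the tolerance, the endpoint-jump indicator, and all names here are illustrative assumptions, not the project's actual scheme; de-refinement would merge cells the same test marks as quiet, and partitioning would then redistribute cells across processors):

    import math

    def adapt(cells, solution, refine_tol=0.1):
        # Split any cell whose endpoint jump (a crude local error
        # indicator) exceeds the tolerance; keep the rest unchanged.
        new_cells = []
        for left, right in cells:
            err = abs(solution(right) - solution(left))
            if err > refine_tol:
                mid = 0.5 * (left + right)
                new_cells += [(left, mid), (mid, right)]
            else:
                new_cells.append((left, right))
        return new_cells

    cells = [(i / 16.0, (i + 1) / 16.0) for i in range(16)]
    front = lambda x: math.tanh(20.0 * (x - 0.5))   # a steep front
    for _ in range(3):
        cells = adapt(cells, front)
    # After three passes the mesh is finest near the front at x = 0.5.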
Input molecule for the CHARMM molecular dynamics software that has been parallelized on multiple systems using the CHAOS runtime library. Key portions of CHARMM have been automatically parallelized using an enhanced version of the Fortran D compiler. |
This hyperbolic scrollable display, which maps the three-dimensional toroidal Cray Research T3D network onto the two-dimensional workstation screen without false crossings, was developed at the Supercomputing Research Center. |
Application of Vis5D to EPA's Regional Acid Deposition Model shows transparent volume rendering of sulfur dioxide (the red fog) and a horizontal slice with iso-lines of nitric acid over a topographic map of the Eastern U.S. The icons on the left give the user interactive control over the three-dimensional images as they are animated. Vis5D makes this interactive exploration possible by compressing data sets to fit in workstation memories. Vis5D has been used for experiments over the Blanca Gigabit Testbed and has been adapted to run in the virtual reality CAVE (described above); it is freely available over the Internet. |
Information Infrastructure Services Technologies (HORUS, SLIDS, Adaptive Communications, TRAVLER, Wide-Area File System, FICUS, Nile Project, Video Conferencing) |
World Wide Web (WWW) and NCSA Mosaic |
Security and Privacy |
Information Infrastructure Applications Technologies |
NSF Supercomputing Centers |
NSF Science and Technology Centers |
NASA Testbeds |
DOE Laboratories |
NIH Systems |
NOAA Laboratories |
EPA Systems |
Applied Fluid Dynamics |
Meso- to Macro-Scale Environmental Modeling |
Ecosystem Simulations |
Biomedical Imaging and Biomechanics |
Molecular Biology |
Molecular Design and Process Optimization |
Cognition |
Fundamental Computational Sciences |
Grand-Challenge-Scale Applications |
Computational Aeroscience |
Coupled Field Problems and GAFD (Geophysical and Astrophysical Fluid Dynamics) Turbulence |
Combustion Modeling: Adaptive Grid Methods |
Oil Reservoir Modeling: Parallel Algorithms for Modeling Flow in Permeable Media |
Numerical Tokamak Project (NTP) |
Analysis to define the flow physics involved in compressor stall. The analysis suggested a variety of approaches to improve the performance of compression systems while providing increased stall margins. A Cray Research C-90, an IBM SP-1, and an IBM workstation cluster were used to formulate and develop this model. |
An image from a video illustrating the flutter analysis of a FALCON jet under a sequence of transonic speed maneuvers. Areas of high stress are red; areas of low stress are blue. |
Fuel flow around the stagnation plate in a pulse combustor. A burning cycle drives a resonant pressure wave, which in turn enhances the rate of combustion, resulting in a self- sustaining, large-scale oscillation. The figure shows the injection phase when the pressure in the combustion chamber is low. Fuel enters the chamber, hits the stagnation plate and becomes entrained by a vortex ring formed by flow separation at the edge of the splash plate. Researchers are developing computational models to study the interplay of vortex dynamics and chemical kinetics and will use their results to improve pulse combustor design. |
Particle trajectories and electrostatic potentials from a three-dimensional implicit tokamak plasma simulation employing adaptive mesh techniques. The boundary is aligned with the magnetic field that shears around the torus. The strip in the torus is aligned with the local magnetic field and is color mapped with the local electrostatic potential. The yellow trajectory is the gyrating orbit of a single ion. |
Massively Parallel Atmospheric Modeling Projects |
Parallel Ocean Modeling |
Mathematical Modeling of Air Pollution Dynamics |
A Distributed Computational System for Large Scale Environmental Modeling |
Cross-Media (Air and Water) Linkage |
Adaptive Coordination of Predictive Models with Experimental Data |
Global Climate Modeling |
Four-Dimensional Data Assimilation for Massive Earth System Data Analysis |
Ozone concentrations for the California South Coast Air Basin predicted by the Caltech research model show a large region in which the national ozone standard of 120 parts per billion (ppb) is exceeded. Measurement data corroborate these predictions. Scientific studies have shown that human exposure to ozone concentrations at or above the standard can impair lung functions in people with respiratory problems and can cause chest pain and shortness of breath even in the healthy population. This problem raises concern since more than 30 urban areas across the country still do not meet the national standard. |
(1) Dissolved oxygen in Chesapeake Bay, (2) nitrate loading in the Potomac Basin, and (3) atmospheric nitric acid and wet deposition across the Eastern U.S. Three air and water models are linked together for cross-media modeling of the Chesapeake Bay. Atmospheric nitrogen deposition predicted by the atmospheric model (right) is the input load to the watershed model and the three-dimensional Bay model. The watershed model (lower left) delivers nitrate loads from each of the water basins to the three-dimensional Bay model (upper left). |
The colored plane floating above the block represents the simulated atmospheric temperature change at the earth's surface, assuming a steady one percent per year increase in atmospheric carbon dioxide to the time of doubled carbon dioxide. The surfaces in the ocean show the depths of the 1.0 and 0.2 degree (Celsius) temperature changes. The Southern Hemisphere shows much less surface warming than the Northern Hemisphere. This is caused primarily by the cooling effects of deep vertical mixing in the oceans south of 45 degrees South latitude. Coupled ocean-atmosphere climate models such as this one from NOAA/GFDL help improve scientific understanding of potential climate change. |
A scientist uses NASA's virtual reality modeling resources to explore the Earth's atmosphere as part of the Earth and Space Science Grand Challenge. |
Environmental Chemistry |
Groundwater Transport and Remediation |
Earthquake Ground Motion Modeling in Large Basins: The Quake Project |
High Performance Computing for Land Cover Dynamics |
Massively Parallel Simulations of Large-Scale, High-Resolution Ecosystem Models |
The 38-atom carbonate system on the left illustrates the most advanced modeling capability at the beginning of the HPCC Program; the 389-atom zeolite system on the right was produced by a recent simulation. Computational complexity effectively grows as the cube of the number of atoms, implying a thousandfold increase in computational power between the two images. |
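The thousandfold figure follows directly from the stated cubic scaling and the two atom counts; a quick check in Python (numbers taken from the caption):

    n_old, n_new = 38, 389                 # carbonate vs. zeolite system sizes
    ratio = (float(n_new) / n_old) ** 3    # cost grows as the cube of atom count
    print(round(ratio))                    # -> 1073, i.e. roughly a thousandfold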
The upper image shows a computational model of a valley that has been automatically partitioned for solution on a parallel computing system, one processor to a color. The lower image shows the response of the valley as a function of frequency and position within the valley. It is well known that the response of a building to an earthquake is greatest when the frequency of the ground motion is close to the natural frequency of the building itself. These results show that damage can vary considerably depending on building location and frequency characteristics. Obtaining this kind of information for large basins such as the Greater Los Angeles Basin requires high performance computing. |
This figure encodes the proportions of desert, grass, and forest within each pixel of a satellite image using color mixing. The Grand Challenge result, on the left, was produced using a new parallel algorithm and is a much more accurate estimate of mixture proportions than the least squares algorithm traditionally employed by environmental scientists. |
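For context, the least squares baseline mentioned above models each pixel's spectrum as a linear combination of pure "endmember" spectra and solves for the mixture proportions. A minimal sketch in Python (the endmember spectra and band count are invented for illustration; the Grand Challenge's own parallel algorithm is not shown):

    import numpy as np

    # Hypothetical endmember spectra over 4 bands (columns: desert, grass, forest).
    E = np.array([[0.9, 0.2, 0.1],
                  [0.8, 0.6, 0.2],
                  [0.5, 0.7, 0.3],
                  [0.4, 0.3, 0.6]])
    true_mix = np.array([0.5, 0.3, 0.2])   # 50% desert, 30% grass, 20% forest
    pixel = E @ true_mix                   # observed (noiseless) pixel spectrum

    # Baseline least-squares unmixing: solve E @ p ~ pixel for proportions p.
    p, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    print(np.round(p, 3))                  # -> [0.5 0.3 0.2]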
Visible Human Project |
Reconstruction of Positron Emission Tomography (PET) Images |
Image Processing of Electron Micrographs |
Understanding Human Joint Mechanisms |
Three-dimensional reconstruction of large icosahedral viruses. Shown are images of herpes simplex virus type 1 capsids, which illustrate the potential of new parallel computing methods. They show the location of a minor capsid protein called VP26 as mapped in experiments in which VP26 was first extracted from purified capsids by treatment with guanidine hydrochloride and then rebound to the capsids. The right half of the top image shows the depleted capsid and the rebound VP26 capsid, and the left half shows the three-dimensional reconstruction, as it would be obtained with a conventional sequential computer. Parallel computing extended the analysis to obtain the lower images, which improved the signal-to-noise ratio and the resolution from approximately 3.5 to under 3.0 nanometers. The clusters of six VP26 subunits, shown together in the top image, are clearly resolved in the bottom image. This work was conducted at NIH in collaboration with the University of Virginia. |
Protein and Nucleic Acid Sequence Analysis |
Protein Folding Prediction |
Ribonucleic Acid (RNA) Structure Prediction |
Biological Applications of Quantum Chemistry |
Biomolecular Design |
Biomolecular Modeling and Structure Determination |
Computational Structural Biology |
Biological Methods for Enzyme Catalysis |
Electrostatic field, shown in yellow, of the acetylcholinesterase enzyme. The known active site is shown in blue; the second 'back door' to the active site is thought to be at the spot where the field lines extend toward the top of the picture. |
A portion of the Glucocorticoid Receptor bound to DNA; the receptor helps to regulate expression of the genetic code. |
The upper figure shows the known structure of the protein crambin from the Brookhaven Protein Data Base (PDB), and the lower figure is the best selection from a large ensemble of candidate chains, generated on an fcc (face-centered cubic) lattice using a guided replication Monte Carlo chain generation algorithm. Development of the algorithm and its serial and parallel implementations was funded by the HPCC Program. The three-dimensional structure prediction procedure was benchmarked at about 6 minutes on a 500-node Intel Paragon versus 24 hours on a single-processor IBM RS6000 workstation, a 225-fold speedup. |
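The quoted speedup can be checked from the two wall-clock times; a quick sketch in Python (the 6.4-minute value is inferred from the quoted 225-fold figure and the caption's "about 6 minutes"; the efficiency formula is the standard definition, not from the source):

    minutes_parallel = 6.4                  # "about 6 minutes" on the Paragon
    hours_serial = 24.0                     # single-processor RS6000 run
    nodes = 500
    speedup = hours_serial * 60.0 / minutes_parallel   # -> 225.0
    efficiency = speedup / nodes                       # -> 0.45 (45% per node)
    print("%.0fx speedup, %.0f%% efficiency" % (speedup, efficiency * 100))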
Graphical representation of the bovine pancreatic ribonuclease enzyme. Many high-resolution X-ray structures are available for this enzyme, which makes it an ideal candidate for verifying new modeling methods. |
HPC for Learning |
A New View of Cognition |
The central image is the original camera shot; the surrounding images were generated from the original using image synthesis/analysis. |
Quantum Chromodynamics |
High Capacity Atomic-Level Simulations for the Design of Materials |
First Principles Simulation of Materials Properties |
Black Hole Binaries: Coalescence and Gravitational Radiation |
Scalable Hierarchical Particle Algorithms for Galaxy Formation and Accretion Astrophysics |
Radio Synthesis Imaging |
Large Scale Structure and Galaxy Formation |
Simulation of gravitational clustering of dark matter. This detail shows one sixth of the volume computed in a cosmological simulation involving 16 million highly clustered particles that required load balancing on a massively parallel computing system. Many particles are required to resolve the formation of individual galaxy halos seen here as red/white spots. |
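Balancing highly clustered particles is commonly done with orthogonal recursive bisection, which splits at particle medians rather than fixed spatial midpoints, so every processor receives an equal share no matter how clustered the particles are. A minimal sketch in Python (an illustrative technique, not necessarily the method used in the simulation above; assumes the part count is a power of two):

    def orb(points, n_parts, axis=0):
        # Recursively split at the median along alternating axes so each
        # part receives an equal number of particles.
        if n_parts == 1:
            return [points]
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2
        nxt = (axis + 1) % len(points[0])
        return (orb(points[:mid], n_parts // 2, nxt) +
                orb(points[mid:], n_parts // 2, nxt))

    # Three tightly clustered points and one outlier still split 2/2.
    parts = orb([(0.10, 0.20), (0.11, 0.21), (0.12, 0.19), (0.90, 0.80)], 2)
    print([len(p) for p in parts])   # -> [2, 2]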
Simulation of Chorismate Mutase |
Simulation of Antibody-Antigen Association |
A Realistic Ocean Model |
Drag Control |
The Impact of Turbulence on Weather/Climate Prediction |
Shoemaker-Levy 9 Collision with Jupiter |
Vortex Structure and Dynamics in Superconductors |
Molecular Dynamics Modeling |
Crash Simulation |
Advanced Simulation of Chemically Reacting Flows |
The complex between the fragment of a monoclonal antibody, HyHEL-5, and hen-egg lysozyme. The key amino acid residues involved in complexation are displayed by large spheres. The negatively charged amino acids are in red and the positively charged ones in blue. The small spheres highlight other charged residues in the antibody fragment and hen-egg lysozyme. |
Simulation of circulation in the North Atlantic. Color shows temperature, red corresponding to high temperature. In most prior modeling, the Gulf Stream turns left past Cape Hatteras, clinging to the continental shoreline. In this simulation, however, the Gulf Stream veers off from Cape Hatteras on a northeast course into the open Atlantic, following essentially the correct course. |
Simulations on SDSC's Intel Paragon of turbulence over surfaces mounted with streamwise riblets. Computed turbulence intensities indicate that the reduction of fluctuations near the wall with riblets (bottom) results in a six percent drag reduction in this geometry. |
This image is a single frame from a volume visualization rendered from a computer model of turbulent fluid flow. The color masses indicate areas of vorticity that have stabilized within the volume after a specified period of time. The colors correspond to potential vorticity, with large positive values being blue, large negative values being red, and values near zero being transparent. |
Impact of the comet fragment. Image height corresponds to 1,000 kilometers. Color represents temperature, ranging from tens of thousands of degrees Kelvin (red), several times the temperature of the sun, to hundreds of degrees Kelvin (blue). |
Early stages in the formation of a magnetic flux vortex. The figure shows the penetration of a magnetic field into a thin strip of high-Tc superconducting material, which is embedded in a normal metal, and the formation of a magnetic flux vortex. The red surface is an isosurface for the magnetic induction. The isosurface follows the top and bottom of the superconducting strip (not shown). The field penetrates from the left and right sides. Thermal fluctuations cause "droplets" of magnetic flux to be formed in the interior of the strip. As time progresses, these droplets may coalesce into vortices. One vortex is being spawned from the left sheet of the isosurface. These computations were done on Argonne's IBM SP system. |
MD simulation of a crystal block of 5 million silicon atoms as 11 silicon atoms are implanted, each with an energy of 15 keV. The simulation exhibits realistic phenomena such as amorphization near the surface and the channeling of some impacting atoms. These snapshots show the atoms displaced from their crystal positions (damaged areas) and the top layer (displayed in gray) at times 92 and 277 femtoseconds (10^-15 seconds) after the first impact. |
The 50 percent offset crash of two Ford Taurus cars moving at 35 mph, shown here, illustrates the computing power at the Center for Computational Science. The Taurus model is detailed; the results are useful in understanding crash dynamics and their consequences. These results were obtained using parallel DYNA-3D software developed at Oak Ridge. Run times of less than one hour on the most powerful machine are expected. |
View of fluid streamlines and the center-plane temperature distribution in a vertical-disk chemical vapor deposition reactor. Simulations such as these allow designers to produce higher-uniformity semiconductor materials by eliminating detrimental effects such as fluid recirculation. |
NASA simulation of temperature fluctuations (dark: cool; light: hot) in a layer of convectively unstable gas (upper half) overlying a convectively stable layer (lower half) within the deep interior of a Sun-like star. This simulation was performed on the Argonne IBM SP-1. |
Digital Libraries |
Public Access to Government Information |
Electronic Commerce |
Civil Infrastructure |
Education and Lifelong Learning |
Energy Management |
Environmental Monitoring |
Health Care |
Manufacturing Processes and Products |
Joint Digital Libraries Research Initiative |
Digital Library Technology Products |
Satellite Weather Data Dissemination |
Environmental Decision Support |
Computer Science Technical Reports Testbeds |
Unified Medical Language System (UMLS) |
CALS Library |
Earth Data |
Education |
Health Care Data |
Computer-Based Patient Records (CBPR) |
Radiation Treatment Planning |
Functional Neurological Image Analysis |
Project Hippocrates: HIgh PerfOrmance Computing for Robot-AssisTEd Surgery |
Prototypes for Clinic-Based Collaboration |
Trusted Interoperation of Health Care Information Systems |
Collaboratory for Microscopic Digital Anatomy (CMDA) |
Distributed Imaging Over Gigabit Networks |
A source image slice with a beam placed and some contours drawn. The contours denote regions of different density and are subsequently used in the radiation dose calculation in place of the source image. The beam specifies the path of the central ray, width, placement, and the presence of a blocking wedge. |
Single slices of MRI scans of two normal children of different ages. The leftmost scan is warped to have the form of the middle scan using the tie-points identified by the squares. The warped image is shown at right. This work was conducted at NIH's National Institute of Mental Health. |
This Gridbrowser interface shows (1) a low magnification survey with gridlines identifying the source of the higher magnification view, (2) cross-hairs identifying the current position of the microscope stage (which can be changed remotely), and (3) a red-green stereo view of the three-dimensional volume derived from acquired data. |
An example of the types of user interfaces required to visualize data on manufacturing activities in a production facility. A prototype facility was simulated to provide for real-time views into the factory control system database and to simulate manufacturing data access by multiple users. |
Observations: |
HPCCI agencies introduced agendas |
NII crept up on HPC |
WWW took everything by storm |
HPCC program may now be unmanageable; the future of the "high end" is uncertain |
Software tools: always the critical issue |
What is NPAC? |
HPCC |
Technical Topics (Opportunities for Collaboration) |