Given by Geoffrey C. Fox at CPS615 Basic Simulation Track for Computational Science on Fall Semester 95. Foils prepared 29 August 1995
Summary of Material
Overview of National Program -- The Grand Challenges |
Overview of Technology Trends leading to petaflop performance in year 2015 |
Overview of Syracuse and National programs in computational science |
Parallel Computing in Society |
Parallel and Sequential Computer Architectures |
Why Parallel Computing works |
Message Passing and Data Parallel Programming Paradigms |
Laplace Equation with Iterative solver in detail |
Set (approximately 6) of application/algorithm snippets illustrating software, hardware and algorithm issues |
Geoffrey Fox |
NPAC |
Room 3-131 CST |
111 College Place |
Syracuse NY 13244-4100 |
Instructor: Geoffrey Fox gcf@npac.syr.edu 315-443-2163 Room 3-131 CST |
Backup: Nancy McCracken njm@npac.syr.edu 315-443-4687 Room 3-234 CST |
TA: John Houle houle@npac.syr.edu |
NPAC Administrative support: Nora Downey-Easter nora@npac.syr.edu 315-443-4740 Room 3-210 CST |
The CPS615 powers that be (above) can be reached at cps615ad@npac.syr.edu |
CPS615 students can be reached by mailing cps615@npac.syr.edu |
Homepage will be: |
http://www.npac.syr.edu/projects/cps615fall95 |
Graded on the basis of approximately 8 homeworks, each due the Wednesday of the week after it is handed out (on a Monday or Wednesday) |
Plus one small project at the end of class |
No finals or written exams |
All material will be placed on the World Wide Web (WWW) |
Preference given to work returned on the Web -- an optional lecture will be given on how to use the WWW |
Overview of National Scene -- Why is High Performance Computing Important? |
What is Computational Science -- The Program at Syracuse |
Basic Technology Situation -- Increasing density of transistors on a chip |
Elementary Discussion of Parallel Computing |
Computer Architecture -- Parallel and Sequential |
Simple base example -- Laplace's Equation (a minimal sketch follows this list) |
Programming Models -- Message Passing and Data Parallel Computing -- MPI and HPF |
This introduction is followed by a set of "vignettes" discussing applications and algorithms which illustrate parallel programming and parallel algorithms |
RAM density increases by about a factor of 50 in 8 years |
Supercomputers in 1992 have memory sizes around 32 gigabytes (giga = 10^9) |
Supercomputers in year 2000 should have memory sizes around 1.5 terabytes (tera = 10^12) |
Computer Performance is increasing faster than RAM density |
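A quick consistency check on these numbers (arithmetic added here, not in the original foils):

\[
50^{1/8} \approx 1.63 \;\text{per year}, \qquad
\frac{\ln 2}{\ln 1.63} \approx 1.4 \;\text{years per doubling}, \qquad
32\,\text{GB} \times 50 = 1.6\,\text{TB},
\]

consistent with the 1.5 terabyte estimate for the year 2000.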
See Chapter 5 of Petaflops Report -- July 95 |
Different machines |
New types of computers |
New libraries |
Rewritten Applications |
Totally new fields able to use computers ==> Need new educational initiatives: Computational Science |
Will be a nucleus for the phase transition |
and accelerate use of parallel computers in the real world |
Computational Science is an interdisciplinary field that integrates computer science and applied mathematics with a wide variety of application areas that use significant computation to solve their problems |
Includes the study of computational techniques |
Includes the study of new algorithms, languages and models in computer science and applied mathematics required by the use of high performance computing and communications in any (?) important application |
Includes computation of complex systems using physical analogies such as neural networks and genetic optimization. |
Formal Master's Program with reasonable curriculum and course material |
PhD is called Computer and Information Science, but students can choose computational science research |
Certificates(Minors) in Computational Science at both the Masters and PhD Level |
Undergraduate Minors in Computational Science |
All Programs are open to both computer science and application (computer user) students |
Currently have both a "Science and Engineering Track" ("parallel computing") and an "Information-oriented Track" ("the Web") |
Conclusions of DOE Conference on Computational Science Education, Feb 1994 |
Industry and government laboratories want graduates with Computational Science and Engineering training - don't care what degree is called |
Universities - want graduates with Computational Science and Engineering training - want degrees to have traditional names |
Premature to have BS Computational Science and Engineering |
Master's Degree in Computational Science Course Requirements: |
Core Courses: |
Application Area: |
It is required to take one course in 3 out of the following 4 areas: |
Minors in Computational Science |
Masters Level Certificate: |
Doctoral Level Certificate: |
Doctoral Level Certificate in Computational Neuroscience: |
Example Course Module |
CPS 713 Case Studies in Computational Science |
This course emphasizes a few applications and gives an in-depth treatment of the more advanced computing techniques, aiming for a level of sophistication representing the best techniques currently known by researchers in the field. |
Instructor: Professor Geoffrey Fox, Computer Science and Physics |
Computer Science -- Nationally viewed as the central activity |
Computer Engineering -- Historically Mathematics and Electrical Engineering have spawned Computer Science programs -- if from electrical engineering, the field is sometimes called computer engineering |
Applied Mathematics is a very broad field in the U.K., where it is equivalent to Theoretical Physics. In the USA applied mathematics is roughly the mathematics associated with fluid flow |
Computational Physics -- Practitioners will be judged by their contribution to physics and not directly by algorithms and software innovations. |
The conference proceedings "R and D for the NII: Technical Challenges", obtainable from EDUCOM (nii-forum@educom.com), is one useful general resource. It would be important to collect other useful general and specialized reference books for teachers and/or students. There are currently 10 modules, listed below. |
1) The Internet and Specialized Testbeds as Prototypes of the GII (Global Information Infrastructure) |
2) Physical Network |
3) The Consumer Multimedia Enterprise: Multimedia Videogames, PC's, Set-top Boxes, and Workstations |
4) Digital Media: Audio, Video, Graphics and Images |
5) User, Application and Service Interfaces |
6) Client and Server High Performance Multimedia Computer Requirements and Architecture |
7) Base Software and Systems Architecture of the GII |
8) Pervasive and Niche Applications for the GII |
9) Generic Services and Middleware on the GII |
10) The Emerging GII Enterprise in Industry, Academia and Society |
World Wide Web Basics: HTTP, MIME, servers, clients (see the example after this list) |
PERL4 and object-oriented features in PERL5 (to be finished) |
Wavelet and Other Compression Technologies |
Collaboration Technologies from MBONE to CLI |
ATM networks, with comparison to ISDN and traditional LANs |
Parallel Relational Databases and Web Integration |
Thread-based Communication Environments |
Video servers and network management for good quality of service |
Parallel Web Servers (to be finished) |
Advanced Web Technologies -- agents, VRML, Java (to be finished) |
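To make the Web basics module concrete, here is the shape of a minimal HTTP/1.0 exchange; the file name and header values are placeholders, not an actual server transcript. The server's MIME Content-type header tells the client how to interpret the body:

    Client request:
        GET /projects/cps615fall95/index.html HTTP/1.0

    Server response:
        HTTP/1.0 200 OK
        Content-type: text/html
        Content-length: 1024

        <html> ... document body ... </html>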
Joint Program set up between |
12 3-credit courses with 3 required courses |
Three tracks for specialization |
Take 3 core courses, one course from each track (3), and 6 elective courses with constraints to be determined |
Parallel Computing Works! |
Technology well understood for Science and Engineering |
Supercomputing market is small (a few percent at best) and probably decreasing in size |
Data Parallelism - universal form of scaling parallelism (see the sketch after this list) |
Functional Parallelism - important but typically modest speedup; critical in multidisciplinary applications |
On any machine architecture |
Performance of both communication networks and computers will increase by a factor of 1000 during the 1990's |
Competitive advantage to industries that can use either or both High Performance Computers and Communication Networks. (United States clearly ahead of Japan and Europe in these technologies.) |
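A schematic C contrast of the two forms of parallelism; the function and variable names (update, fluid_solve, structure_solve, my_rank) are invented for this sketch:

    /* Data parallelism: every processor runs the SAME code on its own
       slice of the data, so usable parallelism grows with problem size. */
    void data_parallel(double *a, int my_first, int my_last,
                       double (*update)(double))
    {
        for (int i = my_first; i <= my_last; i++)   /* my slice only */
            a[i] = update(a[i]);
    }

    /* Functional parallelism: DIFFERENT code on different processors;
       speedup is limited by the number of distinct tasks (here, two). */
    void functional_parallel(int my_rank,
                             void (*fluid_solve)(void),
                             void (*structure_solve)(void))
    {
        if (my_rank == 0) fluid_solve();        /* e.g., one discipline */
        else              structure_solve();    /* e.g., another        */
    }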
There are several machines still being used but the number of vendors and diversity of designs has shrunk |
The SIMD MasPar and AMT DAP are focusing on database and other niche markets such as signal processing; they are no longer considered mainstream |
The MIMD nCUBE3 is not deployed and the nCUBE2 is uncompetitive in the science arena; the company is focusing on the video server market |
The major pure MIMD distributed memory machines are the IBM SP-2 and Intel Paragon, with the IBM having the better node CPU and the Paragon the superior network |
There is a spectrum of shared memory machines from |
No silver programming bullet -- I doubt that a new language will revolutionize parallel programming and make it much easier |
Social forces are tending to hinder adoption of parallel computing, as most applications are in areas where large scale computing is already common |
Switch from conventional to new types of technology is a phase transition |
Needs headroom (Carver Mead) which is large (factor of 10?) due to the large new software investment |
Machines such as the nCUBE-1 and CM-2 were comparable in cost performance to conventional supercomputers |
Do the Cray T3D, Intel Paragon, CM-5, DECmpp (MasPar MP-2), IBM SP-2, and nCUBE-3 have enough headroom to take over from traditional computers? |
ATM networks have rapidly transitioned from research Gigabit networks to commercial deployment |
Computer Hardware trends imply that all computers (PC's ---> Supercomputers) will be parallel by the year 2000 |
Software is the challenge and could prevent/delay the hardware trend that suggests parallelism will be a mainline computer architecture |
High Energy Physics |
Semiconductor Industry, VLSI Design |
Graphics and Virtual Reality |
Weather and Ocean Modeling |
Visualization |
Oil Industry |
Automobile Industry |
Chemicals and Pharmaceuticals Industry |
Financial Applications |
Business Applications |
Airline Industry |
Before 1980: Illiac IV, ICL DAP, MPP |
Early 1980s: HEP, Cray X-MP/22, NYU UltraComputer (and IBM RP3) |
1983: The Birth of the Hypercube: |
1983 - First 64-node Mark I Hypercube operational at CIT as collaboration between Seitz & Fox (CrOS) |
1984 - JPL joins campus collaboration; designs and builds 32-node Mark II Hypercube |
1985 - 128-node Mark II operational |
1986 - Mark III operational (~10x performance of Mark II) |
1987 - Strategic Defense Initiative applications and simulations (Mercury and Centaur OS) |
1988 - 128-node > 1 gigaflop computer (Mark IIIfp) |
Mid-1980s: Sequent and Encore |
Late-1980s: |
Early-1990s: |
Geoffrey Fox - Director |
Denny Eaton - InfoMall MidHudson |
Steve Warzala - Manager InfoMall |
Core Technologies R&D |
Computing and Infrastructure Facilities O&M |
Computational Science Research |
Computational Science Education |
Computer Science |
HPCC Technology Transfer and Commercialization |
Originally $2.9 billion over 5 years starting in 1992 and |
The Grand Challenges |
Nearly all grand challenges have industrial payoff but technology transfer NOT funded by HPCCI |
High Performance Computing Act of 1991 |
Computational performance of one trillion operations per second on a wide range of important applications |
Development of associated system software, tools, and improved algorithms |
A national research network capable of one billion bits per second |
Sufficient production of PhDs in computational science and engineering |
1992: Grand Challenges |
1993: Grand Challenges |
1994: Toward a National Information Infrastructure |
1995: Technology for the National Information Infrastructure |
1996: Foundation for America's Information Future |
ATM, ISDN, wireless, and satellite technologies are advancing rapidly in the commercial arena, which is adopting research results quickly |
Social forces (deregulation in the U.S.A.) are tending to accelerate adoption of digital communication technologies |
Not clear how to make money on the Web (Internet), but there is growing interest/acceptance by the general public |
Integration of Communities and Opportunities |
Technology Opportunities in Integration of High Performance Computing and Communication Systems |
New Business opportunities linking Enterprise Information Systems to Community networks to current cable/network TV journalism |
New educational needs at interface of computer science and communications/information applications |
Major implications for education -- the Virtual University |
Executive Summary |
I. Introduction |
II. Program Accomplishments and Plan |
1. High Performance Communications |
2. High Performance Computing Systems |
3. Advanced Software Technologies |
4. Technologies for the Information Infrastructure |
5. High Performance Computing Research Facilities |
6. Grand Challenge Applications |
7. National Challenge Applications - Digital Libraries |
8. Basic Research and Human Resources |
III. HPCC Program Organization |
IV. HPCC Program Summary |
V. References |
VI. Glossary |
VII. Contacts |
Applied Fluid Dynamics |
Meso- to Macro-Scale Environmental Modeling |
Ecosystem Simulations |
Biomedical Imaging and Biomechanics |
Molecular Biology |
Molecular Design and Process Optimization |
Cognition |
Fundamental Computational Sciences |
Grand-Challenge-Scale Applications |
Computational Aeroscience |
Coupled Field Problems and GAFD (Geophysical and Astrophysical Fluid Dynamics) Turbulence |
Combustion Modeling: Adaptive Grid Methods |
Oil Reservoir Modeling: Parallel Algorithms for Modeling Flow in Permeable Media |
Numerical Tokamak Project (NTP) |
Analysis to define the flow physics involved in compressor stall. It suggested a variety of approaches to improve the performance of compression systems, while providing increased stall margins. A Cray Research C-90, IBM SP-1, and IBM workstation cluster were used to formulate and develop this model. |
An image from a video illustrating the flutter analysis of a FALCON jet under a sequence of transonic speed maneuvers. Areas of high stress are red; areas of low stress are blue. |
Fuel flow around the stagnation plate in a pulse combustor. A burning cycle drives a resonant pressure wave, which in turn enhances the rate of combustion, resulting in a self-sustaining, large-scale oscillation. The figure shows the injection phase when the pressure in the combustion chamber is low. Fuel enters the chamber, hits the stagnation plate and becomes entrained by a vortex ring formed by flow separation at the edge of the splash plate. Researchers are developing computational models to study the interplay of vortex dynamics and chemical kinetics and will use their results to improve pulse combustor design. |
Particle trajectories and electrostatic potentials from a three-dimensional implicit tokamak plasma simulation employing adaptive mesh techniques. The boundary is aligned with the magnetic field that shears around the torus. The strip in the torus is aligned with the local magnetic field and is color mapped with the local electrostatic potential. The yellow trajectory is the gyrating orbit of a single ion. |
Massively Parallel Atmospheric Modeling Projects |
Parallel Ocean Modeling |
Mathematical Modeling of Air Pollution Dynamics |
A Distributed Computational System for Large Scale Environmental Modeling |
Cross-Media (Air and Water) Linkage |
Adaptive Coordination of Predictive Models with Experimental Data |
Global Climate Modeling |
Four-Dimensional Data Assimilation for Massive Earth System Data Analysis |
Ozone concentrations for the California South Coast Air Basin predicted by the Caltech research model show a large region in which the national ozone standard of 120 parts per billion (ppb) is exceeded. Measurement data corroborate these predictions. Scientific studies have shown that human exposure to ozone concentrations at or above the standard can impair lung functions in people with respiratory problems and can cause chest pain and shortness of breath even in the healthy population. This problem raises concern since more than 30 urban areas across the country still do not meet the national standard. |
(1) Dissolved oxygen in Chesapeake Bay, (2) nitrate loading in the Potomac Basin, and (3) atmospheric nitric acid and wet deposition across the Eastern U.S. Three air and water models are linked together for cross-media modeling of the Chesapeake Bay. Atmospheric nitrogen deposition predicted by the atmospheric model (right) is the input load to the watershed model and the three-dimensional Bay model. The watershed model (lower left) delivers nitrate loads from each of the water basins to the three-dimensional Bay model (upper left). |
The colored plane floating above the block represents the simulated atmospheric temperature change at the earth's surface, assuming a steady one percent per year increase in atmospheric carbon dioxide to the time of doubled carbon dioxide. The surfaces in the ocean show the depths of the 1.0 and 0.2 degree (Celsius) temperature changes. The Southern Hemisphere shows much less surface warming than the Northern Hemisphere. This is caused primarily by the cooling effects of deep vertical mixing in the oceans south of 45 degrees South latitude. Coupled ocean-atmosphere climate models such as this one from NOAA/GFDL help improve scientific understanding of potential climate change. |
A scientist uses NASA's virtual reality modeling resources to explore the Earth's atmosphere as part of the Earth and Space Science Grand Challenge. |
Environmental Chemistry |
Groundwater Transport and Remediation |
Earthquake Ground Motion Modeling in Large Basins: The Quake Project |
High Performance Computing for Land Cover Dynamics |
Massively Parallel Simulations of Large-Scale, High-Resolution Ecosystem Models |
The 38-atom carbonate system on the left illustrates the most advanced modeling capability at the beginning of the HPCC Program; the 389-atom zeolite system on the right was produced by a recent simulation. Computational complexity effectively grows as the cube of the number of atoms, implying a thousandfold increase in computational power between the two images. |
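The quoted thousandfold increase follows directly from the cubic scaling (arithmetic added here for clarity):

\[
\left(\frac{389}{38}\right)^{3} \approx 10.2^{3} \approx 1.1 \times 10^{3}.
\]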
The upper image shows a computational model of a valley that has been automatically partitioned for solution on a parallel computing system, one processor to a color. The lower image shows the response of the valley as a function of frequency and position within the valley. It is well known that the response of a building to an earthquake is greatest when the frequency of the ground motion is close to the natural frequency of the building itself. These results show that damage can vary considerably depending on building location and frequency characteristics. Obtaining this kind of information for large basins such as the Greater Los Angeles Basin requires high performance computing. |
This figure encodes the proportions of desert, grass, and forest within each pixel of a satellite image using color mixing. The Grand Challenge result, on the left, was produced using a new parallel algorithm and is a much more accurate estimate of mixture proportions than the least squares algorithm traditionally employed by environmental scientists. |
Visible Human Project |
Reconstruction of Positron Emission Tomography (PET) Images |
Image Processing of Electron Micrographs |
Understanding Human Joint Mechanisms |
Three-dimensional reconstruction of large icosahedral viruses. Shown are images of herpes simplex virus type 1 capsids, which illustrate the potential of new parallel computing methods. They show the location of a minor capsid protein called VP26 as mapped in experiments in which VP26 was first extracted from purified capsids by treatment with guanidine hydrochloride and then rebound to the capsids. The right half of the top image shows the depleted capsid and the rebound VP26 capsid, and the left half shows the three-dimensional reconstruction, as it would be obtained with a conventional sequential computer. Parallel computing extended the analysis to obtain the lower images, which improved the signal-to-noise ratio and the resolution from approximately 3.5 to under 3.0 nanometers. The clusters of six VP26 subunits, shown together in the top image, are clearly resolved in the bottom image. This work was conducted at NIH in collaboration with the University of Virginia. |
Protein and Nucleic Acid Sequence Analysis |
Protein Folding Prediction |
Ribonucleic Acid (RNA) Structure Prediction |
Biological Applications of Quantum Chemistry |
Biomolecular Design |
Biomolecular Modeling and Structure Determination |
Computational Structural Biology |
Biological Methods for Enzyme Catalysis |
Electrostatic field, shown in yellow, of the acetylcholinesterase enzyme. The known active site is shown in blue; the second 'back door' to the active site is thought to be at the spot where the field lines extend toward the top of the picture. |
A portion of the Glucocorticoid Receptor bound to DNA; the receptor helps to regulate expression of the genetic code. |
The upper figure shows the known structure of the protein crambin from the Brookhaven Protein Data Base (PDB), and the lower figure is the best selection from a large ensemble of candidate chains, generated on a fcc (face-centered cubic) lattice using a guided replication Monte Carlo chain generation algorithm. Development of the algorithm and its serial and parallel implementations was funded by the HPCC Program. The three-dimensional structure prediction procedure was benchmarked at about 6 minutes on a 500-node Intel Paragon versus 24 hours on a single-processor IBM RS6000 workstation, a 225-fold speedup. |
Graphical representation of the bovine pancreatic ribonuclease enzyme. Many high-resolution X-ray structures are available for this enzyme, which makes it an ideal candidate for verifying new modeling methods. |
HPC for Learning |
A New View of Cognition |
The central image is the original camera shot and the surrounding images were generated from the original using image synthesis/analysis. |
Quantum Chromodynamics |
High Capacity Atomic-Level Simulations for the Design of Materials |
First Principles Simulation of Materials Properties |
Black Hole Binaries: Coalescence and Gravitational Radiation |
Scalable Hierarchical Particle Algorithms for Galaxy Formation and Accretion Astrophysics |
Radio Synthesis Imaging |
Large Scale Structure and Galaxy Formation |
Simulation of gravitational clustering of dark matter. This detail shows one sixth of the volume computed in a cosmological simulation involving 16 million highly clustered particles that required load balancing on a massively parallel computing system. Many particles are required to resolve the formation of individual galaxy halos seen here as red/white spots. |
Simulation of Chorismate Mutase |
Simulation of Antibody-Antigen Association |
A Realistic Ocean Model |
Drag Control |
The Impact of Turbulence on Weather/Climate Prediction |
Shoemaker-Levy 9 Collision with Jupiter |
Vortex structure and Dynamics in Superconductors |
Molecular Dynamics Modeling |
Crash Simulation |
Advanced Simulation of Chemically Reacting Flows |
The complex between the fragment of a monoclonal antibody, HyHEL-5, and hen-egg lysozyme. The key amino acid residues involved in complexation are displayed by large spheres. The negatively charged amino acids are in red and the positively charged ones in blue. The small spheres highlight other charged residues in the antibody fragment and hen-egg lysozyme. |
Simulation of circulation in the North Atlantic. Color shows temperature, red corresponding to high temperature. In most prior modeling, the Gulf Stream turns left past Cape Hatteras, clinging to the continental shoreline. In this simulation, however, the Gulf Stream veers off from Cape Hatteras on a northeast course into the open Atlantic, following essentially the correct course. |
Simulations on SDSC's Intel Paragon of turbulence over surfaces mounted with streamwise riblets. Computed turbulence intensities indicate that the reduction of fluctuations near the wall with riblets (bottom) results in a six percent drag reduction in this geometry. |
This image is a single frame from a volume visualization rendered from a computer model of turbulent fluid flow. The color masses indicate areas of vorticity that have stabilized within the volume after a specified period of time. The colors correspond to potential vorticity, with large positive values being blue, large negative values being red, and values near zero being transparent. |
Impact of the comet fragment. Image height corresponds to 1,000 kilometers. Color represents temperature, ranging from tens of thousands of degrees Kelvin (red), several times the temperature of the sun, to hundreds of degrees Kelvin (blue). |
Early stages in the formation of a magnetic flux vortex. The figure shows the penetration of a magnetic field into a thin strip of high-Tc superconducting material, which is embedded in a normal metal, and the formation of a magnetic flux vortex. The red surface is an isosurface for the magnetic induction. The isosurface follows the top and bottom of the superconducting strip (not shown). The field penetrates from the left and right sides. Thermal fluctuations cause "droplets" of magnetic flux to be formed in the interior of the strip. As time progresses, these droplets may coalesce into vortices. One vortex is being spawned from the left sheet of the isosurface. These computations were done on Argonne's IBM SP system. |
MD simulation of a crystal block of 5 million silicon atoms as 11 silicon atoms are implanted, each with an energy of 15 keV. The simulation exhibits realistic phenomena such as amorphization near the surface and the channeling of some impacting atoms. These snapshots show the atoms displaced from their crystal positions (damaged areas) and the top layer (displayed in gray) at times 92 and 277 femtoseconds (10^-15 seconds) after the first impact. |
Illustrative of the computing power at the Center for Computational Science is the 50 percent offset crash of two Ford Taurus cars moving at 35 mph shown here. The Taurus model is detailed; the results are useful in understanding crash dynamics and their consequences. These results were obtained using parallel DYNA-3D software developed at Oak Ridge. Run times of less than one hour on the most powerful machine are expected. |
View of fluid streamlines and the center plane temperature distribution in a vertical disk, chemical vapor deposition reactor. Simulations such as these allow designers to produce higher uniformity semiconductor materials by eliminating unwanted detrimental effects such as fluid recirculation. |
NASA simulation of temperature fluctuations (dark: cool; light: hot) in a layer of convectively unstable gas (upper half) overlying a convectively stable layer (lower half) within the deep interior of a Sun-like star. This simulation was performed on the Argonne IBM SP-1. |
Digital Libraries |
Public Access to Government Information |
Electronic Commerce |
Civil Infrastructure |
Education and Lifelong Learning |
Energy Management |
Environmental Monitoring |
Health Care |
Manufacturing Processes and Products |
Joint Digital Libraries Research Initiative |
Digital Library Technology Products |
Satellite Weather Data Dissemination |
Environmental Decision Support |
Computer Science Technical Reports Testbeds |
Unified Medical Language System (UMLS) |
CALS Library |
Earth Data |
Education |
Health Care Data |
Computer-Based Patient Records (CBPR) |
Radiation Treatment Planning |
Functional Neurological Image Analysis |
Project Hippocrates: HIgh PerfOrmance Computing for Robot-AssisTEd Surgery |
Prototypes for Clinic-Based Collaboration |
Trusted Interoperation of Health Care Information Systems |
Collaboratory for Microscopic Digital Anatomy (CMDA) |
Distributed Imaging Over Gigabit Networks |
A source image slice with a beam placed and some contours drawn. The contours denote regions of different density and are subsequently used in the radiation dose calculation in place of the source image. The beam specifies the path of the central ray, width, placement, and the presence of a blocking wedge. |
Single slices of MRI scans of two normal children of different ages. The leftmost scan is warped to have the form of the middle scan using the tie-points identified by the squares. The warped image is shown at right. This work was conducted at NIH's National Institute of Mental Health. |
This Gridbrowser interface shows (1) a low magnification survey with gridlines identifying the source of the higher magnification view, (2) cross-hairs identifying the current position of the microscope stage (which can be changed remotely), and (3) a red-green stereo view of the three-dimensional volume derived from acquired data. |
An example of the types of user interfaces required to visualize data on manufacturing activities in a production facility. A prototype facility was simulated to provide for real-time views into the factory control system database and to simulate manufacturing data access by multiple users. |
Observations: |
HPCCI agencies introduced agendas |
NII crept up on HPC |
WWW took everything by storm |
HPCC program may now be unmanageable; future of "high end" is uncertain |
Software tools: always the critical issue |
What is NPAC? |
HPCC |
Technical Topics (Opportunities for Collaboration) |
Data Parallelism - universal form of scaling parallelism |
Functional Parallelism - important but typically modest speedup; critical in multidisciplinary applications |
On any machine architecture |
Simple, but general and extensible to many more nodes, is domain decomposition (see the message-passing sketch after this list) |
All successful concurrent machines with |
Have obtained parallelism from "Data Parallelism" or "Domain Decomposition" |
Problem is an algorithm applied to data set |
The three architectures considered here differ as follows: |
2 Different types of Mappings in Physical Spaces |
Both are static |
Different types of Mappings -- A very dynamic case without any underlying Physical Space |
c) Computer Chess with dynamic game tree decomposed onto 4 nodes |