Given by Geoffrey C. Fox in 1997. Foils prepared 26 January 1997
Summary of Material
This uses material from Paul Smith and Peter Kogge as well as Fox |
We describe the "National PetaFlop Study(s)" and what you can expect with or without a specific initiative |
We discuss traditional, Processor in Memory, Superconducting, Special Purpose architectures as well as future Quantum Computers! |
We survey possible applications, new needs and opportunities for software as well as the technologies and designs for new machines one can expect in the year 2007! |
We review findings of studies and structure of a possible initiative |
Geoffrey Fox |
Syracuse University |
111 College Place |
Syracuse |
New York 13244-4100 |
I. Workshop series... background. |
II. Major findings & recommendations from the PetaFLOPS workshops. |
III. Key drivers for advanced computational capabilities beyond HPCC. |
IV. PetaFLOPS Architecture Point Designs & SW Design Studies. |
V. A National program concept. |
VI. Future actions to mold an R&D program. |
PetaFLOPS I
|
PetaFLOPS Bodega Bay Summer Study
|
PetaFLOPS Architecture Workshop, PAWS'96
|
PetaSOFT'96
|
Sponsoring Agencies |
NASA |
NSF |
DOE |
DARPA |
NSA |
BMDO |
Private sector |
Academic |
Federal National Laboratories |
To identify immediate & future applications. |
To provide standard base (PetaFLOPS I) to measure advances in PetaFLOPS R&D. |
To identify critical enabling technologies. |
To assist technology directors to plan for future programs beyond HPCC. |
I. Workshop series... background: Coordinating Chairs |
Dr. Paul H. Smith.....................................................................General |
Special Assistant, Advanced Computing Technology |
U.S. Department of Energy |
Dr. David Bailey .....................................................................Algorithms |
NASA/Ames Research Center |
Dr. Ian Foster ...........................................................................Software |
Division of Mathematics and Computer Science |
Argonne National Laboratory |
Prof. Geoffrey Fox ...................................................................Architecture |
Departments of Physics & Computer Science |
Syracuse University |
Prof. Peter Kogge ...................................................................Architecture |
McCourtney Professor of Computer Science & Engineering |
University of Notre Dame |
I. Workshop series... background: Coordinating Chairs |
Prof. Sidney Karin .......................................................................General |
Director for Advanced Computational Science & Engineering |
University of California, San Diego |
Dr. Paul Messina ...........................................................................PetaFLOPS-I |
Director, Center for Advanced Computing |
California Institute of Technology |
Dr. Thomas Sterling .....................................................................Architecture |
Senior Scientist |
Jet Propulsion Laboratory |
Dr. Rick Stevens ..........................................................................Applications |
Director, Mathematics & Computer Science Division |
Argonne National Laboratory |
Dr. John Van Rosendale ..............................................................Point Design |
Division of Advanced Scientific Computing |
National Science Foundation |
For "Convential" MPP/Distributed Shared Memory Architecture |
Now(1996) Peak is 0.1 to 0.2 Teraflops in Production Centers
|
In 1999, one will see production 1 Teraflop systems |
In 2003, one will see production 10 Teraflop Systems |
In 2007, one will see production 50-100 Teraflop Systems |
Memory is Roughly 0.25 to 1 Terabyte per 1 Teraflop |
If you are lucky/work hard: Realized performance is 30% of Peak |
Everybody now believes in COTS -- Commercial Off-The-Shelf technology -- one must use commercial building blocks for any specialized system, whether it be a DoD weapons program or a high-end supercomputer
|
COTS for hardware can be applied to a greater or lesser extent
|
COTS for software is less common but (I expect) will become much more common
|
Currently MPP's have COTS processors and specialized networks but this could reverse
|
Thus one estimates that 250,000 transistors (excluding on-chip cache) is optimal for performance per square mm of silicon
|
Again simplicity is optimal but this requires parallelism |
Contrary trend is that memory dominates use of silicon and so performance per square mm of silicon is often not relevant |
Tightly coupled MPP's (SP2, Paragon, CM5, etc.) were distributed memory, but at least at the low end they are becoming hardware-assisted shared memory
|
Note this is an example of COTS at work -- SGI/Sun/.. symmetric multiprocessors (e.g. Power Challenge from SGI) are attractive, as the bus will support up to 16 processors in an elegant shared-memory software world.
|
Clustering such SGI Power Challenge-like systems produces a powerful but difficult-to-program (being both distributed and shared memory) heterogeneous system |
Meanwhile Tera Computer will offer true uniform-memory-access shared memory, using ingenious multithreaded software/hardware to hide latency
|
Trend I -- Hardware Assisted Tightly Coupled Shared Memory MPP's are replacing pure distributed memory systems |
Trend II -- The World Wide Web and increasing power of individual workstations is making geographically distributed heterogeneous distributed memory systems more attractive |
Trend III -- To confuse the issue, the technology trends in next ten years suggest yet different architecture(s) such as PIM |
With conflicting technology/architecture trends, one had better use scalable portable software -- BUT one must address the latency agenda, which isn't clearly portable! |
I find the study interesting not only in its results but also in its methodology of several intense workshops combined with general discussions at national conferences |
Exotic technologies such as "DNA Computing" and Quantum Computing do not seem relevant on this timescale |
Note clock speeds will NOT improve much in the future but density of chips will continue to improve at roughly the current exponential rate over next 10-20 years |
Superconducting technology is currently seriously limited by the lack of an appropriate memory technology to match its factor of 100-1000 faster CPU processing |
Current project views software as perhaps the hardest problem |
All proposed designs have VERY deep memory hierarchies which are a challenge to algorithms, compilers and even communication subsystems |
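As a small illustration of why deep hierarchies reach into algorithm design (a standard cache-blocking sketch, not taken from the study), the same matrix product can be restructured so that tiles are reused while they sit in a fast level of the hierarchy: |
    import numpy as np

    def blocked_matmul(A, B, block=64):
        # Operate on block x block tiles so each tile is reused while it is
        # resident in a fast level of the memory hierarchy.
        n = A.shape[0]
        C = np.zeros((n, n))
        for i0 in range(0, n, block):
            for k0 in range(0, n, block):
                for j0 in range(0, n, block):
                    C[i0:i0+block, j0:j0+block] += (
                        A[i0:i0+block, k0:k0+block] @ B[k0:k0+block, j0:j0+block])
        return C

    A = np.random.rand(256, 256); B = np.random.rand(256, 256)
    assert np.allclose(blocked_matmul(A, B), A @ B)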
The major need for high-end performance computers comes from government (both civilian and military) applications
|
Government must develop systems using commercial suppliers but NOT rely on traditional industry applications as the motivation |
So currently the PetaFLOPS initiative is thought of as an applied development project, whereas HPCC was mainly a research endeavor |
PetaFLOPS possible; accelerate goals to 10 years. |
Many important application drivers exist. |
Memory dominant implementation factor. |
Cost, power & efficiency dominate. |
Innovation critical, new technology necessary. |
Layered SW architecture mandatory. |
Opportunities for immediate SW effort. |
New technology means paradigm shift.
|
Memory bandwidth. |
Latency. |
Software important. |
Closer relationship between architecture and programming is needed. |
Role of algorithms must improve. |
Conduct point design studies.
|
Develop engineering prototypes.
|
Start SW now, independent of HW. |
Develop layered software architecture for scalability and code reuse |
Explore algorithms for special purpose & reconfigurable structures. |
Support & accelerate R&D in paradigm shift technologies:
|
Perform detailed applications studies at scale. |
Develop petaFLOPS scale latency management. |
Nation's experts participated:
|
Strong need for computing at the high end. |
PetaFLOPS levels of performance are feasible. |
Preliminary set of goals for the next decade formulated with a PetaFLOPS system as the end product. |
II. Major Findings & Recommendations: |
Workshops Summaries. |
There are compelling applications that need that level of performance. |
PetaFLOPS levels of performance are feasible, but substantial research is needed. |
Private sector is not going to do it alone. |
TeraFLOPS machine architecture in hand. |
Programming still is explicit message passing. |
TeraFLOPS applications are coarse grain |
Latency management not showstopper for TeraFLOPS. |
Operating systems and tools provide relatively little support for the users |
Parallelism has to be managed explicitly |
Applications that require petaFLOPS can already be identified
|
The need for ever greater computing power will remain. |
PetaFLOPS systems are right step for the next decade |
Nuclear Weapons Stewardship (ASCI) |
Cryptology and Digital Signal Processing |
Satellite Data Analysis |
Climate and Environmental Modeling |
3-D Protein Molecule Reconstruction |
Real-Time Medical Imaging |
Severe Storm Forecasting |
Design of Advanced Aircraft |
DNA Sequence Matching |
Molecular Simulations for nanotechnology |
Large Scale Economic Modelling |
Intelligent Planetary Spacecraft |
Why does one need a petaflop (10^15 operations per second) computer? |
These are problems where quite viscous (oil, pollutants) liquids percolate through the ground |
Very sensitive to details of material |
Most important problems are already solved at some level, but most solutions are insufficient and need improvement in various respects:
|
Oil Reservoir Simulation |
Geological variation occurs down to the pore size of the rock -- almost 10^-6 metres -- model this (statistically) |
Want to calculate flow between wells which are about 400 metres apart |
10^3 x 10^3 x 10^2 = 10^8 grid elements |
30 species |
10^4 time steps |
300 separate cases need to be considered |
3x10^9 words of memory per case |
10^12 words total if all cases are considered in parallel |
10^19 floating point operations |
3 hours on a petaflop computer |
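A back-of-the-envelope check of these numbers (a sketch only -- the ~1000 operations per grid point per species per time step is an assumed figure chosen to reproduce the foil's totals): |
    # Back-of-the-envelope check of the reservoir estimate.
    grid_points = 1e3 * 1e3 * 1e2            # 10^8 grid elements
    species, time_steps, cases = 30, 1e4, 300
    ops_per_point = 1e3                      # assumed ops per point, per species, per step

    flops = grid_points * species * time_steps * cases * ops_per_point   # ~10^19
    words = 3e9 * cases                      # ~10^12 words if all cases run in parallel
    hours = flops / 1e15 / 3600              # on a 10^15 ops/sec machine
    print(f"~{flops:.0e} operations, ~{words:.0e} words, ~{hours:.1f} hours")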
Semiconductor component technology
|
Architecture
|
System software
|
Extrapolated from SIA Projections to year 2007 -- See Chapter 6 of Petaflops Report -- July 94 |
Conventional (Distributed Shared Memory) Silicon
|
Note Memory per Flop is much less than one to one |
Natural scaling says time steps decrease at the same rate as spatial intervals, and so the memory needed goes like (FLOPS in Gigaflops)**0.75
|
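The reasoning: in 3D, refining the mesh by a factor s multiplies memory by s^3 but operations by s^4 (spatial points times time steps), so memory grows like (performance)^(3/4). A sketch, normalized (by assumption) to about 1 GB per Gflop/s: |
    # Memory ~ (performance)^(3/4), normalized (by assumption) to ~1 GB at 1 Gflop/s.
    def memory_gigabytes(gigaflops):
        return gigaflops ** 0.75

    for gf in (1, 1e3, 1e6):                 # 1 Gflop/s, 1 Tflop/s, 1 Pflop/s
        print(f"{gf:9.0f} Gflop/s -> ~{memory_gigabytes(gf):,.0f} GB")
    # A petaflop machine needs ~30,000 GB, i.e. tens of terabytes, not a petabyte.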
Superconducting Technology is promising but can it compete with silicon juggernaut? |
Should be able to build a simple 200 GHz superconducting CPU with modest superconducting caches (around 32 Kilobytes) |
Must one use the same DRAM technology as for a silicon CPU? |
So there is a tremendous challenge in building latency-tolerant algorithms (as there is over a factor of 100 difference between CPU and memory speed), but with the advantage that a factor of 30-100 less parallelism is needed |
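Rough arithmetic behind the latency challenge (the ~50 ns DRAM access time is an illustrative assumption): |
    # Main-memory latency measured in CPU cycles (DRAM access time assumed ~50 ns).
    dram_ns = 50
    for name, clock_ghz in (("CMOS ~1 GHz", 1), ("RSFQ 200 GHz", 200)):
        cycles = dram_ns * clock_ghz         # ns * (cycles per ns)
        print(f"{name}: ~{cycles:,} cycles to DRAM")
    # ~50 cycles versus ~10,000 cycles: the factor-of-100+ gap referred to above.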
Processor in Memory (PIM) architecture is a follow-on to the J-Machine (MIT), EXECUBE (IBM -- Peter Kogge), and Mosaic (Seitz)
|
In the year 2007 one could take each two-gigabyte memory chip and alternatively build it as a mosaic of
|
12,000 chips (the same amount of silicon as in the first design, but perhaps more power) gives:
|
Performance data from uP vendors |
Transistor count excludes on-chip caches |
Performance normalized by clock rate |
Conclusion: Simplest is best! (250K Transistor CPU) |
[Plots: normalized SPECint and SPECfp performance versus millions of CPU transistors] |
Fixing 10-20 Terabytes of Memory, we can get |
16,000-way parallel natural evolution of today's machines, with various architectures from distributed shared memory to clustered hierarchy
|
5,000-way parallel superconducting system with 1 Petaflop performance, but a terrible imbalance between CPU and memory speeds |
12-million-way parallel PIM system with 12 petaflop performance and a "distributed memory architecture", as off-chip access will have serious penalties |
There are many hybrid and intermediate choices -- these are extreme examples of "pure" architectures |
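How parallelism counts and per-node speeds combine into aggregate performance (all per-node rates below are illustrative assumptions, not figures from the study): |
    # Per-node rates are assumptions chosen only to show how the counts combine.
    PFLOP = 1e15
    designs = {
        "conventional DSM (2007)": (16_000,     60e9),   # ~60 Gflop/s per node (assumed)
        "superconducting RSFQ":    (5_000,      200e9),  # 200 GHz CPU, ~1 flop/cycle (assumed)
        "processor in memory":     (12_000_000, 1e9),    # ~1 Gflop/s per tiny PIM CPU (assumed)
    }
    for name, (nodes, per_node) in designs.items():
        print(f"{name:24s}: {nodes:>10,} x {per_node:.0e} flop/s = {nodes*per_node/PFLOP:5.1f} Pflop/s")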
Overall Roadmap Technology Characteristics from SIA (Semiconductor Industry Association) Report 1994 |
L=Logic, D=DRAM, A=ASIC, mP = microprocessor |
We can choose technology and architecture separately in designing our high performance system |
Technology is like choosing ants, people, or tanks as the basic units in our society analogy
|
In HPCC arena, we can distinguish current technologies
|
Near term technology choices include
|
Further term technology choices include
|
It will cost $40 Billion for the next industry investment in CMOS plants, and this huge investment makes it hard for new technologies to "break in" |
Architecture is equivalent to organization or design in society analogy
|
We can distinguish formal and informal parallel computers |
Informal parallel computers are typically "metacomputers"
|
Metacomputers are a very important trend which uses similar software and algorithms to conventional "MPP's" but typically has less optimized parameters
|
Formal high performance computers are the classic (basic) object of study and are |
"closely coupled" specially designed collections of compute nodes which have (in principle) been carefully optimized and balanced in the areas of
|
In society, we see a rich set of technologies and architectures
|
With several different communication mechanisms with different trade-offs
|
Quantum-Mechanical Computers by Seth Lloyd, Scientific American, Oct 95 |
Chapter 6 of The Feynman Lectures on Computation edited by Tony Hey and Robin Allen, Addison-Wesley, 1996 |
Quantum Computing: Dream or Nightmare? Haroche and Raimond, Physics Today, August 96 page 51 |
Basically any physical system can "compute" as one "just" needs a system that gives answers that depend on inputs and all physical systems have this property |
Thus one can build "superconducting", "DNA" or "quantum" computers exploiting respectively superconducting, molecular or quantum-mechanical rules |
For a "new technology" computer to be useful, one needs to be able to
|
Conventional computers are built around bit ( taking values 0 or 1) manipulation |
One can build arbitrarily complex arithmetic if one has some way of implementing NOT and AND |
Quantum Systems naturally represent bits
|
Interactions between quantum systems can cause "spin-flips" or state transitions and so implement arithmetic |
Incident photons can "read" state of system and so give I/O capabilities |
Quantum "bits" called qubits have another property as one has not only
|
Lloyd describes how such coherent states provide new types of computing capabilities
|
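To make the qubit idea concrete, here is a tiny state-vector sketch (standard textbook linear algebra, not from the foils): a NOT gate flips the 0 state to the 1 state, while a Hadamard gate puts the qubit into an equal superposition of both: |
    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)          # the state |0>
    X = np.array([[0, 1], [1, 0]], dtype=complex)   # NOT gate: flips |0> <-> |1>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

    print(X @ ket0)            # [0, 1]  -> the state |1>
    state = H @ ket0           # (|0> + |1>) / sqrt(2): a coherent superposition
    print(np.abs(state) ** 2)  # measurement probabilities: [0.5, 0.5]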
Superconductors produce wonderful "wires" which transmit picosecond (10^-12 seconds) pulses at near speed of light
|
Niobium used in constructing such superconducting circuits can be processed by similar fabrication techniques to CMOS |
Josephson Junctions allow picosecond performance switches |
BUT IBM (1969-1983) and Japan (MITI 1981-90) terminated major efforts in this area |
New ideas have resurrected this concept using RSFQ -- Rapid Single Flux Quantum -- approach |
This naturally gives a bit which is 0 or 1 (or in fact n units!) |
This gives interesting circuits of similar structure to CMOS systems but with a clock speed of order 100-300 GHz -- a factor of 100 better than CMOS, which will asymptote at around 1 GHz (= one nanosecond cycle time) |
At least two major problems: |
The semiconductor industry will invest some $40B in CMOS "plants" and infrastructure
|
One cannot build memory to match the CPU speed, and current designs have superconducting CPU's (with perhaps 256 Kbytes of superconducting memory per processor) but conventional CMOS main memory
|
Superconducting technology also has a bad "name" due to the IBM termination! |
Sponsored by NSF, DARPA and NASA |
Eight awards made of $100,000 each |
6-month study of architecture / SW environment / algorithms |
Reconfigurable OO architecture |
Processor in memory architecture |
Algorithmic focus |
Hierarchical design |
Aggressive cache only architecture |
Architecture for N-body problems |
Single quantum flux superconducting design |
Optical interconnect |
Relatively Conventional but still Innovative!
|
Special Purpose Systems
|
Architecture Innovation (Perhaps Special Purpose)
|
Radical Technology Innovation (Superconducting Processors)
|
See Original Foil |
Cluster of workstations or PC's |
Heterogeneous MetaComputer System |
N-body problems (e.g. Newton's laws for one million stars in a globular cluster) can have successful special-purpose devices |
See the GRAPE (GRAvity PipE) machine (Sugimoto et al., Nature 345, page 90, 1990)
|
Note GRAPE uses EXACTLY the same parallel algorithm that one finds in the books (e.g. Solving Problems on Concurrent Processors) for N-body problems on classic distributed-memory MIMD machines |
See Original Foil |
GRAPE will execute the classic O(N^2) (parallel) N-body algorithm, BUT this is not the algorithm used in most such computations |
Rather there is the O(N) or O(N log N) so-called "fast multipole" algorithm, which uses a hierarchical approach |
|
So special purpose devices cannot usually take advantage of new nifty algorithms! |
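The contrast in code form: a minimal direct-sum gravity kernel of the O(N^2) type that GRAPE hardwires (a sketch; units and softening are arbitrary). The fast multipole and tree algorithms replace this double loop with a hierarchy of cells -- exactly the restructuring a fixed-function pipeline cannot follow: |
    import numpy as np

    def direct_accelerations(pos, mass, eps=1e-3):
        # O(N^2) direct-sum gravitational accelerations (G = 1), softened by eps.
        acc = np.zeros_like(pos)
        for i in range(len(mass)):
            d = pos - pos[i]                          # vectors to every other particle
            r2 = (d * d).sum(axis=1) + eps * eps      # softened squared distances
            r2[i] = np.inf                            # skip the self-interaction
            acc[i] = (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
        return acc

    pos = np.random.rand(1000, 3)
    mass = np.ones(1000) / 1000
    acc = direct_accelerations(pos, mass)             # one force evaluation step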
Chip          First Silicon   Peak            Storage     MB/Perf.             Organization
EXECUBE       1993            50 Mips         0.5 MB      0.01 MB/Mip          16-bit SIMD/MIMD CMOS
AD SHARC      1994            120 Mflops      0.5 MB      0.005 MB/MF          Single CPU and memory
TI MVP        1994            2000 Mops       0.05 MB     0.000025 MB/Mop      1 CPU, 4 DSPs
MIT MAP       1996            800 Mflops      0.128 MB    0.00016 MB/MF        4 superscalar CPUs
Terasys PIM   1993            625 M bit ops   0.016 MB    0.000026 MB/bit op   1024 16-bit ALUs
[Plots: MB per cm^2, MF per cm^2, and the resulting MB/MF ratios] |
See Original Foil |
PetaFLOPS Applications which are grouped into sets with an interface to their own |
Problem Solving Environments |
Application Level or Virtual Problem Interface ADI |
Operating System Services |
Multi Resolution Virtual Machine Interfaces joining at lowest levels with |
Machine Specific Software |
Hardware Systems |
The mission critical applications |
Development of shared problem solving environments with rich set of application targeted libraries and resources |
Development of common systems software |
Programming environments from compilers to multi-level runtime support at the machine independent ADI's |
Machine specific software including lowest level of data movement/manipulation |
Start now on initial studies to explore the possible system architectures. |
These "PetaFLOPS software point studies" should be interdisciplinary involving hardware, systems software and applications expertise. |
August 28 1996 |
Geoffrey Fox |
All proposed hardware architectures have a complex memory hierarchy which should be abstracted with a software architecture
|
This implies a layered software architecture reflected in all components
|
The Software Architecture should be defined early on so that hardware and software respect it!
|
Users and Compilers must be able to have full control of data movement and placement in all parts of petaflop system |
Size and Complex Memory Structure of PetaFlop machines represent major challenges in scaling existing Software Concepts |
Well the rest of the Software World is Changing with emergence of WebWindows Environment! |
Current approaches (HPF, MPI) lack the capability needed to address the memory hierarchy of either today's or any contemplated future high-performance architecture -- whether sequential or parallel |
Problem Solving Environments are needed to support complex applications implied by both Web and increasing capabilities of scientific simulations |
So I suggest rethinking High Performance Computing Software Models and Implementations! |
PetaFlop Applications which are grouped into sets with an interface to their own |
Problem Solving Environments |
Application Level or Virtual Problem Interface ADI |
Operating System Services |
Multi Resolution Virtual Machine Interfaces joining at lowest levels with |
Machine Specific Software |
Hardware Systems |
Domain Specific Application Problem Solving Environment |
Numerical Objects in (C++/Fortran/C/Java) High Level Virtual Problem |
Expose the Coarse Grain Parallelism of the Real Complex Computer |
Expose All Levels of Memory Hierarchy of the Real Complex Computer |
Virtual Problem / Application ADI |
Multi-Level Machine ADI |
Pure Script (Interpreted) |
High Level Language but Optimized Compilation |
Machine Optimized RunTime |
Semi-Interpreted a la Applets |
MPI represents data movement with an abstraction of the machine as having just two levels of memory
|
This was a reasonable model in the past, but even today it fails to represent the complex memory structure of a typical microprocessor node |
Note the HPF distribution model has a similar (to MPI), relatively simple underlying abstraction for the PEM |
This addresses memory hierarchy intra-processor as well as inter-processor
|
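To make the contrast concrete, here is the two-level MPI picture in a minimal sketch using mpi4py (a modern Python binding used purely for illustration; it is not part of the study): the programmer sees local memory plus explicit messages, and nothing in the model describes caches, NUMA domains, or the deeper levels just mentioned: |
    # Run with, e.g.:  mpiexec -n 2 python two_level.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local = np.full(4, rank, dtype='d')   # level 1: memory local to this process

    # Level 2: everything remote, reachable only by explicit messages.
    if rank == 0:
        comm.Send(local, dest=1, tag=0)
    elif rank == 1:
        remote = np.empty(4, dtype='d')
        comm.Recv(remote, source=0, tag=0)
        # MPI says nothing about where 'remote' lands in the cache/DRAM hierarchy.
        print(remote)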
Memory hierarchy levels shown: Level 2 Cache, Level 1 Cache |
Software layers, from high level to low level: |
Application Specific Problem Solving Environment |
Coarse Grain Coordination Layer e.g. AVS |
Massively Parallel Modules (libraries) -- such as DAGH, HPF, F77, C, HPC++, HPJava |
Fortran or C plus generic message passing (get, put) and generic memory hierarchy and locality control |
Assembly Language plus specific (to architecture) data movement, shared memory and cache control |
The main JNAC Program is a mix of both research and development, with |
Development focused on JNAC machines and identified application areas, along the lines of a Broad Systems Architecture established (evaluated and evolved) by JNAC |
Work should start now on initial studies to explore the possible system architectures and |
Suggest locations for the "sweet-spots" (necks in the hourglass) to define interfaces |
These "petaflop software point studies" should be interdisciplinary involving hardware, systems software and applications expertise |
Establish and Review Software Architecture and consistent use of Interfaces |
JNAC Architecture Review Board |
The Five Software Development Areas |
The mission Critical Applications |
Development of Approximately 3 shared Problem Solving Environments with rich set of application targeted libraries and resources
|
Development of Common Systems Software |
Programming Environments from Compilers to multi-level runtime support at the machine independent ADI's |
Machine Specific software including lowest level of data movement/manipulation |
code generation |
memory management |
routing/interconnect |
thread management |
diagnostics |
fault containment |
interrupt handling |
device drivers |
scalable filesystems |
networking interfaces |
scheduling |
HL-memory management |
HL-latency management |
performance data |
debugging tools |
intermediate code representations |
object files (a.out) |
HL-resource management |
query of systems state |
operating systems services |
compiler middleware |
basic visualization tools |
numerical libraries |
Define a "clean" model for machine architecture
|
Define a low level "Program Execution Model" (PEM) which allows one to describe movement of information and computation in the machine
|
On top of the low-level PEM, one can build a hierarchical (layered) software model
|
One can program at each layer of the software and augment it by "escaping" to a lower level to improve performance
|
This is not really a simple stack but a set of complex relations between layers with many interfaces and modules |
Interfaces are critical (for composition across layers) |
|
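A toy illustration of "escaping" to a lower layer (all names here are hypothetical, not part of any proposed JNAC interface): a high-level operation has a portable default, but a machine-tuned kernel can be registered behind the same interface when performance demands it: |
    # Hypothetical layered interface: a portable default with a lower-level escape hatch.
    _kernels = {}

    def register_kernel(name, fn):
        # Lower layer: install a machine-tuned implementation behind the interface.
        _kernels[name] = fn

    def dot(x, y):
        # Upper layer: portable semantics; uses a tuned kernel if one is registered.
        fast = _kernels.get("dot")
        if fast is not None:
            return fast(x, y)                     # escape to the lower level
        return sum(a * b for a, b in zip(x, y))   # portable fallback

    import numpy as np
    register_kernel("dot", lambda x, y: float(np.dot(x, y)))
    print(dot([1.0, 2.0], [3.0, 4.0]))            # 11.0, now via the tuned path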
Higher level abstractions, nearer to the application domain, enable the next 10,000 users |
Increasing machine detail, control and management serves the first 100 pioneer users |
1) Deep memory hierarchies present new challenges to high-performance implementation of programs
|
2) There are two dimensions of memory hierarchy management
|
3) One needs a machine "mode" which supports a predictable and controllable memory system, leading to communication and computation with the same characteristics
|
4) One needs a low-level software layer which provides direct control of the machine (memory hierarchy etc.) by a user program
|
5) One needs a layered (hierarchical) software model which supports an efficient use of multiple levels of abstraction in a single program.
|
6) One needs a set of software tools which match the layered software (programming model)
|
1) Explore issues in the design of petaComputer machine models which will support the controllable hierarchical memory systems in a range of important architectures
|
2) Explore techniques for control of memory hierarchy for petaComputer architectures
|
3) Explore issues in designing layered software architectures -- particularly efficient mapping and efficient interfaces to lower levels
|
There are compelling applications |
New architectures need to be investigated |
Component technologies need to be developed |
Major advances are needed in system software and tools |
Industry is less likely than ever to push limits. |
Key, focused R&D must be explicitly funded |
Program is mostly D augmented with increases in R in HW/SW. |
Advanced systems designed and prototyped by the program. |
D will need strong central management. |
Applications tightly coupled with coordinated SW development groups. |
Target dozens of applications (not 100's) |
100's of programmers not thousands |
Deploy PF class systems < 10 years |
Starting in FY98 |
Multiple technology options |
New technologies and architectures |
Balance vendor vs direct development |
Open RFP for future systems |
Three "tracks" for illustration (might be more or less)
|
Deploy systems continuously |
Span generations with software model |
Pull with RFPs |
Push with technology investments |
Chip Interface:
|
Optical Networks:
|
Superconducting Memories:
|
Holographic Memories |
Natural Evolution Systems:
|
Special Purpose Architecture:
|
Hybrid Technology Architecture Development:
|
PetaFLOPS Languages:
|
Operating Systems:
|
Runtime Systems |
Algorithms to reduce latency associated with petaFLOPS-scale memory hierarchies & processor ensembles. |
Driver Applications:
|
SW level interface definitions |
Projection of performance requirements to lower levels (performance based design) |
Applications analysis wrt specific programming models (machines) |
Experimental testbeds simulated/modeled on existing MPP |
The next three foils isolate some differences and commonalities in the two programs |
Both set a hardware goal (teraflop for HPCC and petaflop for JNAC) to focus activity but in each case systems and applications were main justification |
Both couple applications with software and architectures in multidisciplinary teams with multi-agency support |
HPCC was dominantly research
|
HPCC inevitably developed MPP's and transferred parallel computing to computing mainstream
|
HPCC aimed at Grand Challenges in industry, government and academia
|
HPCC developed software (PSE's) largely independently in each Grand Challenge
|
HPCC tended to develop hardware with rapidly changing architectures which software "chased" rather laboriously
|
HPCC aimed to transfer technology to Industry for commercialization
|
HPCC is Research --> Capitalization --> Product
|
HPCC was a broad program aimed at "all" (large scale) users of computers
|
Need to invest in computing at the high end. |
PetaFLOPS levels of performance are feasible. |
Private sector is not going to do it alone. |
Conduct detailed PetaFLOPS architecture design & simulation studies. |
Initiate early software development of layered architecture. |
Develop PetaFLOPS scale latency management |
Accelerate R&D in advanced technologies. |
Invent algorithms for special purpose and reconfigurable structures. |
The PetaFLOPS Frontier (Oct. 96)
|
PetaFLOPS Algorithms Workshop (Apr. 97) |
PetaFLOPS II Conference (Sep. 97)
|
Engage community in establishing challenges, directions, topics for research |
Location: Williamsburg Hospitality House, Williamsburg, VA; April 20-25, 1997 |
Chair: David Bailey, NASA Ames |
Objectives:
|
Coordinate with High End Computing & Computation (HECC) Working Group. |
Develop Technical Approach --NOW |
Strategy for developing National Initiative. |
Multi-agency efforts. |
Federal agencies plan for FY'98 budget submission |