Given by Geoffrey C. Fox at several presentations in the second half of 1996. Foils prepared July 6, 1996
This collects together miscellaneous foils used in research presentations during the second half of 1996 |
The first group of foils was used on the trip to China, July 12-28, 1996 |
Geoffrey Fox |
NPAC |
Syracuse University |
111 College Place |
Syracuse NY 13244-4100 |
http://www.npac.syr.edu/users/gcf/hpcc96status/index.html |
Presented during trip to China, July 12-28, 1996 |
Geoffrey Fox |
NPAC |
Syracuse University |
111 College Place |
Syracuse NY 13244-4100 |
We describe the structure of the seven talks making up this review of HPCC, from today to the Web and Petaflop performance in the future |
Here we describe the current status, with HPCC in some sense both a failure and a great success |
This requires looking at hardware, software, and the critical lack of commercial adoption of this technology |
We discuss COTS and trickle-up and trickle-down technology strategies |
We describe education and interdisciplinary computational science in both the simulation and information arenas |
http://www.npac.syr.edu/users/gcf/hpcc96hardware/index.html |
Presented during trip to China, July 12-28, 1996 |
Geoffrey Fox |
NPAC |
Syracuse University |
111 College Place |
Syracuse NY 13244-4100 |
We describe the basic technology driver -- the CMOS Juggernaut -- and some new approaches that could be important 10-20 years from now |
We describe, from an elementary point of view, the basics of parallel (MPP) architectures |
We discuss the current situation for tightly coupled systems -- convergence to distributed shared memory |
We discuss clusters of PCs/workstations -- MetaComputing |
http://www.npac.syr.edu/users/gcf/hpcc96software/index.html |
Presented during trip to China, July 12-28, 1996 |
Geoffrey Fox |
NPAC |
Syracuse University |
111 College Place |
Syracuse NY 13244-4100 |
We start with an overall discussion of the types of software environments and when they apply |
Data Parallel and Message Passing are still critical, but the situation is confused by the immaturity of parallel compilers (see the message-passing sketch after this abstract) |
We then discuss current work involving Xiaoming Li with HPF and the Parallel Runtime Compiler Consortium |
MetaComputing is an emerging field of importance, and we sketch our plans for MetaWeb |
Java threatens to change the ballgame! |
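As an illustration of the message-passing model discussed above -- a minimal sketch of mine, not material from the talk -- here is a tiny MPI program in C using only standard MPI-1 calls; the programmer moves every byte explicitly, which is exactly the burden data-parallel compilers try to lift |

/* Minimal message-passing sketch: rank 0 sends an array to rank 1.
   Illustrative only -- just the basic MPI-1 send/receive calls. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    int data[4] = {1, 2, 3, 4};
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* The programmer, not the compiler, decides what moves where */
        MPI_Send(data, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(data, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d %d %d %d\n",
               data[0], data[1], data[2], data[3]);
    }

    MPI_Finalize();
    return 0;
}

In contrast, a data-parallel language such as HPF asks the compiler to generate such transfers automatically, which is where the immaturity of parallel compilers shows |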
http://www.npac.syr.edu/users/gcf/hpcc96appls/index.html |
Presented during trip to China, July 12-28, 1996 |
Geoffrey Fox |
NPAC |
Syracuse University |
111 College Place |
Syracuse NY 13244-4100 |
We describe HPCC applications, starting with the many successes of the Federal Grand Challenge Program in government and academic areas |
As a survey discovered, this does not translate into acceptance by industry |
We describe the trend to the more broadly based National Challenges |
Industry has neither adopted the use of HPCC in its business operations, nor has a viable software and systems industry (at the high end) been created |
The resolution of this "dilemma" of industry versus national need in government and academia will underlie future programs |
http://www.npac.syr.edu/users/gcf/hpcc96pse/index.html |
Presented during trip to China, July 12-28, 1996 |
Geoffrey Fox |
NPAC |
Syracuse University |
111 College Place |
Syracuse NY 13244-4100 |
Problem Solving Environments -- PSEs -- are seen in all fields, from health care and education to the engineering design of a new aircraft |
We illustrate with the telemedicine Bridge concept |
And show in detail the integration of the NII and computation in ASOP -- next-generation integrated manufacturing and design |
We give a couple of simple Web Computing Examples |
And outline NPAC's Web based strategy |
We describe needed enabling technologies and give a set of recommendations for progress coming from a panel led by John Rice of Purdue |
http://www.npac.syr.edu/users/gcf/hpcc96petaflop/index.html |
Presented during trip to China, July 12-28, 1996 |
Uses material from Peter Kogge -- Notre Dame |
Geoffrey Fox |
NPAC |
Syracuse University |
111 College Place |
Syracuse NY 13244-4100 |
This describes some aspects of a national study of the future of HPCC, which started with a meeting at Pasadena in February 1994 |
The SIA (Semiconductor Industry Association) projections are used to define feasible memory and CPU scenarios |
We describe hardware architectures, with superconducting and PIM (Processor in Memory) possibilities for the CPU and optics for the interconnect |
The software situation is captured by notes from a working group at the June 1996 Bodega Bay meeting |
The role of new algorithms is expected to be very important |
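As a rough worked example of the scale involved (my arithmetic, not a figure from the study): a petaflop is 10^15 operations per second, so even with an optimistic 1 GHz (10^9 cycles per second) clock a machine must complete 10^15 / 10^9 = 10^6 operations every cycle -- roughly million-way concurrency -- which is why PIM, superconducting logic, and optical interconnects enter the discussion |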
http://www.npac.syr.edu/users/gcf/hpcc96web/index.html |
Presented during trip to China, July 12-28, 1996 |
Geoffrey Fox |
NPAC |
Syracuse University |
111 College Place |
Syracuse NY 13244-4100 |
This describes our approach, focusing on the integration of information and computing and concentrating on coarse-grain functionality |
WebFlow: dataflow (as in AVS) using the Web, with databases and number crunching |
MetaWeb: MetaComputing, or rather cluster management, using the Web |
RSA Factoring was our first successful example |
Financial modelling will be an obviously important commercial application |
Java plays a critical role in high level user interfaces for visual programming, visualization of data and performance |
Web Interfaces to HPF will be particularly useful initially in education -- programming laboratories on the Web |
VRML is an interesting 3D data structure |
I find the study interesting not only in its results but also in its methodology of several intense workshops combined with general discussions at national conferences |
Exotic technologies such as "DNA Computing" and Quantum Computing do not seem relevant on this timescale |
Note that clock speeds will NOT improve much in the future, but the density of chips will continue to improve at roughly the current exponential rate over the next 10-20 years |
Superconducting technology is currently seriously limited by the lack of an appropriate memory technology that matches its factor of 100-1000 faster CPU processing |
The current project views software as perhaps the hardest problem |
All proposed designs have VERY deep memory hierarchies, which are a challenge to algorithms, compilers, and even communication subsystems |
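As a hedged illustration of this point -- mine, not the study's -- consider the classic blocked matrix multiply in C below: the tile size must be retuned for every level of the hierarchy, and a compiler must discover this restructuring automatically; the names and values (matmul_blocked, N = 512, TILE = 32) are illustrative assumptions |

/* Blocked (tiled) matrix multiply: c += a*b on N x N matrices.
   TILE is one tuning knob per memory level; a deep hierarchy means
   many such knobs, for the algorithm designer or the compiler. */
#define N    512
#define TILE 32

void matmul_blocked(double a[N][N], double b[N][N], double c[N][N])
{
    for (int ii = 0; ii < N; ii += TILE)
        for (int jj = 0; jj < N; jj += TILE)
            for (int kk = 0; kk < N; kk += TILE)
                /* work on one TILE x TILE block that fits in a fast level */
                for (int i = ii; i < ii + TILE; i++)
                    for (int j = jj; j < jj + TILE; j++)
                        for (int k = kk; k < kk + TILE; k++)
                            c[i][j] += a[i][k] * b[k][j];
}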
The major need for high-end performance computers comes from government (both civilian and military) applications |
Government must develop systems using commercial suppliers but NOT rely on traditional industry applications as motivation |
So currently the Petaflop initiative is thought of as an applied development project, whereas HPCC was mainly a research endeavour |
Collaborative computing technology |
Configuration control and human-in-the-loop (Computational steering) |
Computational geometry and grid generation |
Scalable algorithms |
Scalable solver libraries |
Parallel/Distributed computing --- metacomputing |
Fault tolerance and security |
Federated multi-media databases |
File system and I/O technologies |
Visualization including virtual reality, televirtuality etc. |
Interactive interface development (GUI) technologies |
Symbolic manipulation and automatic code generation |
Artificial intelligence and expert systems |
Performance monitoring and modeling |
"Low-level" virtual machine such as MPI, PVM etc. |
"Fine grain high-level" languages (C++, HPF etc.) |
Software engineering and coarse grain software (software bus) integration |
"Web-ware" and scripting middle-ware(Perl, Java, VRML, Python, etc.) |
Agent search and communication systems |
Wrapper technology for legacy systems and interoperability |
Interface specification support and information exchange protocols (such as CORBA, Opendoc, metadata and web standards) |
Object oriented software technology - object transport and management |
PSE templates and frameworks |
Note that some steps -- such as Collaborative Environments -- will come from general NII activities |
Others such as integrated grid generators and geometry plus CFD solvers, and distributed scientific objects must come from HPCC |
Originally $2.9 billion over 5 years starting in 1992 and ... |
This drove the race to Teraflop performance and is now OVER! |
The Grand Challenges |
Most of the activities it started are ongoing! |
It achieved its goal of Teraflop performance -- see the Intel P6 machine at Sandia |
But it failed to develop a viable commercial base |
And although the hardware peak performs at the advertised rate, the software environment is poor |
Academic activities -- the NSF Supercomputer centers -- are very healthy, as it is much easier to put academic codes on MPPs since they are short in lifetime and lines of code |
The next initiatives -- based on the PetaFlop goal -- will include a federal development as well as a research component, as one cannot assume that "brilliant research" will be picked up by industry |
There are seven talks in this series: |
HPCC Status -- this talk -- Overall Technical and Political Status |
HPCC Today I -- MPP Hardware Architectures and Machines |
HPCC Today II -- Software |
HPCC Today III -- Applications -- Grand Challenges and Industry |
HPCC Tomorrow I -- Problem Solving Environments |
HPCC Tomorrow II -- Petaflop (10^15 Operations per second) in the year 2007? |
HPCC Tomorrow III -- The use of Web Technology in HPCC |
The future sees: |
We will discuss the first two here, the next three in "Petaflop futures" and I leave the last two to a future generation! |
Today we see the following "CMOS Juggernaut" architectures: |
SIMD: no commercial or academic acceptance except for special-purpose military (signal processing) and commercial (database indexing) applications |
Special Purpose: such as the GRAPE N-body machine, which achieves a Teraflop today and a petaflop in a few years -- requires small memory and small CPUs (see the sketch after this list) |
MIMD Distributed Memory: |
Shared Memory: |
Scalable Network (ATM) and Platforms (PCs running Windows 95) |
MetaComputing built from PCs and ATM as commodity parts (COTS) |
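To make the special-purpose point concrete, here is a hedged sketch (not GRAPE's actual code; the function name and softening parameter are my choices) of the direct-summation N-body kernel that GRAPE hardwires: O(n^2) arithmetic over only O(n) data, so enormous flop rates are reached with very little memory per simple processing element |

/* Direct-summation N-body kernel: gravitational accelerations with
   G = 1 and Plummer softening eps2.  The flop count grows as n^2
   while the data is only O(n) -- the regime special-purpose
   hardware such as GRAPE exploits. */
#include <math.h>

void accelerations(int n, const double x[][3], const double m[],
                   double acc[][3], double eps2)
{
    for (int i = 0; i < n; i++) {
        acc[i][0] = acc[i][1] = acc[i][2] = 0.0;
        for (int j = 0; j < n; j++) {
            if (j == i) continue;
            double dx = x[j][0] - x[i][0];
            double dy = x[j][1] - x[i][1];
            double dz = x[j][2] - x[i][2];
            double r2 = dx*dx + dy*dy + dz*dz + eps2;  /* softened */
            double inv_r3 = 1.0 / (r2 * sqrt(r2));
            acc[i][0] += m[j] * dx * inv_r3;
            acc[i][1] += m[j] * dy * inv_r3;
            acc[i][2] += m[j] * dz * inv_r3;
        }
    }
}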
The Computing World from Smart Card to Enterprise Server |
MPI and PVM as Message Passing Systems are very healthy, but the essential ideas are very old -- 10 to 20 years |
They are used because the systems work well (as they are relatively easy to build) and the users understand what to expect |
The Parallel C++ situation is very confused, with many competing standards |
HPF -- Data Parallel Fortran -- is a standard somewhat challenged by industry, as they find the compilers difficult and wish the language were simpler |
Users often find the performance of HPF disappointing and find it hard to predict what the compiler will do |
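To see why HPF performance is hard to predict, consider what the compiler must generate behind a single data-parallel array statement; below is a hedged sketch of mine (function name stencil_step and the halo layout are my assumptions) of the message passing an HPF compiler must synthesize for a 1D block-distributed stencil like a(2:n-1) = 0.5*(b(1:n-2) + b(3:n)) |

/* Hand-written equivalent of one data-parallel stencil statement on a
 * block-distributed array.  Each process owns b[1..nlocal] and keeps
 * one-element "halo" copies of its neighbours' edges in b[0] and
 * b[nlocal+1]; the two exchanges below are exactly the communication
 * an HPF compiler must generate invisibly.  Global boundary points
 * are assumed to be handled by the caller. */
#include <mpi.h>

void stencil_step(double *b, double *a, int nlocal, MPI_Comm comm)
{
    int rank, size;
    MPI_Status st;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* swap halos: send my right edge right, receive left halo, etc. */
    MPI_Sendrecv(&b[nlocal],     1, MPI_DOUBLE, right, 0,
                 &b[0],          1, MPI_DOUBLE, left,  0, comm, &st);
    MPI_Sendrecv(&b[1],          1, MPI_DOUBLE, left,  1,
                 &b[nlocal + 1], 1, MPI_DOUBLE, right, 1, comm, &st);

    /* purely local computation once the halos are in place */
    for (int i = 1; i <= nlocal; i++)
        a[i] = 0.5 * (b[i - 1] + b[i + 1]);
}

Hand-written message passing makes this cost explicit; HPF hides it, which is why users find the performance hard to anticipate |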