Received: from nova.npac.syr.edu by spica.npac.syr.edu (4.1/I-1.98K) id AA14438; Wed, 3 May 95 19:01:34 EDT
Message-Id: <9505032301.AA12120@nova.npac.syr.edu>
Received: from localhost.npac.syr.edu by nova.npac.syr.edu (4.1/N-0.12) id AA12120; Wed, 3 May 95 19:01:32 EDT
To: furm, hariri@cat, miloje, gcheng, edlipson@syr.edu, roman, marek, dcs
Cc: gcf, warzala, nora
Subject: gcf: Current text
Date: Wed, 03 May 95 19:01:32 -0400
From: warzala

Hi all,

Here's the text from the Collaborative Interaction and Visualization technical proposal that is being developed for Rome Lab. Each of you is responsible for contributing information to the sections identified below (approximately 1 page per section; only text at this stage). Hard copies of the formatted proposal will be provided tomorrow. Check your mailboxes.

The required information must be submitted to Preston Marshall of Vanguard Research, Inc. by midnight on Friday, 5 May. If you will have a problem meeting this deadline or if you have any questions regarding the info you need to provide, please contact Geoffrey to discuss this matter. Forward the required info to Vanguard at the following email address: marshall@cais.com.

all - 2.8
wojtek - 1.3, 2.3.1, 2.3.6
salim - 2.2.1, 2.3.2, 2.3.3
miloje - 2.2.1
gang - 2.2.3
lipson - 2.2.4
roman - 2.3.2
marek - 2.3.5, 2.4, 3.4
dona - 3.2.1

Thanks,
Steve

------- Forwarded Message

Received: from nova.npac.syr.edu by spica.npac.syr.edu (4.1/I-1.98K) id AA13583; Wed, 3 May 95 17:05:55 EDT
Message-Id: <9505032105.AA09429@nova.npac.syr.edu>
Received: from localhost.npac.syr.edu by nova.npac.syr.edu (4.1/N-0.12) id AA09429; Wed, 3 May 95 17:05:53 EDT
Reply-To: gcf@npac.syr.edu
To: warzala
Subject: Current text
Date: Wed, 03 May 95 17:05:53 -0400
From: gcf

ii. Table of Contents
iii. List of Illustrations and Tables
1.0 OVERALL BACKGROUND
1.1 Introduction (VRI)
1.2 National Information Infrastructure and Communication/Network Trends (SYR)
1.3 Virtual Reality, Visualization and Collaboration Trends (SYR)
1.4 High Performance Computing Trends (SYR)
1.5 Military Needs (VRI)
2.0 TECHNICAL PROGRAM
2.1 Introduction (VRI)
2.2 The Application Drivers (SYR Review)
2.2.1 Real-time Interactive Distributed Weather Information System
2.2.2 Joint/Coalition Service C2 Information System
2.2.3 Electromagnetic Scattering Simulation System
2.2.4 Medical Collaboration and Visualization System
2.3 Core Technologies (SYR Review)
2.3.1 Virtual Reality, including its integration with the Web
2.3.2 Compression and network management
2.3.3 Collaboration technologies including simulated environments
2.3.4 Geographical Information Systems (GIS)
2.3.5 Parallel and Distributed Multimedia Information Systems (Databases)
2.3.6 World Wide Web technologies including VR and high resolution video support
2.3.7 Parallel and Distributed Computing including HPCC and CORBA issues
2.4 Infrastructure (SYR Update)
2.5 Systems Integration (VRI)
2.6 Demonstrations (VRI)
2.7 Risk Analysis and Alternatives (VRI)
2.8 References (Both)
3.0 SPECIAL TECHNICAL FACTORS
3.1 Capabilities and Relevant Experience
3.1.1 NPAC (SYR Review)
3.1.2 Vanguard (VRI Review)
3.2 List of Related Government Contracts
3.2.1 NPAC (SYR)
3.2.2 Vanguard (VRI)
3.4 Infrastructure
3.4.1 Computational Resources at NPAC
3.4.2 Other Relevant Facilities
3.4.3 NYNET
4.0 SCHEDULE (VRI)
5.0 PROGRAM ORGANIZATION
5.1 Overall Structure (VRI)
5.2 Management and Technical Team (VRI)
5.2.1 NPAC (SYR)
5.2.2 Vanguard (VRI)
5.2.3 Other Supporting Capabilities (University Support) (SYR)
PART II -- CONTRACTOR STATEMENT OF WORK (VRI)
1.0 OBJECTIVE
2.0 SCOPE
3.0 BACKGROUND
3.1 Guiding Principles and Theme
3.2 Infrastructure
3.3 Open Standards and Dual-Use
4.0 TASKS
4.1 Application and Technology Roadmap Thrust I
4.1.1 Technology Assessment
4.1.2 C3 Concept Development
4.2 Application Objects Thrust II
4.2.1 Real-time Interactive Distributed Weather Information System
4.2.2 Joint/Coalition Service C2 Information System
4.2.3 Electromagnetic Scattering Simulation System
4.2.4 Medical Collaboration and Visualization System
4.3 Core Technologies Thrust III
4.3.1 Virtual Reality including its integration with the Web
4.3.2 Compression and network management
4.3.3 Collaboration technologies including simulated environments
4.3.4 Geographical Information Systems (GIS)
4.3.5 Parallel and Distributed Multimedia Information Systems (Databases)
4.3.6 World Wide Web technologies including VR and high resolution video support
4.3.7 Parallel and Distributed Computing including HPCC and CORBA issues
4.4 Infrastructure and Systems Integration Thrust IV
4.4.1 ATM and ISDN Networking Infrastructure including NYNET
4.4.2 Virtual Reality displays as clients on network at Rome (high-end) and NPAC
4.4.3 HPCC information and simulation servers
4.5 Demonstration and Systems Integration Thrust V
4.5.2 Assessment Planning
4.5.3 Assessment Performance
4.5.4 NYNET Demonstrations
4.5.5 National Demonstrations
4.6 Reporting and Management
4.6.1 Technical Reporting
4.6.2 DoD and Commercial Developer Liaison
4.6.3 Program Management

I. COVER PAGE

ii. Table of Contents

iii. List of Illustrations and Tables

IV. EXECUTIVE SUMMARY

Syracuse University will support the Collaborative Interaction and Visualization program through the integration of high performance computing and communications technologies and applications. This work shall include the use of the NYNET and associated parallel computer infrastructure as well as the Information Systems available to NPAC. These information systems include commercial offerings such as parallel databases (DB2, Oracle) as well as Geographical and MultiMedia Information Systems being developed at NPAC as enhancements to the common World Wide Web infrastructure. Further, this support shall also include NYNET Applications, Enabling Technology research, Distributed Computing systems, Multi-Media Networking, and Virtual Reality. Syracuse shall also provide the use of NPAC's Multi-Media Laboratory for both demonstrations and system development. A guiding principle in this work will be optimizing the interoperability and dual-use capabilities of systems. This will be implemented by using commercial hardware and software where possible and building other information storage, dissemination and visualization capabilities as well-designed modules which can be integrated into ARPA, Rome, commercial and other pervasive information infrastructure. Standards used will include CORBA (Common Object Request Broker), WWW (World Wide Web) and the scalable software standards such as MPI, HPF, and HPC++ developed by the national HPCC community.
Specific Components of Project

The project will be centered on four applications which will be used to motivate and demonstrate the technology:
A1) Real-time Interactive Distributed Weather Information System
A2) Joint/Coalition Service C2 Information System
A3) Electromagnetic Scattering Simulation System
A4) Medical Collaboration and Visualization System

These applications will be investigated with one major theme or guiding principle,
Th1) Collaboration and Visualization,
which will be used to guide the choice of technologies and the aspects of applications to stress. Note that we will implement each application as a separate component but linked into a single information system. For instance, a C2 Information System (A2) would allow access to weather information (A1) explored with 3D visualization. The same decision maker can examine the 3D electromagnetic signature of a plane (A3). Alternatively, after exploring the weather we can interact in real time with a medical team in the field (A4) discussing a surgical operation. A common linked VR and Web interface is possible as we will base our information system on a GIS allowing 3D exploration of all spatially located components of the military decision support system. As a further theme we will use
Th2) Training,
as this is also key to DoD and will ensure that our demonstrations and technologies can be easily transferred to operation, as we can bootstrap effective training on the project components using project capabilities!

We will use a conventional layered view of the future NII (National Information Infrastructure), with domain-specific applications built on top of generic services or "middleware". This project is application oriented, and so we will only develop technologies as needed to support the applications in the context of the chosen themes above. In fact, the nature of today's open architectures such as the World Wide Web is that much of the technology activity will be integration and tuning of existing artifacts as opposed to developing new modules. Technology areas that we have identified as likely to be very important include:
1. Virtual Reality including its integration with the Web; 3D displays will be used for both immersive and augmented visualization
2. Compression and network management to support transport of large images and animation (video) with high resolution
3. Collaboration technologies including simulated environments
4. Geographical Information Systems (GIS) with data fusion, planning, other overlays and a 3D VR interface as needed to support applications
5. Parallel and Distributed Multimedia Information Systems (Databases)
6. World Wide Web technologies including VR and high resolution video support
7. Parallel and Distributed Computing including HPCC and CORBA issues

This work will be built on top of HPCC infrastructure largely funded outside this project and already in place, except for the VR equipment at NPAC:
1. ATM and ISDN Networking Infrastructure including NYNET
2. Virtual Reality displays as clients on the network at Rome (high-end) and NPAC
3. HPCC information and simulation servers

The final component will be a set of integrated demonstrations. Initially, these will include NYNET demonstrations on an ongoing basis, but they will transition to more national and user involvement, based on National Demonstrations such as JWID using ATM connections through Rome Laboratory to LESN (Leading Edge Services Network).
1.0 Overall Background
1.1 Introduction (VRI)
1.2 National Information Infrastructure and Communication/Network Trends (SYR)
1.3 Virtual Reality, Visualization and Collaboration Trends (SYR)
1.4 High Performance Computing Trends (SYR)
1.5 Military Needs (VRI)
Data fusion and command and control systems for the military have successfully used the most advanced computer technology to enable real-time information processing and tactical decision support for commanders and intelligence officers in the field. High performance computing and communications will revolutionize these key military systems in two ways. Directly, the base technology will allow dramatic improvement in C3I capabilities. Secondly, this technology will enable the National Information Infrastructure (NII) and give rise to major new consumer and business industries. Thus, the military application will indirectly gain from this civilian activity, with a wealth of new dual-use high performance networked multimedia products available for insertion in defense systems. These dual-use technologies and applications will follow the Global Grid-TENET model for linking a multi-use network infrastructure, a global NII, to theatre-specific extensions. Directly analogous to military command and control applications, extensions from the NII are needed to support dual-use applications.

Air Force support of this project will enable us to generalize video on demand technologies developed here for national security applications to commercial (e.g., multimedia information systems for small businesses) and educational (e.g., the New York State Living Textbook project) purposes. We refer to the general environment as HPMMCC -- High Performance Multimedia Computing and Communications. In this proposal, we will evaluate and demonstrate video services enabled by HPMMCC in the context of their use in a dual-use decision support system. We believe that NYNEX, Rome Laboratory, and the InfoMall collaborators are well and in many ways uniquely positioned to take a leadership role in this area. NYNET is a high speed ATM network currently linking Rome Laboratory, Syracuse, and Cornell, with a state-wide extension expected soon. The combination of NYNET and the high performance parallel and distributed systems at NPAC and other NYNET sites represents a good prototype of the future NII. Thus we are confident that we can scale results from our (NYNET) environment to lessons for the NII and the similar dual-use Global Grid military systems.

The dual-use decision support system should demonstrate support for Air Force and civilian applications. This includes crisis management (command and control); simulation and information on demand for education; city and regional planning support; and multimedia decision support for public administrators. NII software should be built in a modular fashion to allow optimization for particular applications and to enable the involvement of small businesses. This is the InfoMall model of virtual corporations, with different system components developed by different organizations within a coordinated framework. The systems will be scalable so that they can be demonstrated on today's massively parallel systems, such as those on NYNET, but scaled in the future to the much larger systems which will be deployed as part of the NII. Supported clients should include personal computers as well as more powerful workstations.
2.0 Technical Program
2.1 Introduction (VRI)
We will center the project on four applications which will be used to motivate and demonstrate the technology. These include:
% Real-time Interactive Distributed Weather Information System
% Joint/Coalition Service C2 Information System
% Electromagnetic Scattering Simulation System
% Medical Collaboration and Visualization System
These applications will be investigated with one major theme or guiding principle, Collaboration and Visualization, which will be used to guide the choice of technologies and the aspects of applications to stress. Note that we will implement each application as a separate component but linked into a single information system. For instance, a C2 Information System would allow access to weather information (A1) explored with 3D visualization. The same decision maker can examine the 3D electromagnetic signature of a plane. Alternatively, after exploring the weather we can interact in real time with a medical team in the field discussing a surgical operation. A common linked VR and Web interface is possible as we will base our information system on a GIS allowing 3D exploration of all spatially located components of the military decision support system. As a further theme we will use Training, as this is also key to DoD and will ensure that our demonstrations and technologies can be easily transferred to operation, as we can bootstrap effective training on the project components using project capabilities!

Key to the Syracuse University approach is the development of an integrated framework from which the applications and technologies developed in this program will be both derived and demonstrated, as shown below. While there are any number of technologies that are both in use and evolving, the critical requirement for application to C3I systems is the ability to assess these technologies in a framework that best represents them in actual C3I applications, or their surrogates.

Figure 2.1-1 Our Approach Provides for Integrated Technology, Application, Demonstration, and Interoperability Planning

This approach enhances the Syracuse program by not only developing the candidate technology and applications, but also initiating this development with an integrated perspective of how these technologies address real-world C3I issues, and targeting their implementation in a form that will support demonstration and assessment in the context of the C3I problem. For example, the early identification of demonstration scenarios will ensure that interoperability requirements for the applications are addressed during the initial development and are satisfied by the infrastructure and application development, prior to actual demonstration integration.

2.2 The Application Drivers (SYR Review)
2.2.1 Real-time Interactive Distributed Weather Information System
Here we will identify an existing weather simulation which we will integrate with the Geographical Information System to allow 3D navigation through a combined weather/terrain model so that visibility and other weather-related issues can be explored. NPAC has experience with a tornado simulation built at an NSF center (Oklahoma) and a NASA climate code. We will choose a suitable code in discussions with Rome Laboratory and the experts in the field.

2.2.2 Joint/Coalition Service C2 Information System
As discussed above, all the applications will be structured as services available under a Web-based multimedia information system.
We will work with Rome Laboratory in identifying a particular JWID-oriented C2 demonstration. The subcontractor Vanguard will play a lead role here. The base multimedia information system will allow access to real-time digitized video streams based on technology work funded at NPAC by Rome Laboratory and a collaboration with the Newhouse and Maxwell Schools at Syracuse University. Excellent editing tools will use the leading edge AVID nonlinear digital editor integrated with our system. All other information modalities -- text, image, and simulation -- will also be supported.

2.2.3 Electromagnetic Scattering Simulation System
We will use an existing computational electromagnetics code based on work NPAC is aware of at Rome Laboratory, Syracuse Research Corporation, or in the University's Electrical Engineering department. This will be integrated into the GIS and VR display system with the help of world-class faculty in the CEM area at Syracuse University. The application will be implemented as a dynamic, time-dependent simulation.

2.2.4 Medical Collaboration and Visualization System
Here we will build on ongoing computational medicine activities on NYNET which involve the SUNY Health Science Center, NPAC, the University Physics department, and NYNEX. Three identified areas are collaborative telemedicine, VR-based fly-throughs of pathology images for advanced tumor identification, and the Visible Human. The latter is a database of over 40 gigabytes prepared by the NLM (National Library of Medicine), containing male and female 3D body images at better than millimeter resolution. NYNET and parallel servers are particularly relevant for such huge datasets, and we will develop fully immersive VR visualizations initially aimed at training and demonstration.

2.3 Core Technologies (SYR Review)
2.3.1 Virtual Reality, including its integration with the Web
3D displays will be used for both immersive and augmented visualization. We will examine VRML as a natural interface, as this is an open standard built on SGI Inventor technology and is hoped to become the "three dimensional HTML" standard. The applications and base technologies (T3, T4, T6) will naturally drive this technology effort. We will implement volume visualization methods, integrated with medical collaboratory systems, capable of constructing, displaying, animating, and manipulating 3D visualizations of certain medical diagnostic data, such as CAT scans.

2.3.2 Compression and network management
Compression and network management are critical to support transport of large images and animation (video) with high resolution. The very demanding VR and collaborative applications will stress network management and require the best compression. We will focus on methods with both high quality and fast encoding and decoding. The most promising technology, wavelets, will be explored and integrated in real-time systems meeting the stringent requirements of VR and medical diagnostics, for both still high-resolution images and video. Wavelets appear to require about one quarter of the bandwidth of JPEG to record images of comparable quality. We will also explore and integrate the most recent networking technologies, including ATM, ISDN, and ADSL, for image delivery, using novel network management techniques that ensure high network utilization via integration of ATM-level flow control mechanisms.

2.3.3 Collaboration technologies including simulated environments
Here we will build on the results of a Rome Laboratory funded project evaluating several collaboration software and hardware products.
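To make the compression requirements discussed in Section 2.3.2 concrete, the short sketch below estimates the link capacity needed for a single high-resolution video stream at several compression ratios. The frame size, frame rate, color depth, and compression ratios are illustrative assumptions chosen for this sketch only; they are not project requirements or measured results.

# Illustrative bandwidth estimate for compressed video delivery (Section 2.3.2).
# All parameters are assumptions chosen for illustration, not measured figures.

def video_bandwidth_mbps(width, height, bits_per_pixel, frames_per_sec, compression_ratio):
    """Approximate bandwidth in Mbps for a video stream at the given compression ratio."""
    raw_bits_per_sec = width * height * bits_per_pixel * frames_per_sec
    return raw_bits_per_sec / compression_ratio / 1e6

if __name__ == "__main__":
    # Assumed stream: 1024x768 pixels, 24-bit color, 30 frames per second.
    cases = [("uncompressed", 1),
             ("JPEG-class, assumed ~20:1", 20),
             ("wavelet-class, ~4x better than JPEG (~80:1)", 80)]
    for label, ratio in cases:
        print(f"{label:45s} ~{video_bandwidth_mbps(1024, 768, 24, 30, ratio):8.1f} Mbps")

Under these assumptions an uncompressed stream (roughly 570 Mbps) would exceed even a single OC-3 (155 Mbps) link, while wavelet-class compression of the kind described above would allow several simultaneous streams to share one link.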
We will concentrate on the interoperability issues for different industrial standards and products and on broadening the scope of the applications that can be shared over the network by multiple users. Further, we note that lower bandwidth may be used for collaboration over ISDN, which is integrated into NYNET at NPAC. At the high end we will evaluate an Argonne project using an SP-2 to support full 3D virtual environments based on the MOO paradigm.

2.3.4 Geographical Information Systems (GIS)
We will use a Geographical Information System (GIS) with data fusion, planning, other overlays, and a 3D VR interface as needed as the basic spatial interface supporting full 3D terrain navigation in immersive or augmented VR modes. This will build on State funded work for the Living Textbook project. An operational military GIS will need many spatial reasoning, image processing, and data fusion overlays. We will select some typical examples appropriate to support the chosen demonstrations. We request Rome Laboratory's support in obtaining DMA digital terrain and elevation data, as this is of higher resolution than that available through commercial channels at a reasonable price.

2.3.5 Parallel and Distributed Multimedia Information Systems (Databases)
We have discussed this under A2) above. Parallel, high-performance data repositories will be integrated with all information systems built in the current project. In particular, we will use parallel database technology integrated with Web technology to ensure rapid access and retrieval of all kinds of medical records, including medical imagery. We also expect to integrate ARPA funded concept-based text retrieval software from the University's IST school, which will allow the commander (decision-maker) to more effectively analyze both text and text-indexed video.

2.3.6 World Wide Web technologies including VR and high resolution video support
The proposed applications represent the very high end of existing Web video services, but we expect that new Web standards such as VRML (Virtual Reality Modelling Language, for common 3D data structures) and Java (supporting general client applications), combined with parallel servers, will allow us to use a Web-based approach to this project.

2.3.7 Parallel and Distributed Computing including HPCC and CORBA issues
We will use parallel and distributed computing to support simulations using scalable software systems. We will also work with the national parallel C++ community to integrate CORBA into the HPCC framework.

2.4 Infrastructure (SYR Update)
A key advantage of this proposal is the excellent existing infrastructure funded largely by the State of New York outside this proposal. The project infrastructure is described in detail in section 3.4, but the key components, including areas where project-specific enhancement is needed, are:
% ATM and ISDN Networking Infrastructure including NYNET -- This is in place between NPAC and Rome Laboratory, but we will greatly benefit from bridging of NYNET to national networks at Rome and extension of ATM capability to the SUNY Health Science Center and the several collaborating University departments.
% Virtual Reality displays as clients on the network at Rome (high-end) and NPAC -- Here we require expansion of the University's capability, and we will build a modest SGI based system that can prototype and demonstrate applications that can use Rome Laboratory equipment for their full exploration.
% HPCC information and simulation servers -- NPAC already has modest size parallel computers including an IBM SP-2, nCUBE 2, CM5, and Intel Touchstone. We hope to coordinate and collaborate with the nCUBE activity at Rome Laboratory, where, for instance, interesting speech recognition work of relevance is being performed. This can be used to produce good indexing of video material.

As was shown in Figure 2.1-1, during the integration planning we will identify interoperability requirements based on our planned demonstrations. This effort will ensure that this level of integration is available to support the planned activities. The periodic demonstrations that are planned with Rome Labs will provide ongoing validation of the success of this effort.

2.5 Systems Integration (VRI)
The integrated approach is keyed to an evaluation and demonstration process. Our integrated process ensures that many of the issues that complicate technology demonstrations have been addressed early in the program and are reflected in the ongoing effort. Specifically, the following key inputs will have been addressed:
1. What C3I requirements or issues should be addressed in this effort?
2. How are the technologies to be investigated in this effort responsive or supportive to these requirements? and
3. How can the success or failure of these approaches be assessed to provide insight into subsequent exploitation or enhancement?
Typically, technology and application programs develop capabilities and then, after completion, determine appropriate mechanisms for assessment and demonstration. In a departure from this approach, we will integrate our planning from the initiation of the effort. During this integration planning, we will ensure that:
1. The applications and technologies are meaningful, not only as standalone examples of technology, but that when integrated, they form a synergistic capability that is demonstrable and cohesive, and
2. The development of these capabilities is targeted and focused on an integrated evaluation process that is directly linked to C3I issues that can be demonstrated in the context of real-world problems.
The evaluation process we will perform is also key to utilizing and integrating the Rome Laboratory experience and knowledge of C3I systems into the effort at the earliest possible time. The road map we will establish will provide a joint Syracuse/Rome Lab vision of the program, the objectives, schedules, and, most importantly, a set of tangible demonstration products, with adequate lead time to enable them to be integrated into other Rome activities, such as JWID exercises. The top level integration process is shown below.

Figure 2.5-1 Our Planning Process Provides an End-to-End Vision of the Entire Effort

As shown in the Figure, the integration process is our vehicle to integrate the technology, the C3I objectives, and the demonstration concepts before performing the bulk of the effort. Based on the applications we have selected for development, we will select a subset of C3I objectives to represent meaningful C3I capabilities that are defined in terms of C3I missions rather than the implementing technologies. These objectives include both specific functionality (e.g. information display capabilities, embedded training, decision support and visualization in 3-dimensional problems, such as in a JFACC or ADTOC, etc.) and system attributes (e.g. interoperability, survivability, security, etc.).
C3I objectives will be compared with the technology capabilities to determine which objectives will be addressed by which technologies, and then specific applications will be identified for each of these. This three-dimensional matrix of objectives, technologies, and applications will be the basis for our ultimate demonstration planning, which will develop specific approaches to meaningfully demonstrating these capabilities in as realistic and representative an environment as possible. By identifying the "ultimate" objective, we will be able to identify the minimal level of infrastructure interoperability that can be accepted. We will identify these requirements and reflect them in the ongoing infrastructure development effort. Additionally, this integrated planning process provides a formal and meaningful opportunity for incorporating Rome Labs guidance that will leverage this effort through end-to-end planning. Although we have described this planning process in terms of its "up front" integration functions, we envision it as an ongoing process. We will feed back the results from the standalone application and technology development efforts into this process to adjust objectives and respond to emerging opportunities to better exploit the development and demonstration results. We will review and update this integration planning at least every three months during this effort.

2.6 Demonstrations (VRI)
The ultimate objective of this effort is not just to develop a large set of standalone applications and technologies, but to provide an integrated capability that can demonstrate their contribution in terms of realistic problems. In our early planning, we have already identified:
1. How we plan to integrate these capabilities,
2. The interoperability requirements that must be met to support the demonstration concept, and
3. The application and technology integration that will be required.
The remaining requirement is to determine and plan for appropriate venues for providing product visibility. This will be conducted early in the integration effort, based on resources and opportunities available to both Rome and the Syracuse team. Candidate opportunities include planned Joint Warrior Interoperability Demonstration (JWID) exercises, TACSSF exercises that are targets of opportunity, ARPA HPCC demonstrations, as well as related DISA Global Grid, Global Command and Control System (GCCS), or industry forums. Selection of specific demonstration opportunities will be performed jointly with Rome Labs. Our objective in these demonstrations is to address the key issue of which technologies and applications offer unique benefits to C3I systems and to C3I implementors. Our demonstration program is therefore geared to provide a means of low cost filtering of many candidate approaches. Additionally, we recognize that C3I evolution is driven by both technology and user requirements. The C3I technologies that will succeed in the next century are those that have strong user/operator community support and advocacy. Our objective in these demonstrations is not only to achieve a "yes/no" input regarding specific products or features, but also to provide a credible and meaningful product that will enable the research and user communities to jointly advocate continuation of unique or desirable technologies.
In particular, we plan to concentrate during the latter phases of this effort on enhancing our capability for selected behaviors to a level that will support user interaction and advocacy. Early demonstrations will be organized with Rome Labs and will be primarily project level events. We will solicit Rome Labs inputs at these early demonstrations and will progressively enhance the product until the required level of confidence and credibility is achieved. During the early phases of the effort, we will perform NYNET demonstrations on an ongoing basis, especially between Rome Laboratory and NPAC. We plan formal major events every six months. We expect an ongoing set of NYNET demonstrations, including those involving the Living Textbook. We will feature the results of this project whenever possible. This could require mobile lower end VR equipment for demonstrations that are not at NPAC, Cornell, or Rome. Further, we expect to arrange deliverables so that they can be demonstrated in three major project demonstrations at six month intervals.

2.7 Risk Analysis and Alternatives (VRI)
A major area of uncertainty is the World Wide Web, where technology and capabilities are evolving with amazing rapidity. Just over the last month, a major new capability -- Sun's Java system -- has emerged, while VRML, which is central to our effort, has clearly been important for some time; however, only recently has it become clear that browsers supporting it will be available. This rapid change does make the planning process quite hard, but these changes can only make the project better. We will follow new developments very closely to ensure that the project does not use outdated approaches.

2.8 References (Both)
1993. G. Fox, S. Hariri, R. Chen, K. Mills, M. Podgorny. InfoVision: Information, Video, Imagery, and Simulation On Demand. Syracuse Center for Computational Science Technical Report 575.
1993. K. Mills and G. Fox. HPCC Applications Development and Technology Transfer to Industry. In Postproceedings of The New Frontiers: A Workshop on Future Directions of Massively Parallel Processing, IEEE Computer Society Press, Los Alamitos, CA.
1994. K. Mills and G. Fox. InfoMall: An Innovative Strategy for High-Performance Computing and Communications Applications Development. To be published in Electronic Networking: Research, Applications, and Policy.
1994. G. Fox and K. Mills. Opportunities for HPCC Use in Industry: Opportunities for a New Software Industry. Chapter to be published in Parallel Programming: Paradigms and Applications, A. Zomaya, ed., Chapman and Hall Publishers, London, U.K. SCCS-617.
1994. G. Fox and K. Mills. High Performance Computing and Communications -- Yet Another Revolution in Education. 4th Annual 1994 IEEE Mohawk Valley Section Dual-Use Technologies and Applications Conference.
1994. G. Fox and K. Mills. A Classification of Information Processing Applications: HPCC Use in Industry. 4th Annual 1994 IEEE Mohawk Valley Section Dual-Use Technologies and Applications Conference.

3.0 Special Technical Factors
3.1 Capabilities and Relevant Experience
3.1.1 NPAC (SYR Review)
Professor Geoffrey Fox is currently a Professor of Computer Science and Physics, and Director of NPAC (Northeast Parallel Architectures Center) at Syracuse University. He was a Professor of Physics at Caltech from 1972 to 1990. He also held the positions of Executive Officer of Physics, Dean for Educational Computing, and Associate Provost for Computing at Caltech.
Prior to his tenure at Caltech, he held various positions at the Institute for Advanced Study in Princeton, Lawrence Berkeley Laboratory, the Cavendish Laboratory in Cambridge, England, Brookhaven National Laboratory, and Argonne National Laboratory. He led the team which developed the initial use of the hypercube in a large variety of scientific fields and has expanded this study to other supercomputer architectures. His expertise is in concurrent algorithms and software. He has published over 250 research papers in journals and conferences in the areas of high energy physics, parallel computers, and the application of parallel computers to computational science. At Syracuse, he leads the ACTION program aimed at integrating parallel computers into industry. He is part of the NSF Science and Technology Center CRPC (Center for Research in Parallel Computation, led by Ken Kennedy). He has led the Syracuse group developing the Fortran 90D (High Performance Fortran) compiler. He is also the chair of the ARPA Parallel Compiler Runtime Consortium.

Dr. Marek Podgorny, Associate Director for Technology, NPAC and InfoMall. Graduated with an MSc in experimental physics in 1973 from Jagellonian University, Cracow, Poland. Obtained a Ph.D. in computational physics in 1978 and habilitation (higher doctoral degree) in theoretical physics in 1992, both from Jagellonian University, Cracow, Poland. Postdoc at Nijmegen University, Holland, 1979, and at the National Institute of Nuclear Physics in Rome, Italy, 1980. Alexander von Humboldt Fellow in 1983-1984; Visiting Professor at the Universities of Dortmund and Bochum, Germany, 1985-1986 and 1990-1991. Director of the Jagellonian University Computing Center, 1987-1989. Senior Researcher at NPAC in 1992, Associate Director since 1993. Supervised the parallel database benchmarking project for ASAS, 1992-1993 (funding: $370K). Administers the New York State hardware grant for NPAC, 1993-1996 (funding: $6M). Granted Priority Worker status in the category of Outstanding Researchers by the INS. Has authored more than 50 research papers in theoretical and computational physics and in computer science.

Dr. Kim Mills, Associate Director for Special Projects, NPAC and InfoMall. Graduated in 1988 from the State University of New York College of Environmental Science and Forestry with a Ph.D. in environmental science. Technical Specialist at the Cornell Theory Center from 1988 to 1990, Research Scientist at NPAC from 1990 to 1993, Associate Director of NPAC since 1993. Coordinates NPAC's InfoVision demonstration projects; develops InfoMall industry projects in finance and information services; NPAC project leader in environmental modeling (NASA HPCC Grand Challenge data assimilation, IBM funded acid deposition modeling, tornado simulation modeling with the University of Oklahoma), education (AskERIC educational database, New York State Living Textbook), and Communitynet (Community Network of Central New York). Has authored 25 research papers on application issues in computational science and developing HPCC applications in industry. Project leader for the financial modeling application demonstrated over NYNET for the October 25, 1993 House Space and Science Subcommittee meeting at Rome Laboratory.

Professor Salim Hariri received a BSEE, with distinction, from Damascus University, Damascus, Syria, in 1977; an MSc from The Ohio State University, Columbus, Ohio, in 1982; and a Ph.D. in computer engineering from the University of Southern California in 1986.
Currently, he is an Associate Professor in the Department of Electrical and Computer Engineering at Syracuse University. He worked and consulted at AT&T Bell Labs on designing and evaluating reliable distributed computing environments between 1989 and 1992. His current research focuses on high performance distributed computing, high speed network design, expert systems for managing the performance of high speed networks, benchmarking and evaluating parallel and distributed software tools, software development methodology for heterogeneous high performance computing systems, and developing an evaluator of application performance in parallel/distributed computing environments. He has co-authored over 50 journal and conference papers and is the author of the book "High Performance Distributed Computing: Concepts and Design", to be published by Artech House, Inc. in 1994. Professor Hariri is the project leader for the benchmarking project of parallel and distributed software tools for BMC3I applications and for the virtual machine project sponsored by Rome Laboratory.

Dr. Roman Markowski, Researcher, NPAC. Graduated with an MSc in theoretical physics in 1980 from Jagellonian University, Cracow, Poland. Obtained a Ph.D. in computational physics from the same university in 1993. Deputy Director of the Jagellonian University Computing Center 1987-1990, Director 1990-1993. At NPAC since 1993 as Researcher and Project Leader.

3.1.2 Vanguard (VRI Review)
Vanguard Research, Inc. (VRI) is uniquely qualified to support Syracuse in integration, demonstration planning and execution, and open system standards. VRI's qualifications arise from two specific roles. In the area of evolving software standards, VRI has been a key player in the Ballistic Missile Defense Organization's (BMDO) initiatives in new and innovative approaches to software development. For example, VRI is a member of the Information Architecture effort, one of the largest object-oriented BM/C3 analyses ever performed. VRI supported the BMDO Options Assessment contracts, which centered on new technologies and concepts for the development of C3 systems. VRI is currently involved in the ongoing BM/C3 System Engineering and Integration acquisition. VRI also supports BMDO for representation on the DoD Open Standards policy boards. In related activities, VRI has supported C3-related projects including the development of TCP/IP and OSI-based network End-to-End Encryption devices (WINDJAMMER), Ground Entry Point engineering, analysis of terrestrial and MILSATCOM networking protocols, integration of C3 functionality into existing USSTRATCOM C2 systems, and a large number of C3-related studies and analyses for USSPACECOM and AFSPACE in Colorado Springs. VRI is also the SETA to the Air Force Space Command National Test Facility (NTF). This facility is a key player in Air Force war gaming and simulation efforts, serves as host to the Space Warfare Center (SWC) Cheyenne Mountain Test and Training System (CMTTS), and has roles in support of the Cheyenne Mountain Complex (CMC) and Cheyenne Mountain Upgrade (CMU) software programs. As NTF SETA, VRI has been directly involved in the last JWID, ongoing TACSSF exercises, and an ever increasing number of demonstration and war game activities. VRI is also directly involved in NTF's role as a major Defense Simulation Internet (DSI) node. Due to this familiarity, VRI will be able to support this effort in the area of technology demonstration and user assessment planning.
It brings familiarity with the major exercise participants and in-depth knowledge of organizing and executing these critical activities. VRI's program manager for this effort, Mr. Preston Marshall, is recognized as an expert in evolving C3 systems. As General Manager of VRI, he has been involved in all VRI activity in the area of C3 and has hands-on experience supporting BMDO in the areas of software development, system standards, and demonstration planning. He was part of the BMDO team that planned the BMDO/NTF participation in JWID 94, where an object passing architecture based on ObjectStore and commercial infrastructures was demonstrated.

3.2 List of Related Government Contracts
3.2.1 NPAC (SYR)
NPAC has several Government contracts studying HPCC technologies generally related to this proposal. However, the directly relevant contracts are:

NSF Cooperative Agreement No. CCR-9120008 (prime contractor: Rice University) (Center for Research in Parallel Computation), CRPC/Fortran Programming System Project, $1,052,390, 7/1/90-1/31/94, with at least two more years expected.

Rome Laboratory (AFMC) F-30602-93-C-0195, Applications and Enabling Technology for NYNET Upstate Corridor, $99,999, 8/9/93-4/8/94.

Rome Laboratory (AFMC) F-30602-92-C-0063, Software Engineering for High Performance C3I Systems: Parallel Software Benchmarks for BMC3/IS, $95,000, 4/7/92-2/6/93.

Rome Laboratory (AFMC) F-30602-92-C-0150, Virtual Machine Model Definition (with Berkeley & Purdue), $95,000, 8/31/92-2/28/93.

3.2.2 Vanguard (VRI)

3.4 Infrastructure
3.4.1 Computational Resources at NPAC
NPAC's computational resources offer a variety of the most recent HPCC technologies. Although the structure of the installed facilities is not geared toward high volume production needs, the Center offers considerable computing power. The real asset of the Center is the diversity of its resources. NPAC's technology expertise is as diverse as its infrastructure. The systems installed at NPAC provide testbeds for a multitude of computationally intensive projects and can be broadly divided into four categories:
% Parallel supercomputers
% Clusters of high-performance workstations
% Desktop workstations for program development and data visualization
% Networking infrastructure
In addition to internal resources, NPAC has access to other HPCC sites, including CRPC sites at Argonne, Caltech, and Los Alamos. These sites offer large scale production machines that can be used to run code developed locally on NPAC's facilities.

The parallel machines currently installed at NPAC include a CM5, an iPSC 860, two DECmpp machines, an SP1, and an nCUBE 2. They represent the major current architectural trends for MPP platforms. The configuration and utilization data for these machines are as follows:

The CM5 is a 32 node parallel system from Thinking Machines Corporation. Every node has 32 MB of memory, a SUN Sparc 1 scalar CPU, and four vector units. The nodes are connected by TMC's proprietary network with a quad-tree topology. The peak computational power of our CM5 is estimated at 3 GFLOPS. The CM5 is used in a number of ways at NPAC: it supports educational projects, serves as a compute server for a number of research projects, both internal and external, and is used for experiments in distributed computing.

The iPSC 860 from Intel is a 16 node parallel system with 256 MB of RAM in total and a hypercube interconnect topology. It features a small scale concurrent file system. The iPSC is used as the main resource for NPAC's High Performance Fortran development project.
The two DECmpp machines are large SIMD systems with 8K and 16K processors respectively, operating in a data parallel model. The DECmpp machines are equipped with a high speed I/O channel and can communicate over a HiPPI interface. This architecture makes them well suited for regular problems. The aggregate peak computing power of the DECmpp ensemble is approximately 2.5 GFLOPS. These machines are used in computational science projects, but are also ideally suited for image processing and data compression applications. The DECmpp platforms offer relatively sophisticated and user friendly software support.

The SP1 machine is the newest HPCC product of the IBM Corporation. Unlike the other machines installed at NPAC, the SP1 consists of a relatively small number of very powerful nodes (16 at present, with a maximum of 64 nodes). The node processor of the SP1 is an IBM RISC/6000 processor running at 62.5 MHz, one of the more powerful RISC processors available. Internal connectivity of the SP1 is provided by both a double Ethernet network and a high-performance switch with a crossbar topology (full connectivity). The parallel processing environment of the SP1 supports both distributed (loosely coupled) and parallel (tightly coupled) programming paradigms. Providing binary compatibility, the SP1 can be easily integrated with IBM workstation clusters (see below). This recently installed machine is used to develop software for Grand Challenge applications, including environmental and financial modeling. It is a flexible, general purpose platform with considerable computing power, supporting a variety of projects.

The nCUBE 2 installed at NPAC is a 64 node, 2 GB RAM machine with a MIMD hypercube architecture. The salient feature of the NPAC machine is its scalable parallel I/O subsystem. The subsystem, working in parallel with the computational processor array, consists of 32 nodes forming its own hypercube, with a SCSI controller and a disk array connected to every node. The current configuration supports 96 disks with a total storage capacity of nearly 0.2 TB. The I/O subsystem also supports networking processors and controllers. This design represents a state-of-the-art scalable I/O system, probably the most advanced in the industry. The nCUBE platform at NPAC supports the Parallel Oracle Database Server, a high performance, high capacity relational database management system. At present, this system is mainly being used for parallel database performance evaluation projects. However, it has the capability to support large, high performance data processing applications and can serve as a testbed for pilot projects involving mission- and time-critical decision support applications in a number of fields, including finance, health care, and GIS, and as a multimedia server.

The distributed computing facilities at NPAC include two clusters of high performance workstations: an IBM RS/6000 cluster and a DEC Alpha cluster. The IBM heterogeneous cluster consists of 8 workstations (models 550 and 340). Four of the workstations are connected via a high bandwidth, low latency Allnode switch. The IBM cluster is used intensively by a number of projects in computational physics and chemistry and by projects funded by NPAC industrial partners. Shortly, the cluster will be integrated with the SP1 platform. Both the SP1 and the cluster nodes will run the same Parallel Programming Environment software and Load Leveler scheduler.
This integration will almost double the CPU power of the SP1, presenting users with a single system view of both the SP1 and the cluster.

The DEC Alpha cluster consists of 8 Alpha model 400 compute servers. These workstations represent a very recent development in workstation technology. The cluster is supported by a high performance networking backbone consisting of dedicated, switched FDDI segments. This solution provides full FDDI bandwidth and low latency switching to every workstation in the cluster. The model implemented at NPAC exemplifies Digital's latest approach to distributed computing. Together with the DECmpp machines, the cluster is used to demonstrate the viability of using a highly heterogeneous computing environment to solve computational problems with very irregular structure.

Both clusters of workstations provide NPAC users with a unique distributed processing environment. The clusters support a growing class of computational problems that are both irregular (hence difficult to parallelize) and non-vectorizable (hence inefficient on vector supercomputers). NPAC researchers use heterogeneous networks of desktop workstations for everyday operation including program development, document processing, multimedia, and scientific visualization. Scientific visualization capability is also available for the IBM SP1/workstation cluster.

All NPAC facilities are networked. The main FDDI networking backbone is built around the Gigaswitch (Digital), a high speed, low latency multiswitch with interfaces to both FDDI and ATM networking technology. A section of the Gigaswitch supports the networking of the Alpha cluster. Other sections of the same device support access to NPAC's file servers and provide a connection to the router supporting Ethernet connectivity. The Gigaswitch is also used to interface internal NPAC LANs to NYNET. NPAC will shortly install a second networking backbone based on HiPPI technology. At present, the two DECmpp machines have HiPPI interfaces. This single point-to-point connection will be extended by adding a switch, HiPPI interfaces to other parallel platforms, and a HiPPI-to-ATM interface allowing long distance high speed communication. The HiPPI interface supports high bandwidth applications at NPAC. In the near future this capability will be extended to support geographically distributed applications requiring gigabit bandwidth technology.

As seen from the above review, NPAC's facilities represent a state-of-the-art HPCC center providing its partners with access to the newest technology. The unique capabilities of NPAC's computational infrastructure offer a number of benefits. A number of different parallel architectures are gathered in one place; this allows for extensive experiments and for identification of the most suitable architecture for a particular project. In addition to diverse parallel systems, loosely coupled distributed systems are supported as well; this extends the overall Center capability and allows for integration of the technologies required to solve extremely irregular problems. The integration capability is supported by the efficient networking backbone. This capability will soon be extended to applications running on geographically remote computers by interfacing the NPAC internal LAN to the NYNET high speed network. NYNET will also support applications that require very high bandwidth for data exchange between supercomputers. Both traditional "number crunching" and data processing platforms are available.
This allows us to test distributed applications that have both intensive data processing and computational components. Both database applications (parallel database server) and I/O intensive applications (parallel I/O subsystems) are supported. NPAC's heterogeneous environment offers considerable computing power to NPAC partners and users of its facilities.

3.4.2 Other Relevant Facilities
NYNET will link NPAC facilities to those at Rome Laboratory and Cornell. Initially, this link will be based on two OC-3 lines, but over the next two years it will provide multigigabit performance. At NPAC, a dedicated ATM cluster is directly connected to the NYNET OC-3 links. This cluster consists of three Suns, three SGI Indy workstations, and an SGI Challenge network server, with ATM connectivity provided by a FORE ASX-100 switch. All workstations in the cluster have full video conferencing capability, and digital video capture, compression, and retrieval support. The cluster will provide a hardware platform for evaluation of both commercial and public domain collaboratory packages.

Two particularly interesting architectures at Cornell are the large (currently 64-node) IBM SP-1 and the 128-node Kendall Square KSR-1. The latter, with its innovative ALLCACHE design, provides a promising scalable shared memory architecture. We also have access to a wide variety of machines through our national collaborators at CRPC, the Army Research Center at Minnesota, the NSF Supercomputer Centers, and DoE laboratories. In particular, we have access to much larger configurations of the systems installed at NPAC. Our strategy is to develop and test concepts in-house and then use the national supercomputers to test scaling. CRPC's facilities include the Touchstone Delta at Caltech (528 nodes, soon to be upgraded to a Paragon), the 1024-node CM-5 at Los Alamos, and the large 512-node SP-1 at Argonne.

3.4.3 NYNET
The backbone of the network is composed of broadband ATM switches interconnected via SONET trunks. In the NYNET testbed, these trunks will range from OC-3 (155 Mbps) to OC-48 (2.488 Gbps) data rates. To provide unified network management and control capabilities across the public network, the switches are interfaced with the ATM Forum specified Public NNI (Node-to-Node Interface) and an OSI CMIP-based network management system is operated. The NYNET participants are provided access to the ATM backbone via two OC-3c SONET links per site. (The Syracuse Museum of Science and Technology will attach to the network via a single OC-3c link.) The purpose of providing two links per site is as follows:
% To provide each site with larger than OC-3c bandwidth prior to the availability of standards compliant OC-12c public network equipment,
% To permit a user to run experiments in which traffic runs through the network and back to his/her own site; this eases the development of protocols by permitting experimenters to observe and control both ends, and
% To test techniques of inverse multiplexing to achieve throughputs that are greater than any single link can provide.
The interface between the participant sites and the public network will conform to the ATM Forum specified UNI (User-to-Network Interface), and an SNMP-based customer network management system will be developed. This management system will allow the users to monitor and collect network configuration and traffic statistics, as well as providing limited end-to-end management and control capabilities.
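As a quick reference for the SONET rates quoted in this section, the sketch below derives the nominal OC-n line rates from the standard OC-1 rate of 51.84 Mbps and shows what the two OC-3c access links per site provide relative to a future OC-12c connection. Only the OC-1 base rate is brought in from outside the text; the rest is simple arithmetic included for illustration.

# Illustrative SONET line-rate arithmetic for the NYNET access links (Section 3.4.3).
# OC-1 = 51.84 Mbps is the standard SONET base rate; OC-n rates scale linearly with n.

OC1_MBPS = 51.84

def oc_rate_mbps(n):
    """Nominal line rate of an OC-n SONET link in Mbps."""
    return n * OC1_MBPS

if __name__ == "__main__":
    for n in (3, 12, 48):
        print(f"OC-{n:<2d} ~ {oc_rate_mbps(n):8.2f} Mbps")

    # Each NYNET participant site gets two OC-3c links (the Museum site gets one).
    two_oc3c = 2 * oc_rate_mbps(3)
    print(f"Two OC-3c access links ~ {two_oc3c:.2f} Mbps, "
          f"about half of one OC-12c ({oc_rate_mbps(12):.2f} Mbps)")

These figures are consistent with the 155 Mbps and 2.488 Gbps rates quoted above and make explicit why two OC-3c links give each site more than single-link OC-3c bandwidth while standards compliant OC-12c equipment is still pending.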
Interconnecting the New York upstate participants to the downstate participants (different LATAs) requires a NYNET interface to a third-party network. This can be accomplished by means of the ATM Forum specified B-ICI (Broadband Inter-Carrier Interface). The same interface will also be used to connect NYNET to the national computing environment network.

The NYNET public network provides permanent virtual circuit (PVC) cell relay service (CRS). In the NYNET PVC CRS, virtual circuits are provisioned based upon the peak user burst rate, which may be specified as large as the line access rate (OC-3 in this case). However, the gain due to statistical multiplexing and the bursty nature of data traffic permits many PVCs, each with the line rate bandwidth, to share a single link with minimal cell loss. Studies based upon Poisson traffic statistics indicate that an ATM statistical multiplexing gain of 2-4 can be achieved with cell loss rates as low as 10^-9. Each NYNET user link will be assigned enough PVCs to provide full mesh logical connectivity between all sites. Additionally, each traffic mode supported on a path may be provided its own PVC -- in this way HSM traffic would traverse PVCs allotted the full 155 Mbps while NSM traffic would traverse PVCs allotted no more than 45 Mbps.

At NYNET participant sites, there are three general local topologies:
% Direct, dedicated CRS access -- A port on the public ATM switch is dedicated to a single device such as a supercomputer, an MPP machine, or a video/image server. Currently, this type of access is not available with commercial products, but we expect that direct OC-3c SONET and HiPPI/OC-12c interfaces to high performance computers and servers will be available in 1994.
% Direct CRS access via local ATM networks -- An ATM LAN switch provides direct 100-150 Mbps ATM access to workstations via TAXI and OC-3c. This architecture allows legacy networks to gain access via routers using the LAN bridge interfaces (Ethernet, FDDI) available on many LAN ATM products.
% Indirect CRS access through routers -- Routers equipped with DS-3 (45 Mbps) HSSI interfaces can use the ATM DXI protocol to route legacy LAN traffic across the ATM network. This architecture requires an ATM DSU which provides the DXI-to-ATM protocol conversion. DSUs equipped with a concentrating (muxing) function permit several routers to share the OC-3c link.
Any particular NYNET participant site is likely to have a hybrid of these topologies, and the explosive growth of ATM products leads to continually changing topologies. Currently, NYNEX expects to partner with a long distance carrier to link upstate to downstate during calendar year 1994. Other links, such as to Boston, are also planned in the near future. Rome Laboratory links us to Defense network bitways which span the nation from California to Washington, D.C.

4.0 Schedule (VRI)

5.0 Program Organization
5.1 Overall Structure (VRI)
5.2 Management and Technical Team (VRI)
5.2.1 NPAC (SYR)
5.2.2 Vanguard (VRI)
5.2.3 Other Supporting Capabilities (University Support) (SYR)

Part II -- Contractor Statement of Work (VRI)

1.0 Objective
This project will investigate Collaborative Interaction and Visualization using HPCC technologies in the context of four applications. We will use the excellent infrastructure and expertise available at both NPAC and Rome Laboratory, including the high speed network NYNET, links to national networks, and NPAC's high performance multimedia parallel systems linked to NYNET.
The subcontractor Vanguard will ensure that the work is closely integrated with both commercial standards such as CORBA and Department of Defense application and interoperability needs.

2.0 Scope

3.0 Background

3.1 Guiding Principles and Theme

The project will be application driven but investigated in the context of one major theme or guiding principle, Collaboration and Visualization, which will guide the choice of technologies and the aspects of the applications to stress. Note that we will implement each application as a separate component linked into a single information system. As a further theme we will use Training, as this is also key to DoD and will ensure that our demonstrations and technologies can be transferred easily to operations.

3.2 Infrastructure

This work shall include the use of NYNET and the associated parallel computer infrastructure, as well as the information systems available to NPAC. These information systems include commercial offerings such as parallel databases (DB2, Oracle) as well as the Geographical and MultiMedia Information systems being developed at NPAC as enhancements to the common World Wide Web infrastructure. Further, this support shall include NYNET applications, enabling technology research, distributed computing systems, multimedia networking, and virtual reality. Syracuse shall also provide the use of NPAC's Multi-Media Laboratory for both demonstrations and system development.

3.3 Open Standards and Dual-Use

A guiding principle in this work will be optimizing the interoperability and dual-use capabilities of systems. This will be implemented by using commercial hardware and software where possible and by building other information storage, dissemination, and visualization capabilities as well-designed modules which can be integrated into ARPA, Rome, commercial, and other pervasive information infrastructure. Standards used will include CORBA (Common Object Request Broker Architecture), WWW (World Wide Web), and the scalable software standards such as MPI, HPF, and HPC++ developed by the national HPCC community (a small illustrative example of the message passing style implied by MPI appears after task 4.1.2 below).

4.0 Tasks

4.1 Application and Technology Roadmap Thrust I

This thrust provides the front-end assessment of technologies and applications which will be needed for the project. It defines the functionality needed in the application and technology Thrusts II and III. There are companion back-end tasks which take the identified Thrust II and III components and integrate them into demonstrations.

4.1.1 Technology Assessment

This involves the assessment and identification of technologies, such as World Wide Web protocols, massively parallel applications, virtual reality, and distributed computing and communication concepts, that have application to the development of modern C3 systems. This will be done recognizing the four target applications and the underlying themes of collaboration, visualization, and training.

4.1.2 C3 Concept Development

We will identify key C3 concepts and objectives that should be used to guide the development of technologies and application objects. This task will interact with 4.1.1 as we evaluate concepts that are consistent with identified technologies and application areas. Where possible, existing service requirements or technology needs analyses (such as those of the Air Force Scientific Advisory Board) shall be incorporated.
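Before turning to the application objects, here is the small illustrative example of the scalable message passing style implied by MPI, as promised in section 3.3. The mpi4py Python binding, the toy field, and the energy-sum kernel below are assumptions chosen for brevity; they are not project deliverables, only a sketch of how a simulation component can be partitioned across parallel nodes.

    # Illustrative MPI sketch (assumes the mpi4py binding is installed):
    # each rank owns a block of a 1D field, and a collective reduction
    # combines the per-node results into a global quantity.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_global = 1000000              # assumed problem size for illustration
    n_local = n_global // size      # each rank's share of the field
    field = np.full(n_local, rank, dtype=np.float64)

    local_energy = float(np.sum(field * field))
    total_energy = comm.allreduce(local_energy, op=MPI.SUM)

    if rank == 0:
        print("total energy across", size, "ranks:", total_energy)

Such a script would be launched with, for example, "mpiexec -n 8" (the script name and node count are of course arbitrary); the same structure scales from a workstation cluster to the larger parallel machines described in section 3.4 of Part I.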
4.2 Application Objects Thrust II

This thrust consists of the tasks that build the application component objects, which will be linked together according to principles identified in Thrust I and developed in the demonstration and integration Thrust IV.

4.2.1 Real-time Interactive Distributed Weather Information System

Here we will identify an existing weather simulation which we will integrate with the Geographical Information System to allow 3D navigation through a combined weather/terrain model, so that visibility and other weather-related issues can be explored. NPAC has experience with a tornado simulation built at an NSF center (Oklahoma) and with a NASA climate code. We will choose a suitable code in discussions with Rome Laboratory and the experts in the field.

4.2.2 Joint/Coalition Service C2 Information System

As discussed above, all the applications will be structured as services available under a Web-based multimedia information system. We will work with Rome Laboratory to identify a particular JWID-oriented C2 demonstration; the subcontractor Vanguard will play a lead role here. The base multimedia information system will allow access to real-time digitized video streams, based on technology work funded at NPAC by Rome Laboratory and on a collaboration with the Newhouse and Maxwell Schools at Syracuse University. Editing will use the leading-edge AVID nonlinear digital editor integrated with our system. All other information modalities -- text, image, and simulation -- will also be supported.

4.2.3 Electromagnetic Scattering Simulation System

We will use an existing computational electromagnetics code based on work NPAC is aware of at Rome Laboratory, at Syracuse Research Corporation, or in the University's Electrical Engineering department. This will be integrated into the GIS and VR display system with the help of world-class faculty in the computational electromagnetics (CEM) area at Syracuse University. The application will be implemented as a dynamic, time-dependent simulation.

4.2.4 Medical Collaboration and Visualization System

Here we will build on ongoing computational medicine activities on NYNET involving the SUNY Health Science Center, NPAC, the University's Physics department, and NYNEX. Three identified areas are collaborative telemedicine, VR-based fly-throughs of pathology images for advanced tumor identification, and the Visible Human. The latter is a database of more than 40 gigabytes prepared by the NLM (National Library of Medicine), containing male and female 3D body images at better than millimeter resolution. NYNET and parallel servers are particularly relevant for such huge datasets, and we will develop fully immersive VR visualizations initially aimed at training and demonstration.

4.3 Core Technologies Thrust III

This thrust develops and identifies the core technologies necessary for the application objects and demonstrations. The technology requirements will be identified in Thrust I, and in Thrust IV we integrate the technologies into demonstrations.

4.3.1 Virtual Reality including its integration with the Web

3D displays will be used for both immersive and augmented visualization. We will examine VRML as the natural interface, as it is an open standard built on SGI Inventor technology and is expected to become the "three-dimensional HTML" standard. We will implement volume visualization methods, integrated with the medical collaboratory systems, that will be capable of constructing, displaying, animating, and manipulating 3D visualizations of medical diagnostic data such as CAT scans.
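To make the volume visualization goal of task 4.3.1 concrete, the following hedged sketch shows the kinds of operations it implies for CAT scan data: extracting a single scan plane, a maximum intensity projection as a quick-look rendering, and block averaging to reduce resolution before shipping a dataset to a VR client. Python with NumPy and a synthetic random volume are assumptions made for illustration; the actual system would read real diagnostic imagery and drive a 3D display.

    import numpy as np

    # Synthetic stand-in for a stack of CAT scan slices (depth x height x width).
    depth, height, width = 64, 256, 256
    volume = np.random.rand(depth, height, width).astype(np.float32)

    def axial_slice(vol, k):
        """Return one axial slice (a single scan plane) for 2D display."""
        return vol[k]

    def maximum_intensity_projection(vol, axis=0):
        """Collapse the volume along one axis -- a common quick-look rendering."""
        return vol.max(axis=axis)

    def downsample(vol, factor=2):
        """Average factor-cubed blocks, e.g. before sending the volume to a VR client."""
        d, h, w = (s - s % factor for s in vol.shape)
        v = vol[:d, :h, :w].reshape(d // factor, factor, h // factor, factor, w // factor, factor)
        return v.mean(axis=(1, 3, 5))

    print(axial_slice(volume, 10).shape)               # (256, 256)
    print(maximum_intensity_projection(volume).shape)  # (256, 256)
    print(downsample(volume).shape)                    # (32, 128, 128)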
4.3.2 Compression and network management

This task will provide compression and network management technologies to support the transport of large images and animation (video) at high resolution. The very demanding VR and collaborative applications will stress network management and require the best compression. We will focus on methods with both high quality and fast encoding and decoding. The most promising technology, wavelets, will be explored and integrated into real-time systems meeting the stringent requirements of VR and medical diagnostics for both high-resolution still images and video. Wavelets appear to require about a quarter of the bandwidth of JPEG for images of comparable quality (an illustrative sketch of the underlying transform appears after task 4.3.6 below). We will also explore and integrate the most recent networking technologies, including ATM, ISDN, and ADSL, for image delivery, using novel network management techniques that ensure high network utilization through integration with ATM-level flow control mechanisms.

4.3.3 Collaboration technologies including simulated environments

Here we will build on the results of a Rome Laboratory-funded project evaluating several collaboration software and hardware products. We will concentrate on interoperability issues among different industrial standards and products, and on broadening the scope of the applications that can be shared over the network by multiple users. Further, we note that lower-bandwidth collaboration may run over ISDN, which is integrated into NYNET at NPAC. At the high end, we will evaluate an Argonne project that uses an SP-2 to support full 3D virtual environments based on the MOO paradigm.

4.3.4 Geographical Information Systems (GIS)

This task will deploy a GIS with data fusion, planning, and other overlays and a 3D VR interface as needed to support the applications. We will use a GIS as the basic spatial interface supporting full 3D terrain navigation in immersive or augmented VR modes. This will build on State-funded work for the Living Textbook project. An operational military GIS will need many spatial reasoning, image processing, and data fusion overlays; we will select some typical examples appropriate to support the chosen demonstrations. We request Rome Laboratory's support in obtaining DMA digital terrain and elevation data, as it is of higher resolution than that available through commercial channels at reasonable price.

4.3.5 Parallel and Distributed Multimedia Information systems (Databases)

Parallel, high-performance data repositories will be used in all information systems built in the current project. In particular, we will use parallel database technology integrated with Web technology to ensure rapid access to and retrieval of all kinds of medical records, including medical imagery. We will investigate the integration of ARPA-funded concept-based text retrieval software from the University's School of Information Studies (IST), which will allow the commander (decision maker) to analyze both text and text-indexed video more effectively.

4.3.6 World Wide Web technologies including VR and high resolution video support

The proposed applications represent the very high end of existing Web video services, but we expect that new Web standards such as VRML (the Virtual Reality Modeling Language for common 3D data structures) and Java (supporting general client applications), combined with parallel servers, will allow us to use a Web-based approach for this project. This task will investigate and integrate these emerging Web technologies and standards as appropriate.
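As an illustration of the wavelet compression idea in task 4.3.2 above, the sketch below applies one level of the 2D Haar transform (the simplest wavelet), zeroes the small detail coefficients, and reconstructs the image; the fraction of coefficients kept is a rough proxy for the bandwidth saving. Python with NumPy, the Haar basis, and the threshold value are assumptions made for clarity only; the project would evaluate more sophisticated wavelet codecs against JPEG.

    import numpy as np

    def haar2d(x):
        """One level of the 2D Haar transform: approximation plus three detail bands."""
        a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0
        h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4.0
        v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4.0
        d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4.0
        return a, h, v, d

    def inverse_haar2d(a, h, v, d):
        """Exact inverse of haar2d."""
        x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
        x[0::2, 0::2] = a + h + v + d
        x[0::2, 1::2] = a - h + v - d
        x[1::2, 0::2] = a + h - v - d
        x[1::2, 1::2] = a - h - v + d
        return x

    image = np.random.rand(256, 256)      # stand-in for a high-resolution still frame
    a, h, v, d = haar2d(image)

    threshold = 0.05                      # assumed cut-off; small details are discarded
    h, v, d = (np.where(np.abs(b) < threshold, 0.0, b) for b in (h, v, d))

    kept = sum(int(np.count_nonzero(b)) for b in (a, h, v, d))
    print("coefficients kept:", kept, "of", image.size)

    approx = inverse_haar2d(a, h, v, d)
    print("max reconstruction error:", float(np.abs(approx - image).max()))

In practice, several transform levels, smoother wavelet bases, and entropy coding of the surviving coefficients produce the large gains over JPEG; this fragment only shows the transform-threshold-reconstruct cycle on which such codecs are based.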
4.3.7 Parallel and Distributed Computing including HPCC and CORBA issues

We will use parallel and distributed computing to support simulations using scalable software systems. We will also work with the national parallel C++ community to integrate CORBA into the HPCC framework.

4.4 Infrastructure and Systems Integration Thrust IV

The infrastructure thrust includes the upgrades necessary for this project and the integration necessary to support application and technology development as well as deployment in demonstrations.

4.4.1 ATM and ISDN Networking Infrastructure including NYNET

This is in place between NPAC and Rome Laboratory, but we would benefit greatly from the bridging of NYNET to national networks at Rome and from an extension of ATM capability to the SUNY Health Science Center and the several collaborating University departments.

4.4.2 Virtual Reality displays as clients on network at Rome (high-end) and NPAC

Here we require an expansion of the University's capability; we will build a modest SGI-based system to prototype and demonstrate applications that can then use Rome Laboratory equipment for their full exploration.

4.4.3 HPCC information and simulation servers

NPAC already has modest-sized parallel computers, including an IBM SP-2, an nCUBE2, a CM-5, and an Intel Touchstone. We hope to coordinate and collaborate with the nCUBE activity at Rome Laboratory, where, for instance, relevant speech recognition work is being performed; this can be used to produce good indexing of video material.

4.5 Demonstration and Systems Integration Thrust V

4.5.1 Technical Integration

This is the responsibility of NPAC and involves the technical integration of infrastructure, technology, and application objects to deliver the functionality identified in tasks 4.5.2 and 4.5.3.

4.5.2 Assessment Planning

We will develop plans and approaches for the assessment of technology and application object needs in the context of realistic C3 applications and missions. This will identify the structure of the demonstrations described in tasks 4.5.4 and 4.5.5. Where possible, existing demonstration programs, including the JWID exercises, shall be considered as candidates.

4.5.3 Assessment Performance

This task involves preparation of demonstration programs, including the organization of participants, identification of specific scenarios to be demonstrated, and collection of any necessary background data or support personnel. If the demonstration is part of a larger activity such as JWID, our plans will be coordinated with the plans and requirements of the appropriate planning organizations.

4.5.4 NYNET Demonstrations

The project will conduct NYNET demonstrations on an ongoing basis, especially between Rome Laboratory and NPAC, with formal major events every six months. We also expect an ongoing set of NYNET demonstrations including those involving the Living Textbook, and we will feature the results of this project whenever possible. This could require mobile, lower-end VR equipment for demonstrations that are not at NPAC, Cornell, or Rome.

4.5.5 National Demonstrations

The project will be involved in national demonstrations such as JWID, using ATM connections through Rome Laboratory to LESN (Leading Edge Services Network). These will be identified and planned in tasks 4.5.2 and 4.5.3.

4.6 Reporting and Management

4.6.1 Technical Reporting

The demonstrations, applications, and technologies will be reported in forms appropriate to both their technical and capability impacts.
We will identify the products and forums that will allow this program to have the greatest value to the C3 community. The monthly reports to Rome Laboratory are included in this task.

4.6.2 DoD and Commercial Developer Liaison

Liaison will be established and maintained with key parts of DoD, including technology offices such as Rome Laboratory and ARPA, as well as with the DoD C3 application and commercial development communities. The latter will be performed as part of NPAC's InfoMall technology transfer organization.

4.6.3 Program Management

The program management task will be split in two, with Vanguard responsible for functionality and demonstration management, and NPAC responsible for management of technology and application component development as well as infrastructure integration.

Geoffrey Fox
gcf@npac.syr.edu, http://www.npac.syr.edu
Phone 3154432163 (NPAC central 3154431723), Fax 3154434741

------- End of Forwarded Message