PET Strategic Plan for Visualization: Emerging Technologies

A number of available or emerging technologies (both hardware and software) will enable the visualization environment needed by DoD researchers. These trends and technologies are briefly described below, along with notes on how they will influence the DoD visualization environment.

  1. High-speed networks

    DREN will support more interactive access to MSRC resources from remote locations and faster data transfer among DREN-connected sites. We anticipate that greater connectivity will increase researcher demand for remote access to fully interactive 3D visualization and for collaborative visualization sessions among groups of distributed researchers. DREN/vBNS interconnects may foster joint work between defense and academic researchers.

  2. Graphics hardware

    Desktop Hardware

    The distinction between traditional Unix graphics workstations and Windows-based PCs will continue to blur. Traditional vendors of Unix-based graphics boxes, such as Hewlett-Packard, already produce Windows PCs; Silicon Graphics Inc. will release its Windows machine later this year. Among traditional PC vendors (Dell, Compaq, etc.), 3D graphics capabilities are increasing rapidly, driven by high-speed Pentium Pro processors and a new crop of graphics accelerator cards. This trend will continue; we will see new multi-engine graphics cards for PCs this year. (See the PET tracking report at ??). Screen resolutions will increase, and many displays will be capable of stereoscopic images. Increased graphics capability at reduced cost will raise DoD researcher demand for fully interactive 3D visualization at the desktop. Stereo-capable systems, HDTV resolution, and wide field-of-view monitors will make desktop VR possible for appropriate visualization applications.

    Visual Supercomputing Hardware

    The pairing of graphics vendors with high-performance computing vendors (e.g., SGI and Cray, HP and Convex) is giving rise to new architectures that support real-time visualization of a running simulation. The SGI/Cray Origin2000 can incorporate multiple SGI InfiniteReality (IR) graphics engines. Software support for using the IR for high-speed drawing, frame-buffer capture, and image transmission is under development. HP/Convex had, but abandoned, plans to interface the PixelFlow graphics engine with their high-performance Convex systems. Integrating high-performance visualization and supercomputing in one box moves us closer to one form of Visual Supercomputing.
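
    As a simple illustration of the frame-buffer capture step (not the SGI software under development), the C++ fragment below uses standard OpenGL calls to read the rendered image back from the graphics engine so that it could be compressed and shipped to a remote display. The helper name captureFrame and the RGB format are assumptions for this sketch, and a current OpenGL context is assumed.

      // Minimal illustration of frame-buffer capture with standard OpenGL calls.
      // Assumes a valid OpenGL context is already current; the transport of the
      // captured image to a remote display is omitted.
      #include <GL/gl.h>
      #include <vector>

      // Hypothetical helper: read back the current frame as packed RGB bytes.
      std::vector<unsigned char> captureFrame(int width, int height)
      {
          std::vector<unsigned char> pixels(width * height * 3);
          glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows
          glReadBuffer(GL_FRONT);                // capture the displayed image
          glReadPixels(0, 0, width, height,
                       GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);
          return pixels;                         // ready to compress and transmit
      }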

  3. Virtual environments, multimodal input devices

    Virtual environments, combining stereoscopic imagery, immersion, sonification, force feedback, and gestural and speech interfaces, provide new opportunities for data exploration. Through PET, each of the MSRCs has acquired an ImmersaDesk (is this true at NAVO?). We will augment these VR installations with appropriate multimodal interface technologies (such as force feedback, speech, and sonification) as they become usable. Note that each characteristic of VR (e.g., stereo imagery or spoken interfaces) can also be used independently in desktop visualization tools.

  4. Automated grid generation, adaptive mesh techniques

    Problem setup will become easier with advances in automated grid generation, and adaptive mesh techniques will adjust grids during the course of a simulation run. The use of adaptive mesh algorithms will present new challenges for visualization, since dynamic grids are not yet supported by visualization packages.

  5. Interoperable, multidisciplinary codes

    Efforts are underway among various DoD teams to couple codes that may come from different disciplines or involve different grid structures or solution strategies (e.g., CTH and DYNA3D, or ADCIRC and CEQUAL-IQM). Coupled codes will create a need for more user interaction with a running simulation, since the researcher might need to shape the interaction between the two codes.

  6. Co-processing software systems

    Co-processing systems support real-time monitoring or interactive steering of a simulation as it executes. Co-processing provides timely information on the course of the simulation and can enable the visualization of very large data sets. Indeed, for large-data problems, co-processing might be the only practical way to visually analyze the data, since post-processing such large data sets is, at best, a daunting task. DICE from the Army Research Lab MSRC, Cumulvs from Oak Ridge National Lab, pV3 from MIT, and Utah's SCIRun system might be appropriate for the DoD researcher. Funded in part by PET, DICE has been integrated with CHSSI codes for CSM, CCM, and CFD and has been used at the ARL, ASC, and WES MSRCs. Cumulvs will be used in work at WES and ASC to enable real-time monitoring for particular applications. PET will evaluate early experience with these systems, seeking to identify which system might be useful for which types of problems. We will also extend the visualization capabilities of each system and adapt each to be usable by remote researchers and to support collaborative visualization. A generic sketch of the co-processing pattern appears below.
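
    The following C++ fragment is a hedged, generic sketch of the co-processing pattern, not the interface of DICE, Cumulvs, pV3, or SCIRun: a solver loop periodically publishes a snapshot of its field data for a visualization consumer while the simulation continues to run. The routine names and snapshot file naming are hypothetical.

      // Generic sketch of the co-processing pattern: the simulation periodically
      // hands a snapshot of its field to a visualization consumer while it runs.
      // advanceOneStep, publishSnapshot, and the snapshot file naming are
      // hypothetical; a real co-processing system would push data over sockets
      // or shared memory and accept steering commands in return.
      #include <cstdio>
      #include <vector>

      void advanceOneStep(std::vector<double>& field)
      {
          // ... solver code would update the field here ...
          (void)field;
      }

      void publishSnapshot(const std::vector<double>& field, int step)
      {
          char name[64];
          std::sprintf(name, "snapshot_%06d.bin", step);
          std::FILE* f = std::fopen(name, "wb");
          if (f) {
              std::fwrite(&field[0], sizeof(double), field.size(), f);
              std::fclose(f);
          }
      }

      int main()
      {
          const int nSteps = 1000, publishEvery = 50;
          std::vector<double> field(100000, 0.0);

          for (int step = 0; step < nSteps; ++step) {
              advanceOneStep(field);
              if (step % publishEvery == 0)      // co-processing hook
                  publishSnapshot(field, step);
          }
          return 0;
      }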

  7. Scalable visualization environments

    Parallel Visualization Environments

    Ideally, visualization environments offer analysis capabilities suitable for algorithm development and testing and can also scale to handle production runs with very large data sets. Parallel and out-of-core implementations of visualization algorithms are needed (a small sketch of the out-of-core approach appears below). The academic community has invested considerable effort in parallel implementations of volume rendering algorithms. The pV3 software system from Haimes at MIT is a more complete system, offering parallel implementations embedded within a co-processing system for real-time visualization. On the hardware side, both Unix workstations and PCs are being clustered and used for parallel applications, including visualization.
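
    The out-of-core idea can be sketched briefly: rather than loading an entire data set into memory, the visualization code streams it from disk in fixed-size slabs and processes each slab independently. The C++ fragment below is illustrative only; the file name, data layout, and per-slab analysis are assumptions.

      // Out-of-core sketch: stream a large scalar data set from disk in fixed-size
      // slabs instead of loading it all into memory.  The file name, data layout,
      // and the per-slab analysis (a simple data range) are illustrative only.
      #include <cstddef>
      #include <cstdio>
      #include <vector>

      int main()
      {
          const std::size_t slabValues = 1 << 20;    // about one million doubles per slab
          std::vector<double> slab(slabValues);
          double globalMin = 1e300, globalMax = -1e300;

          std::FILE* f = std::fopen("large_volume.raw", "rb");
          if (!f) return 1;

          std::size_t n;
          while ((n = std::fread(&slab[0], sizeof(double), slabValues, f)) > 0) {
              // Per-slab work: an isosurface or resampling pass would fit the
              // same pattern as this simple min/max scan.
              for (std::size_t i = 0; i < n; ++i) {
                  if (slab[i] < globalMin) globalMin = slab[i];
                  if (slab[i] > globalMax) globalMax = slab[i];
              }
          }
          std::fclose(f);
          std::printf("data range: [%g, %g]\n", globalMin, globalMax);
          return 0;
      }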

    Multiresolution Algorithms

    There is considerable activity in the visualization research community in the area of multiresolution algorithms. Researchers are experimenting with wavelet methods as well as multiresolution geometric representations; a simple example of one wavelet step appears below.
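
    To make the wavelet approach concrete, the C++ fragment below performs one level of the simplest wavelet transform (the Haar transform) on a one-dimensional signal: pairwise averages yield a half-resolution approximation, and pairwise differences yield the detail coefficients needed to reconstruct the original exactly. This is a textbook illustration, not any particular group's algorithm.

      // One level of the Haar wavelet transform on a 1D signal of even length:
      // pairwise averages give a half-resolution approximation, pairwise
      // differences give the detail coefficients needed for exact reconstruction.
      // Repeating the step on the averages builds a multiresolution hierarchy.
      #include <cstddef>
      #include <cstdio>
      #include <vector>

      void haarStep(const std::vector<double>& x,
                    std::vector<double>& average, std::vector<double>& detail)
      {
          const std::size_t half = x.size() / 2;
          average.resize(half);
          detail.resize(half);
          for (std::size_t i = 0; i < half; ++i) {
              average[i] = 0.5 * (x[2 * i] + x[2 * i + 1]);
              detail[i]  = 0.5 * (x[2 * i] - x[2 * i + 1]);
              // Reconstruction: x[2i]   = average[i] + detail[i],
              //                 x[2i+1] = average[i] - detail[i].
          }
      }

      int main()
      {
          double raw[] = { 4, 6, 10, 12, 8, 6, 5, 3 };
          std::vector<double> signal(raw, raw + 8), a, d;
          haarStep(signal, a, d);
          for (std::size_t i = 0; i < a.size(); ++i)
              std::printf("average %g  detail %g\n", a[i], d[i]);
          return 0;
      }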

  8. Web-based and/or multimedia visualization support

    The evolution of the Web and other multimedia delivery techniques has yielded new technologies that can support collaborative visualization and visualization among remote users. VRML can be appropriate for some applications (a small example of exporting geometry to VRML appears below). Java can be used to write platform-independent visualization tools and distribute them dynamically over the Web. The GIF and MPEG formats can deliver visualization results to virtually any desktop, though they are limited to 2D images and animations. Streaming video and audio are newly available and can be useful for sharing visualizations, whether with colleagues for collaboration or with the general public to publicize a research effort. At least one commercial effort is aimed at streaming geometry, sending progressively higher-resolution representations to the desktop.
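
    As a concrete example of Web delivery of 3D geometry, the C++ fragment below writes a minimal VRML 2.0 file containing a single colored triangle, the kind of file a visualization tool could generate for viewing in any VRML-capable browser. The file name and geometry are illustrative only.

      // Write a minimal VRML 2.0 (.wrl) file containing a single colored triangle.
      // A visualization tool could emit its extracted surfaces this way for viewing
      // in any VRML-capable Web browser; the file name and geometry are illustrative.
      #include <cstdio>

      int main()
      {
          std::FILE* f = std::fopen("triangle.wrl", "w");
          if (!f) return 1;

          std::fprintf(f, "#VRML V2.0 utf8\n");
          std::fprintf(f, "Shape {\n");
          std::fprintf(f, "  appearance Appearance {\n");
          std::fprintf(f, "    material Material { diffuseColor 0.2 0.6 1.0 }\n");
          std::fprintf(f, "  }\n");
          std::fprintf(f, "  geometry IndexedFaceSet {\n");
          std::fprintf(f, "    coord Coordinate { point [ 0 0 0, 1 0 0, 0.5 1 0 ] }\n");
          std::fprintf(f, "    coordIndex [ 0, 1, 2, -1 ]\n");
          std::fprintf(f, "  }\n");
          std::fprintf(f, "}\n");

          std::fclose(f);
          return 0;
      }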

  9. Collaboration environments

    End-user products for collaboration are numerous: PictureTel, Microsoft NetMeeting, Lotus Notes, and the MBONE tools are just a few of the tools that support conferencing. Building new collaboration tools to support 3D collaborative visualization is facilitated by collaboration "middleware". Among academic efforts, Tango (Syracuse) and Habanero (NCSA) are candidates for middleware support. PlaceWare, from a Xerox PARC spin-off, offers similar support for the application builder. Other emerging technologies that can affect this arena include CORBA (Common Object Request Broker Architecture) and the Java RMI (Remote Method Invocation) capability. Work is needed to evaluate these technologies, experiment with prototypes, define user requirements, and develop and deploy solutions.

  10. Cross-platform visualization libraries

    Visualization libraries emerging from academic and corporate research groups have most often been directed at Unix platforms, particularly SGI graphics workstations. This is changing. For example, the Visualization Toolkit (vtk) from GE Corporate Research runs on both Unix and Windows machines (a minimal vtk pipeline is sketched below). In principle, Java3D will provide a consistent cross-platform environment for graphics development. SGI has released an OpenGL library for Windows and appears committed to a cross-platform strategy for new graphics products such as OpenGL++. Some SGI products (their VRML browsers) have been released for Windows machines even before they were available for Unix. On the user interface side, Java again offers hope for cross-platform development, and Tcl/Tk is maturing and appears to have a secure future. The emergence of cross-platform graphics, visualization, and user interface libraries will ease the development of tools for those MSRC users, both local and remote, who work on Windows machines. It will also facilitate the development of collaborative tools, since the same application could be distributed to all members of a mixed-platform research team.
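
    To show what a cross-platform visualization pipeline looks like in practice, the C++ fragment below is a minimal sketch using vtk's classic interface of this era (source, mapper, actor, renderer, render window); the same code builds on Unix and Windows. The cone source is merely a stand-in for real simulation data.

      // Minimal vtk pipeline using the classic C++ interface:
      // source -> mapper -> actor -> renderer -> render window.
      #include "vtkConeSource.h"
      #include "vtkPolyDataMapper.h"
      #include "vtkActor.h"
      #include "vtkRenderer.h"
      #include "vtkRenderWindow.h"
      #include "vtkRenderWindowInteractor.h"

      int main()
      {
          vtkConeSource* cone = vtkConeSource::New();
          cone->SetResolution(32);

          vtkPolyDataMapper* mapper = vtkPolyDataMapper::New();
          mapper->SetInput(cone->GetOutput());       // classic pipeline connection

          vtkActor* actor = vtkActor::New();
          actor->SetMapper(mapper);

          vtkRenderer* renderer = vtkRenderer::New();
          renderer->AddActor(actor);

          vtkRenderWindow* window = vtkRenderWindow::New();
          window->AddRenderer(renderer);
          window->SetSize(400, 400);

          vtkRenderWindowInteractor* interactor = vtkRenderWindowInteractor::New();
          interactor->SetRenderWindow(window);

          window->Render();
          interactor->Start();                       // hand control to the interactor

          // vtk of this era uses explicit reference counting.
          interactor->Delete();
          window->Delete();
          renderer->Delete();
          actor->Delete();
          mapper->Delete();
          cone->Delete();
          return 0;
      }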