Electromagnetic Scattering

Introduction and Problem Description

Electromagnetic scattering (EMS) simulation is an important, computationally intensive application within the field of electromagnetics. Advances in high performance computing and communication (HPCC) and data visualization environments (DVE) provide new opportunities to visualize, in real time, simulation problems such as EMS that require significant computational resources. Scientific visualization has traditionally been carried out interactively on workstations, or in post-processing or batch mode on supercomputers. With advances in high performance computing systems and networking technologies, interactive visualization in a distributed environment becomes feasible. In a remote visualization environment, data, I/O, computation and user interaction are physically distributed over high-speed networks to achieve high performance and optimal use of the various resources required by the application. Seamless integration of high performance computing systems with graphics workstations and traditional scientific visualization is not only feasible, but will become common practice for real-time application systems.

In this work, an integrated interactive visualization environment was created for an EMS simulation, coupling a graphical user interface (GUI) for runtime input of simulation parameters and 3D rendering output on a graphical workstation with computational modules running on a parallel supercomputer and two workstations. The Application Visualization System (AVS) was used as the integrating software to facilitate both networking and scientific data visualization. This interactive visualization environment can be used by remote and distributed users via the NYNET wide area network, which provides sufficient bandwidth to support run-time simulation and steering of model parameters.

[Figure 7: The electromagnetic scattering problem and its physical parameters]

Electromagnetic scattering is a widely encountered problem in electromagnetics, with important industrial applications such as microwave equipment, radar, antennas, aviation, and electromagnetic compatibility design. Figure 7 illustrates the EMS problem we are modeling. Above an infinite conducting plane, there is an incident EM field in free space. Two slots of equal width in the conducting plane are interconnected by a microwave network behind the plane. The microwave network represents the load of the waveguides, for example a microwave receiver. The incident EM field penetrates the two slots, which are filled with insulating materials such as air or oil. Coupled through the microwave network, the EM fields in the two slots interact with each other, creating two equivalent magnetic current sources in the slots. A new scattered EM field is then formed above the slots. We simulate this physical phenomenon and calculate the strength of the scattered EM field under various physical circumstances. The presence of the two slots and the microwave load in this application requires simulation models with high performance computation and communication. Visualization is very important in helping scientists understand this problem under various physical conditions.
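The equivalent-source picture described above can be made concrete with the standard aperture equivalence relation. The formulas below are the textbook form (sign and image-theory conventions vary between texts) and are given only as a sketch of the idea, not necessarily the exact formulation used in the simulation model:

\[
\mathbf{M}_s \;=\; -\,\hat{n}\times\mathbf{E}_a \qquad \text{over each slot aperture,}
\]

where \(\mathbf{E}_a\) is the tangential electric field in the slot aperture and \(\hat{n}\) is the unit normal of the conducting plane. With the slots closed by a conductor and replaced by these equivalent magnetic currents, the total field above the plane is the superposition of the incident field, the field reflected by the complete conducting plane, and the field radiated by the equivalent sources:

\[
\mathbf{E}^{\mathrm{total}} \;=\; \mathbf{E}^{\mathrm{inc}} + \mathbf{E}^{\mathrm{ref}} + \mathbf{E}^{\mathrm{scat}}(\mathbf{M}_s).
\]

The coupling of the two slots through the microwave load enters through the boundary conditions that determine \(\mathbf{E}_a\) in each slot, which is what makes the computation of \(\mathbf{M}_s\) expensive.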
In previous work, data parallel and message passing algorithms for this application were developed to run efficiently on massively parallel SIMD machines such as the Connection Machine CM-2 and the DECmpp-12000, and on MIMD machines such as the Connection Machine CM-5 and the iPSC/860. The data parallel algorithms run approximately 400 times faster than sequential versions on a high-speed workstation [9]. Parallel models on high performance systems provide a unique opportunity to interactively visualize the EMS simulation in real time; the problem requires simulation-cycle response times that are not achievable on conventional hardware. Figure 7 also shows the physical parameters of the electromagnetic scattering problem.

System Configuration and Integration

[Figure 8: System configuration, module components, graphical user interface, and timing data (Table 1)]

Figure 8 illustrates the system configuration and the module components distributed over a network connecting three high-end workstations and a Connection Machine CM-5 supercomputer. The network is a 10 Mbit/s Ethernet-based local area network. The commercially available AVS software is used to provide the sophisticated 3D data visualization and system control functionality required by the simulation. We use AVS to facilitate high-level networking and data transfer among visualization and computational modules on different machines in the system. AVS provides a data-channel abstraction that transparently handles type conversion and module connectivity. The software is optimized for data movement using techniques such as shared-memory message passing among modules on the same machine; message passing in AVS occurs at a high level of data abstraction. This approach helps to make optimal use of both the high performance computing resources and the rendering capabilities of the local graphical workstation. The transparent networking capabilities of AVS open up possibilities for visualization far beyond traditional graphics capabilities.

The local machine in our system is an IBM RS/6000 with a 24-bit color GTO graphics adapter. An AVS coroutine module (written in C) on the local machine serves as the graphical input and system control interface, monitoring and collecting the user's runtime interaction with the simulation through the keyboard, mouse and other I/O devices. The AVS kernel also runs on the local machine, coordinating data flows and control flows among the (remote) AVS modules in the network.

The computationally intensive modules of this application are distributed to a CM-5, a MIMD supercomputer configured with 32 processing nodes at NPAC. Each processing node (PN) of the CM-5 consists of a SPARC processor for control and non-vector computation, four vector units for numerical computation, and 32 MB of RAM. It also includes a network interface chip that gives the node access to the CM-5 internal Data Network and Control Network. The two internal networks connect all the PNs with a control processor (CP), which runs a custom version of SunOS on a SPARC host. Two Sun SPARC workstations are used in our distributed visualization environment to run the computational modules with modest communication requirements. All modules other than those on the local machine are implemented as AVS remote modules; their input/output ports are defined through the AVS libraries for receiving/sending data from/to other (remote) modules via socket connections.
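The skeleton below is a minimal sketch, in C, of what one of the remote computational modules might look like. The module, port and parameter names, the field type strings, and the parameter ranges are illustrative assumptions, not the actual code of this system; the calls used (AVSset_module_name, AVScreate_input_port, AVScreate_output_port, AVSadd_float_parameter, AVSset_compute_proc, AVSmodule_from_desc, AVSinit_modules) follow the AVS 5 C module API and should be checked against the AVS Module Writer's Guide for the installed version.

/* Sketch of a subroutine-style AVS remote module for the EMS solver.
 * Ports and parameters are registered in a description function; the
 * AVS kernel then invokes the compute procedure whenever an input or
 * a steered parameter changes.
 */
#include <avs/avs.h>
#include <avs/field.h>

static int ems_compute();   /* forward declaration of the compute routine */

/* Description function: registers the module, its ports and parameters. */
static int ems_module_desc()
{
    AVSset_module_name("ems solver", MODULE_FILTER);

    /* Input: simulation parameters packed into a 1D float field
       (hypothetical layout). */
    AVScreate_input_port("simulation parameters", "field 1D float", REQUIRED);

    /* Output: scattered-field strength on a 3D grid for the renderer. */
    AVScreate_output_port("scattered field", "field 3D scalar float");

    /* Example runtime parameter steered from a dial widget on the GUI. */
    AVSadd_float_parameter("slot width", 1.0, 0.1, 10.0);

    AVSset_compute_proc(ems_compute);
    return 1;
}

/* Compute function: called by the AVS kernel on each simulation cycle;
   the numerical work would be dispatched to the CM-5 code from here. */
static int ems_compute(AVSfield_float *params, AVSfield_float **scattered,
                       float slot_width)
{
    /* ... run the EMS solver and fill *scattered ... */
    return 1;
}

/* Entry point used when the module is built as a separate executable
   and started as an AVS remote module. */
void AVSinit_modules()
{
    AVSmodule_from_desc(ems_module_desc);
}

When such a module runs on one of the remote SPARC workstations, the AVS kernel on the local RS/6000 handles the socket connections and data conversion between its ports and those of the other modules, which is the "transparent networking" the text refers to.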
This configuration allows the interrupt-driven user interface input mechanisms and the rendering operations to be relegated to the graphical workstation, while the computationally intensive components run on the CM-5 coupled with the two workstations. This distributed simulation environment, implemented in AVS, provides a transparent mechanism for using distributed computing resources along with a sophisticated user interface component that permits a variety of interactive, application-specific inputs. The graphical user interface shown in Figure 8 includes a main control panel, three individual input panels, and a 3D rendering window. The main control panel provides user parameter input and simulation control at runtime. It contains seven dial widgets representing simulation parameters used by all computing modules on the three remote machines, and a control button for starting a new simulation cycle (see lower left in Figure 8).

Performance Analysis

Our experiments show that under a typical working environment (with only 0.5 Mbit/s of the Ethernet's 10 Mbit/s capacity available), a complete simulation cycle takes about 8 seconds. This response time is quite satisfactory for this application. Table 1 in Figure 8 lists timing data for the major system components. For comparison, timings of sequential implementations of the two parallel modules on a SUN4 workstation are also given in the table.

Conclusion

The performance-limiting factors in this system are the sequential rendering operations on the local machine, and the high-latency data transfer over the local area network due to multiple communication protocol layers. We focus here on the feasibility of applying a high-level distributed programming environment to a real application problem that requires both sophisticated 3D data visualization and high performance computing.