next up previous
Next: Monte Carlo Methods Up: Machine and Problem Previous: Computational Chemistry and

Computational Fluid Dynamics and Manufacturing (Applications 1, 2, 3, 4, and 32)

 

CFD (Computational Fluid Dynamics) has been a major motivator for much algorithm and software work in HPCC, and indeed extensions of HPF have largely been based on CFD (or similar partial differential equation based applications) and molecular dynamics [Bogucz:94a], [Choudhary:92d;94c], [Dincer:95b],

[Goil:94a;95a], [Hawick:95a;95b], [HPF:94a]. Partial differential equations can be quite straightforward to solve on parallel machines if one uses regular grids, such as those coming from the simplest finite difference equations. However, modern numerical methods use either finite elements or a refinement strategy for finite elements, which gives rise to irregular meshes. Approaches such as domain decomposition and multigrid also give rise to complex data structures. From a Fortran programmer's point of view, simple finite differences can be well described by Fortran array data structures. Parallelization of such applications is correspondingly well suited to the current HPF language, which is centered on decomposing arrays. All the more advanced partial differential equation schemes naturally need somewhat more sophisticated data structures than simple arrays, including arrays of pointers, linked lists, nested arrays, and complex trees. The latter are also seen in fast multipole particle dynamics problems, as well as in fully adaptive PDE's [Edelsohn:91b]. Some excellent methods, such as the Berger-Oliger adaptive mesh refinement [Berger:84a], require modest HPF extensions, as we have shown in our Grand Challenge work on colliding black holes [Haupt:95a]. However, as Saltz's group has shown in a set of pioneering projects [HPF:94a], many important PDE methods require nontrivial HPF language extensions, as well as sophisticated runtime support, such as the PARTI [Saltz:91b] and CHAOS [Edjali:95a], [Hwang:94a], [Ponnusamy:93c;94b] systems. The needed language support can be thought of as expressing the problem architecture (the computational graph, as in Figure 3(a)), which is only implicitly defined by the standard (Fortran) code. Correctly written, this vanilla Fortran implies all the information needed for efficient parallelism.
However, this information is realized in terms of the values of pointers and cannot be recognized at compile time for either static or compiler-generated dynamic runtime parallelism. This fundamental problem is of course why Fortran is a more successful parallel language than C, as the latter naturally uses pointer constructs that obscure the problem architecture even more. The runtime support for PDE's must cope with irregular and hierarchical meshes and provide the dynamic alignment, decomposition, and communications optimization that HPF1 provides for arrays.
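To make the contrast concrete, here is a minimal Python sketch (not the actual API of HPF, PARTI, or CHAOS; the function names and the toy one-dimensional setting are illustrative assumptions) of why regular grids suit compile-time array distribution while irregular meshes need runtime support:

```python
def jacobi_step(u):
    """Regular 1-D finite-difference sweep.  The neighbor indices i-1 and
    i+1 are affine in the loop variable, so an HPF-style compiler can
    block-distribute u and derive the boundary exchange at compile time."""
    v = list(u)
    for i in range(1, len(u) - 1):
        v[i] = 0.5 * (u[i - 1] + u[i + 1])
    return v


def inspector(edges, lo, hi):
    """Irregular mesh: the neighbor indices live in the data (an edge
    list), so they are known only at runtime.  The 'inspector' scans the
    indirection once and records which off-processor node values the
    owner of nodes [lo, hi) must fetch; this communication schedule is
    then reused by every subsequent sweep while the mesh is unchanged,
    which is the idea behind PARTI/CHAOS-style runtime support."""
    return sorted({j for i, j in edges
                   if lo <= i < hi and not (lo <= j < hi)})
```

The point is that the communication pattern of `jacobi_step` is visible in its index expressions, while `inspector` must execute part of the program (traverse the edge list) before any communication can be planned.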

Now, let us consider manufacturing as a major industrial application of PDE's. Here, HPCC offers an important opportunity to build futuristic manufacturing systems allowing customizable products with integrated design (conceptual and detailed), manufacturing process, sales, and support. This scenario---sometimes called agile manufacturing---implies other major thrusts, including concurrent engineering and multidisciplinary analysis and design. As an example, NPAC is working under NASA sponsorship with the MADIC (Multidisciplinary Analysis and Design Industry Consortium) collaboration involving Rockwell, Northrop Grumman Vought, McDonnell Douglas, General Electric, General Motors, and Georgia Tech. We are in particular involved in establishing for this NASA project the NII requirements for a future concurrent engineering concept called ASOP (Affordable Systems Optimization Process). Aircraft are now built by multi-company collaborations of international scope; such a virtual corporation needs collaborative and networked software engineering and workflow support. Configuration management is a critical need. ASOP links a range of disciplines (manufacturing process simulation, electromagnetic signature, aeronautics, and propulsion computation, linked to CAD databases and virtual reality visualization) using MDO---multidisciplinary optimization---techniques. The integration of conceptual (initial) and detailed design with the manufacturing and life-cycle support phases naturally requires the integration of information and computing in the support system. WebWork [Fox:95a] has been designed for this. Further, we see that this problem is a heterogeneous metaproblem with perhaps up to 10,000 (Fortran) programs linked together in the full optimization process. This is basically an embarrassingly parallel meta-architecture, with only a few of the programs linked together at each stage.
The program complexity varies from a full PDE simulation to an expert system that optimizes the location of an inspection port to minimize support costs. So efficient parallel solution of PDE's is part, but not all, of the support needed for manufacturing. HPCC will only have major impact on manufacturing when it can support such heterogeneous metaproblems, including large-scale database integration.
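As a toy illustration of such an embarrassingly parallel metaproblem (the stage structure and the Python callables standing in for the thousands of Fortran codes are entirely hypothetical), each optimization stage can launch its member programs concurrently, with coupling occurring only between stages:

```python
from concurrent.futures import ThreadPoolExecutor


def run_metaproblem(stages, state):
    """Run a staged metaproblem.  Within a stage the member programs are
    independent ('embarrassingly parallel'), so they run concurrently;
    coupling occurs only between stages, where each stage's collected
    outputs become the next stage's input."""
    for programs in stages:
        current = state  # snapshot the input so workers see one value
        with ThreadPoolExecutor() as pool:
            state = list(pool.map(lambda p: p(current), programs))
    return state
```

For example, a first stage of two independent analysis codes followed by a single coupling step could be expressed as `run_metaproblem([[code_a, code_b], [combine]], initial_design)`, where `code_a`, `code_b`, and `combine` are placeholders for the real programs.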






Geoffrey Fox, Northeast Parallel Architectures Center at Syracuse University, gcf@npac.syr.edu