C.8 Proposed Computational Approach
The GEM Problem Solving Environment GEMCI, described in section C.3, requires several technology components. One major component is the set of detailed simulation modules for the variety of physics and numerical approaches discussed in the preceding sections. This includes the non-local equation solver and the physics/friction modules (GEMCI.2,3). The fast multipole, statistical mechanics and cellular automata subsystems will need state-of-the-art algorithms and parallel implementations. These will be built as straightforward MPI-based parallel systems with the overall modular structure implied by our proposed Seismic Framework.
Estimate of Computational Resources Needed: This requires careful analysis, as the algorithms needed for large-scale simulations are rather different from those used up to now. We base our analysis on existing GEM simulations with 80 to 64,000 sites and various interaction laws, together with measured results from the fast multipole approach applied to astrophysics simulations with 100 million particles. On a 300 MHz Pentium II, we estimate an execution time of between 4 and 40 milliseconds per segment per GEM calculation step. On a 128-node Origin 2000, a very sophisticated GEM simulation with 100 million segments (corresponding to 100-meter segment sizes) would then take between 3 and 36 months. There are many natural ideas for improving the computational complexity and, conversely, many physical effects that could increase the computing resources needed. These estimates nevertheless suggest that teraflop-class machines will be effective for the very large simulations we aim towards, while today we are already able to perform meaningful simulations on the machines available to us.
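As a rough illustration of how these figures combine, the back-of-envelope sketch below reproduces the scale of the estimate; the per-segment cost range and segment count come from the text, while treating the 128-node Origin 2000 as 128 Pentium II-class processors with ideal scaling is our simplifying assumption.

    # Back-of-envelope sketch of the resource estimate quoted above.
    # Assumptions: 4-40 ms per segment per step (300 MHz Pentium II),
    # 1e8 segments, and a 128-processor Origin 2000 treated as roughly
    # 128 Pentium II-class CPUs with ideal scaling.
    SEGMENTS = 1.0e8
    COST_PER_SEGMENT_S = (4e-3, 40e-3)   # low and high per-segment estimates
    PROCESSORS = 128

    for cost in COST_PER_SEGMENT_S:
        step_time = SEGMENTS * cost / PROCESSORS   # seconds per calculation step
        print(f"{cost*1e3:.0f} ms/segment -> {step_time/3600:.1f} h per step")
        # The total run time is this figure multiplied by the number of time
        # steps; the 3-36 month range quoted above corresponds to a few
        # thousand steps at this rate.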
Caltech, Colorado and Syracuse have started building the necessary high-performance non-local equation solver modules. The Green's function approach is formulated numerically as a long-range all-pairs interaction problem, and we are parallelizing this using the well-known all-pairs algorithm. However, one cannot reach the required level of resolution without switching from an O(N^2) approach to one of the O(N) or O(NlogN) approaches. As in other fields, this can be achieved by dropping or approximating the long-range components and implementing a neighbor-list-based algorithm. However, it is more attractive to formulate the problem as interacting dipoles and adapt existing fast-multipole technology developed for particle dynamics problems. We have already produced a prototype general-purpose "fast multipole template code" by adapting the very successful work of Salmon and Warren. These codes have already simulated over 300 million gravitating bodies on a large distributed-memory system (a 4,500-processor subset of ASCI "Red"), so we expect these parallel algorithms to scale efficiently to the problem sizes needed by GEM. If we make the conservative assumption that the GEM dipole-dipole Green's function evaluations are ten times as expensive as the Newtonian Green's functions evaluated in Salmon and Warren's code, then a machine comparable to 1,000 300 MHz Pentium II systems should be able to compute between 10 and 100 events per day. Notice that the target level of performance can be achieved through a combination of effective use of parallelism and evolution in the microprocessor market.
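To make the starting point concrete, the fragment below sketches the O(N^2) all-pairs structure of the Green's function evaluation that the fast-multipole formulation will replace; the scalar green_fn kernel and the slip and stress arrays are illustrative placeholders, not the production dipole-dipole formulation.

    import numpy as np

    def stress_update_direct(positions, slips, green_fn):
        """Naive O(N^2) update: every fault segment influences every other.

        positions : (N, 3) array of segment centroids
        slips     : (N,)   current slip on each segment
        green_fn  : callable(x_i, x_j) -> influence coefficient
                    (placeholder for the dipole-dipole Green's function)
        """
        n = len(positions)
        stress = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i != j:
                    stress[i] += green_fn(positions[i], positions[j]) * slips[j]
        return stress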
Multipolar Representation of Fault Systems: A key to a successful implementation of GEM models of fault systems will be the use of computationally efficient algorithms for updating the interactions between faults and fault segments. If the Green's function integrals are converted to sums without truncation or approximation, the update requires O(N^2) operations between earthquakes, and possibly more for fault segments experiencing an earthquake. As discussed above, the quasistatic nature of the interactions is such that the Green's functions T_ijkl and G_ijk for linear elasticity have a simple time dependence. Moreover, the Green's functions for linear viscoelasticity and for linear poroelasticity can be obtained from the elastic Green's functions using the Principle of Correspondence (e.g., Lee, 1955; Rundle, 1982a,b). These simplifications strongly suggest that an approach based on multipole expansions (Goil, 1994; Goil and Ranka, 1995) will be the most computationally efficient.
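Schematically, and only as a reminder of the standard form of the correspondence principle (the precise statement used in GEM follows Lee, 1955 and Rundle, 1982a,b), the viscoelastic Green's function in the Laplace-transform domain is the elastic one evaluated with transform-domain moduli:

    \tilde{G}^{\,ve}_{ijk}(\mathbf{x},\mathbf{x}';s)
      \;=\; G^{\,el}_{ijk}\!\left(\mathbf{x},\mathbf{x}';\,\tilde{\lambda}(s),\,\tilde{\mu}(s)\right),

where s is the Laplace variable and the tilded moduli are their transform-domain counterparts; inverting the transform recovers the time-dependent viscoelastic response, so no new interaction machinery is required beyond the elastic case.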
In fact, the stress and displacement Green's functions T_ijkl and G_ijk represent the tensor stress and vector displacement at x due to a point double couple located at x' (see, for example, Steketee, 1958). The orientation of the equivalent fault-surface normal vector and of the vector displacement on that fault surface is described by the indices i and j. Integration of T_ijkl and G_ijk over the fault surface then corresponds to a distribution of double couples over that surface. For this reason, representing the stress over fault segments in terms of a multipole expansion is the natural basis for the GEM computational problem. Details of the computational implementation of this approach will be described in the following sections.
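In discrete form (shown schematically here; the quadrature rule and segment geometry are implementation choices), the stress contribution of a fault surface S carrying slip s_i with normal n_j becomes a sum of point double couples over segments:

    \sigma_{kl}(\mathbf{x}) \;=\; \int_{S} T_{ijkl}(\mathbf{x},\mathbf{x}')\, s_i(\mathbf{x}')\, n_j(\mathbf{x}')\, dS'
    \;\approx\; \sum_{p} T_{ijkl}(\mathbf{x},\mathbf{x}_p)\, s_i(\mathbf{x}_p)\, n_j(\mathbf{x}_p)\, \Delta A_p .

It is this sum over effectively point-like double-couple sources that the fast multipole machinery accelerates.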
Application of Fast Multipole Methods to GEM: In the gravitational N-body problem, each of N bodies interacts with every other body in the system according to the familiar law of gravitational attraction. Simply computing all pairs of interactions requires N*(N-1)/2 separate evaluations of the interaction law. This formulation of the problem has some important advantages: it is easy to code, it vectorizes and parallelizes well, it is readily expressible in HPF, and it is even amenable to special-purpose hardware [GRAPE]. Nevertheless, even today's fastest special-purpose systems, running dedicated for extended periods at nearly 1 Tflop, cannot simulate systems larger than about 100,000 bodies.
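For reference, the direct-sum formulation really is only a few lines (a NumPy sketch; units and softening are omitted), which is why it vectorizes and parallelizes so easily, at the price of O(N^2) work:

    import numpy as np

    def direct_accelerations(pos, mass, G=1.0):
        """All-pairs gravitational accelerations, O(N^2) in time and memory."""
        dx = pos[None, :, :] - pos[:, None, :]       # (N, N, 3) separations
        r2 = (dx ** 2).sum(-1)                       # squared distances
        np.fill_diagonal(r2, np.inf)                 # suppress self-interaction
        inv_r3 = r2 ** -1.5
        return G * (dx * (mass[None, :] * inv_r3)[..., None]).sum(axis=1)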
Tremendous computational savings may be realized by combining bodies into "cells" and approximating their external field with a truncated multipole expansion. When this idea is applied systematically, the number of interactions may be reduced to O(NlgN) (Appel, 1985; Barnes and Hut, 1986) or O(N) (Greengard and Rokhlin, 1987; Anderson, 1992). The cells are generally arranged in a tree, with the root of the tree representing the entire system and descendants representing successively smaller regions of space. Salmon and Warren (1997) have demonstrated these codes running in parallel on thousands of processors and have simulated highly irregular cosmological systems of over 300 million bodies using ASCI facilities.
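The essential idea of these tree methods can be sketched in a few lines. The fragment below is a simplified, monopole-only acceptance test in the spirit of Barnes and Hut; production codes carry higher multipole moments, rigorous error bounds, and parallel tree construction, and the Cell structure and 1/r kernel here are illustrative assumptions only.

    import math
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Cell:
        centroid: tuple          # center of the sources contained in the cell
        size: float              # linear extent of the cell
        total_source: float      # aggregated source strength (monopole term)
        children: List["Cell"] = field(default_factory=list)

        @property
        def is_leaf(self):
            return not self.children

    def kernel(x, y):
        """Placeholder 1/r interaction; GEM would use the dipole Green's function."""
        return 1.0 / math.dist(x, y)

    def field_from_cell(cell, x, theta=0.5):
        """Open a cell only if it subtends too large an angle from the target x
        (size/distance >= theta); otherwise use its aggregated multipole.
        Self-interaction handling is omitted for brevity."""
        d = math.dist(x, cell.centroid)
        if cell.is_leaf or cell.size / d < theta:
            return kernel(x, cell.centroid) * cell.total_source
        return sum(field_from_cell(child, x, theta) for child in cell.children)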
There is a direct analogy between the bodies in an astrophysical N-body system and the fault segments in a GEM simulation. In both cases there is a pair-wise interaction that naively requires O(N^2) evaluations, but by representing the distribution of sources in a region by a multipole expansion, the external field generated by a large number of bodies can be computed to any desired accuracy in constant time. Thus the GEM problem can be reduced in the same way to O(NlgN) or O(N) total interactions, thereby making large calculations tractable.
Although multipole methods are capable of delivering large performance gains, they also require considerable infrastructure. This is especially true of efficient parallel implementations. We will develop the multipole version of GEM using a library that has been abstracted from Salmon and Warren's successful astrophysical N-body codes. The continued development of this library, and in particular any new features needed to support GEM, will be supported by the project. This new library has several important features.
The full GEM implementation resembles the conventional O(N^2) N-body problem, but there are many important differences. For instance, the critical dynamics -- namely earthquakes -- are found by examining the stresses at each time step to see whether the friction law implies that a slip event will occur. As discussed earlier, there are many different proposals for the friction law, and the computational system needs to be able to flexibly change and compare results from different friction laws. The analogy with statistical physics is seen by noting that earthquakes correspond to large-scale correlations, with up to perhaps a million 10-to-100-meter segments slipping together. As in critical phenomena, clustering occurs at all length scales, and we need to examine this effect computationally. The differences from classical molecular-dynamics N-body problems lie not only in the dynamical criteria of importance but also in the dependence of the Green's function (i.e., the "force" potential) on the independent variables. Another important area, still not well understood in current applications, will be the use of spatially dependent time steps, with smaller values needed in earthquake regions. A further difference between true particles and the GEM case is that in the latter the fault positions are essentially fixed in space, whereas a major challenge in astrophysics is the time-dependent gravitational clustering of particles. It may be possible to exploit this simplification in GEM, for example by incrementally improving parallel load balancing.
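As a sketch of the kind of flexibility intended (the names and signatures here are illustrative; the actual friction modules are GEM deliverables), the per-step slip test can be written against a pluggable friction-law interface so that different proposals are compared on identical runs:

    from typing import Callable, Sequence

    # A friction law maps (stress, per-segment state) to a slip decision.
    FrictionLaw = Callable[[float, dict], bool]

    def coulomb_failure(stress: float, state: dict) -> bool:
        """Simplest illustrative law: slip when stress exceeds a static threshold."""
        return stress >= state["failure_stress"]

    def find_slipping_segments(stresses: Sequence[float],
                               states: Sequence[dict],
                               law: FrictionLaw) -> list:
        """Scan all segments at the end of a time step and return those that
        the chosen friction law says will participate in an event; swapping
        `law` exchanges one friction proposal for another."""
        return [i for i, (s, st) in enumerate(zip(stresses, states)) if law(s, st)]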
We believe a major contribution of this project will be an examination of the software and algorithmic issues in this area, together with the integration of data and computational modules. We will demonstrate that fine-grained algorithmic templates combined with a coarse-grained distributed-object framework can provide a common framework across many fields.