General Earthquake Models:
A New Computational Challenge
Principal Investigators:
John B. Rundle, University of Colorado, Boulder, CO, Lead Investigator
Geoffrey Fox, NPAC, Syracuse University, Syracuse, NY, Co-Lead Investigator
Members of the GEM Community:
Earth Scientists:
Claude Allegre, Institut de Physique du Globe, Paris
Yehuda Ben-Zion, University of Southern California
William Bosl, Stanford University & Lawrence Livermore National Laboratory
Steven Day, San Diego State University
James Dieterich, United States Geological Survey
William Foxall, Lawrence Livermore National Laboratory
Tom Henyey, SCEC and University of Southern California
Lawrence Hutchings, Lawrence Livermore National Laboratory
Thomas H. Jordan, Massachusetts Institute of Technology
Hiroo Kanamori, California Institute of Technology
Charles Keller, IGPP, Los Alamos National Laboratory
Louise Kellogg, University of California, Davis
Karen Carter Krogh, EES, Los Alamos National Laboratory
Thorne Lay, University of California, Santa Cruz
Christopher J. Marone, Massachusetts Institute of Technology
John McRaney, SCEC and University of Southern California
J. Bernard Minster, University of California, San Diego
Amos Nur, Stanford University
Jon Pelletier, California Institute of Technology
Barbara Romanowicz, University of California, Berkeley
John B. Rundle, University of Colorado
Charles Sammis, University of Southern California
Christopher Scholz, Lamont Doherty Earth Observatory & Columbia University
Stewart Smith, University of Washington
Ross Stein, United States Geological Survey
Leon Teng, University of Southern California
Donald Turcotte, Cornell University
Mark Zoback, Stanford University
Earthquake Engineers:
Jacobo Bialek, Carnegie Mellon University
David O'Halloran, Carnegie Mellon University
Condensed Matter Scientists:
James Crutchfield, Santa Fe Institute
William Klein, Boston University
Bruce Shaw, Lamont Doherty Earth Observatory & Columbia University
Computer/Computational Scientists:
Geoffrey Fox, NPAC, Syracuse University
Julian Cummings, ACL, Los Alamos National Laboratory
Roscoe Giles, Boston University
Rajan Gupta, ACL, Los Alamos National Laboratory
Steven Karmesin, ACL, Los Alamos National Laboratory
Paul Messina, CACR, California Institute of Technology
John Salmon, CACR, California Institute of Technology
Steven Shkoller, CNLS Los Alamos National Laboratory
Bryant York, Northeastern University
John Reynders, ACL, Los Alamos National Laboratory
Michael Warren, ACL, Los Alamos National Laboratory
Tim Williams, ACL, Los Alamos National Laboratory
A. Proposal Abstract:
Earthquakes are devastating natural phenomena that involve physical processes occurring over many scales of length and time. The physical dimensions of fracture and slip processes range from the microscopic level to slip on faults extending over many hundreds of kilometers, while temporal processes range from source-time functions with durations of fractions of a second to the stress accumulation process, which occurs over as long as thousands of years. Recent advances in physical understanding of the basic processes, coupled with rapid advances in computational science and hardware development, have enabled a new computational simulation technology that promises to revolutionize our understanding of the dynamics of earthquake fault systems. With the very recent emergence of vast quantities of new geophysical data, including broad-band, high dynamic range digital seismometers, continuously recording GPS systems, and the new Interferometric Synthetic Aperture Radar (InSAR) technology, highly realistic earthquake simulations are now possible that can be validated, calibrated, and used to rapidly further the long-sought goal of reliable earthquake forecasting. In this proposal, our primary objective is to develop the computational capability to carry out large scale numerical simulations of the physics of earthquakes in southern California and elsewhere. These simulations will produce temporal and spatial patterns of earthquakes, surface deformation, gravity change, and other variables that can be used to understand the physics of the earthquake "machine". The General Earthquake Model (GEM) approach will allow the physics of large networks of earthquake faults to be analyzed within a general theoretical framework for the first time. It will stimulate changes in earthquake science in much the same way that General Circulation Models have changed the way atmospheric science is carried out. The computational techniques developed by the project will have important applications in many other large, computationally hard problems, such as 1) other physical simulations spanning microscopic to macroscopic phenomena; 2) statistical physics approaches to random field spin systems; and 3) large neural networks with learning and cognition. The project will develop a computational infrastructure, GEMCI, exploiting modern distributed object and web technologies to integrate researchers, data sources and high performance computational modules covering the non-local equation solvers and the physics and friction modules. It will also support highly interactive complex systems based models, which will abstract the GEM simulations to provide predictive capabilities. An algorithmic highlight will be a novel use of fast multipole techniques to solve the GEM Green's function equations.
References:
Web Site: http://www.nap.syr.edu/projects/gem
(This site has information on the GEM team, scientific results, codes and plans)
ftp site: /fractal/users/ftp/pub/Viscocodes/Virtual_California on fractal.colorado.edu
(This site has current versions of the basic numerical codes from Rundle [1988] upon which many of the initial GEM methods will be based, together with results from recent small scale model runs possible on current workstations)
B. Table of Contents
A. Proposal Abstract
B. Table of Contents
C. Project Description
1 One Page Executive Summary for Reviewers
2 A New Computational Challenge
3 Computational Science: Issues and Opportunities
4 GEM Team: Investigator Roles
5 GEM Objectives
6 Complexity and Spatial-Temporal Scales
7 Proposed Scientific Approach
8 Proposed Computational Approach
9 Proposed Software Environment and POOMA
10 Calibration and Validation of Simulations with Earthquake Data
D. Performance Goals
E. Management Plan
F. Institutional Resource Commitments
C. PROJECT DESCRIPTION
C.1 One Page Executive Summary for Reviewers:
Objectives: The primary objective is to develop the computational capability to carry out large-scale numerical simulations of the physics of earthquakes in southern California and elsewhere. We will develop a state-of-the-art problem solving environment to facilitate: 1) The construction of numerical and computational algorithms and specific environment(s) needed to carry out large simulations of these complex scale-invariant nonlinear physical processes over a geographically widely distributed, heterogeneous computing network; and 2) The development of computational infrastructure for earthquake "forecasting" and "prediction" methodologies that uses modern distributed object and collaboration technologies with scalable systems, software and algorithms. We will integrate high performance simulations and real-time data sources with an interactive analysis system supporting newly developed Pattern Dynamics methodologies for understanding the evolution of fault slip on complex, scale-invariant fault systems.
Method: We will base our work on currently available small scale workstation-class simulation codes as starting points to model the physics of earthquake fault systems in southern California. The problem solving environment will be developed from the best available parallel algorithms and emerging distributed object based systems. It will leverage state-of-the-art national HPCC activities in simulation of both cellular automata and large-scale particle systems. We will also develop techniques to calibrate and validate simulations with seismic, GPS, InSAR and other data, and to assimilate new data into the simulations.
Scientific and Computational Foci: We will focus on developing the capability to carry out large scale simulations of complex, multiple, interacting fault systems using a software environment adapted for rapid prototyping of new phenomenological models. The software environment will require: 1) Developing algorithms for solving computationally difficult nonlinear problems involving ("discontinuous") thresholds and nucleation events in a networked parallel (super) computing environment; 2) Adapting new "fast multipole" methods previously developed for general N-body problems; 3) Adapting existing modern Web and other commodity technologies to allow researchers to rapidly integrate simulation data with field and laboratory data (visually and quantitatively); and 4) Supporting an interactive analysis and prediction system.
Significance of Anticipated Results: The GEM approach will allow the physics of large networks of earthquake faults to be analyzed within a general theoretical framework for the first time. Using recent advances in Pattern Dynamics methods for complex nonlinear threshold systems, GEM may lead to several forecast methodologies similar to those now used for El Nino forecasts. The computational techniques developed by the project will have important applications in many other computationally hard problems, such as 1) Statistical physics approaches to random field spin systems; as well as 2) Simulating large neural networks with learning and cognition.
Investigator Team: Our team is internationally recognized in the three areas of 1) earth science, 2) statistical mechanics and complex systems, and 3) computational science. The latter includes world experts in the critical algorithms, software, and both HPCC and commodity systems required. We are represented by key people both in universities and national laboratories, so technology transfer to related projects, as well as educational benefits, will follow easily. Rundle will serve as Principal Investigator. The Investigators will participate in periodic workshops at which 1) results will be discussed; and 2) specific research avenues will be formulated on a regular and timely basis. We will partner actively with scientists from the existing Southern California Earthquake Center, the new Pacific Earthquake Engineering Research Center, and the proposed California Earthquake Research Center.
C.2 A New Computational Challenge
Rationale for Earthquake Research: Earthquakes, even those significantly smaller than the largest magnitude events of about magnitude 9.5 (e.g., Chile, 1960), are capable of producing enormous damage today and in the future. The recent magnitude ~7 Kobe, Japan earthquake of January 17, 1995 was responsible for an estimated $200 billion in damages, accounting for lost economic income as well as direct damage. This event was a complete surprise, inasmuch as the immediate region had been relatively inactive in historic time (see, e.g., Trans. AGU, v. 76 Supplement, 1995). It has also been estimated (Insurance Institute for Property Loss Reduction, personal communication, 1996) that a repeat of the 1933 Long Beach, California earthquake, which had a maximum Modified Mercalli Intensity of IX, would today cause in excess of $500 billion in damages, rather than the $41 million loss that occurred in 1933. These figures can be compared to the total assets of the US property insurance industry, at present about $200 billion (Insurance Institute for Property Loss Reduction, personal communication, 1997). Losses in a repetition of the 1906 San Francisco earthquake would be far larger. The magnitude of these potential losses, even in an economy the size of the United States in 1998, $1.7 trillion, clearly indicates the need to evolve approaches to understand, forecast, and mitigate the risk.
Status: Many basic facts about earthquakes have been known since ancient times (see e.g., Richter, 1958; Scholz, 1990; and the review by Rundle and Klein, 1995), but only during the last hundred years have earthquakes been known to represent the release of stored elastic strain energy along a fault (Reid, 1908). Although a great deal of data has accumulated about the phenomenology of earthquakes in recent years, these events remain one of the most destructive and poorly understood of the forces of nature. The importance of studying earthquakes, as well as the development of techniques to eventually predict or forecast them, has been underscored by the fact that an increasing proportion of global population lives along active fault zones (Bilham, 1996). Recent research indicates that earthquakes exhibit a wealth of complex phenomena over a very large range of spatial and temporal scales, including space-time clustering of events, self-organization and scaling, as represented by the Gutenberg-Richter and Omori relations (e.g., Turcotte, 1992), and space-time migration of activity along fault systems (e.g., Scholz, 1990). Analyzing and understanding these events is made particularly difficult due to the long time scales, usually of the order of tens to hundreds of years between earthquakes near a given location on a fault, and by the space-time complexity of the historical and recent record.
During the last two decades, a series of national policy decisions and programs culminated, eight years ago, in the establishment of the Southern California Earthquake Center (http://www.scec.org/) and the recently founded Pacific Earthquake Engineering Research center (http://shake2.earthsciences.uq.edu.au/ACES/). Moreover, an even larger group of universities has come together to propose the new California Earthquake Research Center, under the NSF Science and Technology Centers program, to succeed SCEC in the year 2001. Together with efforts undertaken several decades ago by the United States Geological Survey (http://www.usgs.gov/themes/earthqk.html), these programs have led to enormous increases in the accuracy, reliability, and availability of observational data for earthquakes, particularly in southern California. Moreover, our knowledge about earthquake related processes and the earthquake source itself is at a level far beyond what we knew two decades ago, prior to the National Earthquake Hazard Reduction Program. Despite this, we still find ourselves, as a scientific community, unable to even approximately forecast the time, date, and magnitude of earthquakes. At the moment, the best that can be done is embodied in the Phase II report of earthquake probabilities published by the Southern California Earthquake Center (SCEC, 1995; please refer to the SCEC web page). These probabilities are based on "scenario earthquakes" and probabilistic assumptions about whether, for example, contiguous segments of faults ("characteristic earthquakes") do or do not tend to rupture together to produce much larger events. The new Pattern Dynamics methodology (Rundle et al., 1998) offers a way to move beyond such assumptions.
Historically, and even in recent times, earthquake science has been primarily observational in character. However, the inability to carry out repetitive experiments on fault systems means that it is extremely difficult to test hypotheses in a systematic way, via controlled experiments. In addition, time intervals between major earthquakes at a given location are typically much longer than human life spans, forcing reliance on unreliable historical records. This state of affairs is in many respects similar to that of the climate and atmospheric science community in the late 1940s and early 1950s. At that time, John von Neumann and Jule Charney decided to initiate a program to develop what were to become the first General Circulation Models (GCMs) of the atmosphere (see, e.g., Weatherwise, June/July 1995 for a retrospective). These modeling and simulation efforts revolutionized research within the atmospheric science community, because for the first time a feedback loop was established between observations and theory: hypotheses could be formulated on the basis of computer simulations and then tested by field observations, an approach that eventually culminated in El Nino predictions. Moreover, questions arising from new observations could be answered with computer "experiments". Responding to these new initiatives, the scientific community has begun to develop the capability to model single faults and simple fault systems via simulations (see, for example, Rundle & Klein, 1995 for a recent review; also see Trans. Am. Geophys. Un. Supplement, AGU Fall programs 1996 & 1997, for abstracts). The various computational methodologies are described in the sections below.
There is clearly a growing consensus in the scientific community that the time has come to establish a similar feedback loop between observations, theory and computer simulations within the field of earthquake science. The goals of the General Earthquake Model (GEM) project are similar to those of the GCM community: 1) to develop sophisticated computer simulations based upon the best available physics, with the goal of understanding the physical processes that determine the evolution of active fault systems, and 2) to develop a Numerical Earthquake Forecast capability that will allow current data to be projected forward in time so that future events can at best be predicted, and at worst perhaps anticipated to some degree.
C.3 Computational Science: Issues and Opportunities
This new computational challenge not only involves a problem of great interest to society and the earth sciences, but also can only be met through new research in computational science, with the development of new algorithms, computational environments, and hardware. The team must create a computational environment to understand the properties and predict the behavior of critical seismic activity - earthquakes. The complexity of the seismic computational challenge will entail the development of novel algorithms within software environments that support high performance, rapid prototyping of analysis and prediction environments, and integration of real-time data and simulations. One important feature that we can exploit is the absence of major large "legacy" codes, which allows us to adopt modern distributed object technology from the start. We have used initial computations to estimate that simulation of fault systems modeled with 10^7 segments requires machines in the one to 100 teraflop range. The uncertainty reflects the currently unknown requirements stemming from the needed accuracy in simulating the multiresolution physics of earthquakes. The development of a predictive capability will thus require enormous computational resources, comparable to those needed for the large-scale simulations of DoE's ASCI program. We expect such capabilities to be available from general facilities such as the Los Alamos Advanced Computing Laboratory (ACL), NPACI - San Diego, and NCSA - Illinois. Eventually one might expect to set up dedicated resources for earthquake prediction, as planned in the major Japanese program in this area. Although these high-end machines may well be distributed shared memory systems, our software must also support the increasingly popular clusters of PC hardware, which provide a cost-effective development environment. The many levels of complexity present in the current and future generations of New Computational Challenge simulations will demand an interactive team of earth scientists, physicists and computational scientists working together to attack the problem.
We have identified the following major components of GEMCI -- The GEM Computational Infrastructure.
We intend to establish early on an overall Seismic Computational Framework, which will allow the team to develop different modules separately in a way that can easily be integrated. The optimal methodology for this is quite uncertain, but we expect to adopt the approach explained in a new book, 'Building Distributed Systems for the Pragmatic Object Web', co-authored by Fox and his colleagues (http://www.npac.syr.edu/users/shrideep/book/). This melds Web (Javabean) and object (CORBA, COM) technologies with design frameworks coming from visual commodity component technologies. As most of our software will be built from scratch, we expect that we can establish and enforce reasonably uniform practices, which will lead to GEMCI consisting at a high level of a set of coarse grain "Distributed Scientific Objects". These can be in any language (such as parallel C, C++, Java or Fortran) but with a uniform Javabean applet frontend. Note that cellular automata methods are natural for Fortran or HPF, while the complex hierarchical data structures of the fast multipole method are much more natural for C or C++. Perhaps the interactive pattern dynamics analysis system will be a Java applet. These multi-paradigm coarse grain objects will be integrated by either commercial CORBA or COM object brokers, or using custom technology such as NPAC's WebFlow/JWORB, which integrates Web, CORBA and COM in a single Java server (http://tapetus.npac.syr.edu/iwt98/pm/documents/hpdc98/paper.html). This approach also naturally links databases, instruments and collaboratories with our computational modules, and can be combined with approaches such as Globus (http://www.globus.org) to achieve high performance. We note that we are working on Distributed Scientific Object standards in the international Java Grande Forum (http://www.npac.syr.edu/projects/javaforcse) and have initiated collaborations with Department of Energy scientists in this area. Our approach has a similar philosophy to POOMA (http://www.acl.lanl.gov/PoomaFramework/) and especially Nile (http://www.nile.utexas.edu/) and Legion (http://www.cs.virginia.edu/~legion/). In this proposal, we do not intend to assign significant resources to developing the computer science infrastructure; rather, we will use well-established parallel computing techniques and impose a uniform overall design framework that allows commodity distributed object systems such as CORBA to manage the coarse grain structure of GEMCI. We anticipate that a rich set of tools will become available to support this approach. Our clear separation of parallel and object technologies is not the most ambitious approach possible, but it ensures an excellent system which can adapt to inevitable change with a modest level of effort.
C.4 GEM Team: Investigator Roles
The investigators in the GEM project have been selected because their background
and expertise enables them all to fill necessary functions in the initial development of the
GEM simulations. These functions and roles are organized in the following table.
Name          Institution      Role*
=====         ==========       =====
Rundle        Colorado         Lead Earth Science -- Develop earthquake models, statistical mechanics approaches, validation of simulations (AL, PSE, AN, VA, SCEC)
Fox           Syracuse         Lead Computer Science -- Develop multipole algorithms and integrate projects internally and with external projects including HPCC and WWW communities (AL, PSE, AN)
Klein         Boston           Statistical mechanics analogies and methods: Langevin equations for fault system dynamics, especially mean field models (AL, AN)
Crutchfield   Santa Fe Inst    Pattern recognition algorithms for earthquake forecasting, phase space reconstruction of attractors, predictability (AN)
Giles         Boston           Object oriented friction model algorithms, Cellular Automata computations (AL, AN, PSE)
York          Northeastern     Cellular Automata, implementation of computational approaches (AL, PSE)
Jordan        MIT              Validating models against "slow earthquake" data (VA, SCEC)
Marone        MIT              Validating friction models against laboratory data (VA)
Kanamori      CalTech          Validating models against broadband earthquake source mechanism data (VA, SCEC)
Krogh         LANL             Validating fault geometries and interactions using field data, including stress patterns (VA)
Minster       UCSD             Implementation/analysis of rate & state friction laws (AN, SCEC)
Messina, Salmon   Caltech      Parallel multipole algorithms, linkage of model validation with simulation (AL, VA, PSE)
Stein         USGS             Visualization requirements for simulations, validating models against geodetic data (PSE, SCEC)
Turcotte      Cornell          Nature of driving stresses from mantle processes, scaling symmetries, validating models with data on seismicity patterns and SAR interferometry data (AN, VA)
Reynders, Williams, White, Warren   ACL, LANL   Multipole algorithms, elliptic solvers, adaptive multigrid methods, optimal use of SMP systems, load balancing (AL, PSE)

Unfunded Collaborators. They will provide advice on:
=====================================
Frauenfelder & Colleagues   CNLS, LANL   Analysis of simulations for patterns, phase space reconstruction, analogies to other systems, scaling symmetries (AN)
Allegre       IPG Paris        Statistical models for friction based upon scale invariance (AN)
Larsen, Hutchings, Foxall, Carrigan   LLNL   Green's functions, finite element models for quasistatic and dynamic problems, faulting in poroelastic media, applications to the San Francisco Bay area, visualization requirements (AL, VA)

*Roles: AL) Algorithms; PSE) Problem Solving Environment; AN) Analysis by statistical mechanics and pattern dynamics methods; VA) Validation; SCEC) Interaction with SCEC/CalTech and other earthquake databases
C.5 GEM Objectives
Earthquakes occur over an extremely wide range of spatial and temporal scales. Understanding the physics of earthquakes is complicated by the fact that the large events of greatest interest, having lengths of hundreds of kilometers, recur at a given location along an active fault only on time scales of the order of hundreds of years (e.g., Richter, 1958; Kanamori, 1983; Pacheco et al., 1992). To acquire an adequate database of similar large earthquakes therefore requires the use of historical records, which are known to possess considerable uncertainty. Moreover, instrumental coverage of even relatively recent events is often inconsistent, and network coverage and detection levels can change with time (Haberman, 1982). Understanding the details of the rupture process is further complicated by the spatial heterogeneity of elastic properties, the vagaries of near field instrumental coverage, and other factors (see for example Heaton, 1990; Kanamori, 1993). Simulations will therefore lead to a much more detailed understanding of the physics of earthquakes, well beyond the present state of affairs in which information exists only on time and space averaged processes. Specific scientific objectives include:
Objective 1. Cataloguing and understanding the nature and configurations of space-time patterns of earthquakes, and whether these are scale-dependent or scale-invariant in space and time (e.g., Scholz, 1990; Ben-Zion & Rice; Eneva; Rundle et al., 1998). Correlated patterns such as these may indicate whether a given event is a foreshock of a coming mainshock. We want to know how patterns form and persist. One application will be to assess the validity of the "gap" and "antigap" models for earthquake forecasting (e.g., Kagan and Jackson, 1991; Nishenko et al., 1993). Another will be to understand the physical origin of "action at a distance" and "time delayed triggering", i.e., seismicity that is correlated over much larger distances and time intervals than previously thought (see, for example, Hill et al., 1993).
Objective 2. Developing one or more earthquake forecast algorithms, based upon the use of space-time patterns or other methods, such as log-periodic (Sornette et al., 1996), bond probability (Coniglio and Klein, 1980), M8 (Keilis-Borok et al.; Minster & Williams), or other possible methods to be developed.
Objective 3. Identifying the key parameters that control the physical processes. We want to understand how fault geometry, friction laws, and earth rheology enter the physics of the process, and which of these are the controlling parameters.
Objective 4. Understanding the role of sub-grid scale processes, and whether these can be represented by uncorrelated or correlated noise.
Objective 5. Understanding the importance of inertia and waves in determining details of space-time patterns and slip evolution (see discussion below).
Objective 6. Developing an understanding of the role of unmodeled processes, including neglected hidden or blind faults, lateral heterogeneity, uncertainties in friction laws, the nature of the forcing, and earth rheology.
C.6 Complexity and Spatial-Temporal Scales
Hierarchy of Spatial and Temporal Scales: The presence of hierarchies of spatial and temporal scales is a recurring theme in simulations of this type. It is known, for example, that fault and crack systems within the earth are distributed in a scale invariant (fractal) manner over a wide range of scales (Scholz, Brown, Aviles). Moreover, the time intervals between characteristic earthquakes on this fractal system are known to form a scale-invariant set (Allegre et al., 1982, 1994, 1996; Smalley et al., 1985, 1987). Changes in scaling behavior have been observed at length scales corresponding to the thickness of the earth's lithosphere (e.g., Rundle and Klein, 1995), but the basic physics is nevertheless similar over many length scales. It is also known that nucleation and critical phenomena, the physics now suspected to govern many earthquake-related phenomena, are associated with divergent length and time scales and long range correlations and coherence intervals (see, e.g., Rundle and Klein, 1995 for a literature review and discussion). For that reason, our philosophical approach to simulations will begin by focusing on the largest scale effects first, working down toward shorter scales as algorithms and numerical techniques improve. Moreover, our practical interest is limited primarily to the largest faults in a region, and to the largest earthquakes that may occur. Therefore, focusing on quasistatic interactions and long wavelength, long period elastic wave interactions is the most logical approach. We plan to include smaller faults and events as background "noise" in the simulations, as discussed in the proposed work.
Renormalization Group Applications: Many of the models that have been proposed (see, e.g., the review by Rundle and Klein, 1995; Rundle et al., 1996; Klein et al., 1996; Ma, 1976; Newman and Gabrielov, 1991; Newman et al., 1994, 1996; Stauffer and Aharony, 1994; Turcotte, 1994) are at some level amenable to analysis by Renormalization Group (RG) techniques. At the very least, an RG analysis can be used to identify and analyze fixed points and scaling exponents, as well as to identify any applicable universality classes. Thus, these techniques can clarify the physics of the processes and, in particular, identify the renormalization flows around the fixed point(s), which will help to provide physical insight into the influence of the scaling fields and the structure of the potential energy surface. Friction models (below) for which this could be particularly helpful include the hierarchical statistical models, the CA models, and the traveling density wave model, where there are expectations of the presence of fixed (critical) points.
C.7 Proposed Scientific Approach
Fundamental Equations: The basic problem to be solved in GEM is the following (e.g., Rundle 1988a): Given a network of faults embedded in an earth with a given rheology, subject to loading by distant stresses, the state of slip s(x,t) on a fault at (x,t) is determined from the equilibrium of stresses according to Newton's Laws:

$$ \frac{\partial s(\mathbf{x},t)}{\partial t} = F\Big[ \sum_i \sigma_i \Big] \qquad (1) $$

where F[] is a nonlinear functional and the sum represents all stresses acting. These stresses include 1) the interaction stress sigma_int[x,t; s(x',t'); p] provided by transmission of stress through the earth's crust, due both to background tractions p applied to the earth's crust and to slip on other faults at other sites x' at times t'; 2) the cohesive fault frictional stress sigma_f[x,t; s(x,t)] at the site (x,t) due to processes arising from the state of slip s(x,t); and 3) other stresses, such as those due to dynamic stress transmission and inertia. The transmission of stress through the earth's crust involves both dynamic effects arising from the transient propagation of seismic waves and static effects that remain after wave motion has ceased. Rheologic models for the earth's crust between faults are all linear in nature and will include (e.g., Rundle and Turcotte, 1993) 1) a purely elastic material on both long and short time scales; 2) a material whose instantaneous response is elastic but whose long term deformation involves bulk flow (viscoelastic); and 3) a material that is again elastic over short times, but whose long term response involves stress changes due to the flow of pore fluids through the rock matrix (poroelastic).

Green's Functions: Focusing on GEM models that have a linear interaction rheology between the faults implies that the interaction stress can be expressed as a spatial and temporal convolution of a stress Green's function T_ijkl(x-x', t-t') with the slip deficit variable phi(x,t) = s(x,t) - Vt, where V is the far field motion of the crust driving slip on the fault. Once the slip deficit is known, the displacement Green's function G_ijk(x-x', t-t') can be used to compute, again by convolution, the deformation anywhere in the surrounding medium exterior to the fault surfaces (see Rundle 1988a for details).

In the first implementation of GEM models, we will further specialize to the case of quasistatic interactions, even during the slip events, although we plan to include elastic waves and inertia during synthetic earthquakes in the future (e.g., Aki and Richards, 1980; Zheng et al., 1995; Kanamori, 1993; Beroza, 1995; Jordan, 1991; Imhle et al., 1993; Perrin & Rice; Shaw). Recent work has shown that many important features of earthquakes and slip evolution on faults can be reproduced without including waves (see Rundle, 1988; Rundle & Klein, Nonlin. Proc. Geophys.; Ben-Zion & Rice). Examples of these features include statistics (e.g., the traveling density wave models; Carlson & Langer; Ben-Zion & Rice; Shaw; Fisher; Dahmen), characteristics of source-time functions (Rundle & Klein), and space-time slip patterns (Rundle 1988). Moreover, observational evidence supports the hypothesis that simulations carried out without including inertia and waves will still have significant physical meaning. For example, Kanamori and Anderson (1975; 1997 AGU) estimated that the seismic efficiency eta, which measures the fraction of energy in the earthquake lost to seismic radiation, is less than 5%-10%, implying that inertial effects may be of lesser importance for initial calculations. In later GEM models, the inclusion of elastic waves, at least approximately, will be important. Inclusion of these effects will be severely limited by available computational capability, so it is anticipated that it may be practical to include only the longest wavelength, or largest spatial scale, effects. This computational plan is consistent with our philosophical approach of focusing on the largest scales ("biggest picture") first.

In quasistatic interactions, the time dependence of the Green's function typically enters only implicitly through time dependence of the elastic moduli (see, e.g., Lee, 1955). Because of the linearity property, the fundamental problem is reduced to that of calculating the stress and deformation Green's functions for the rheology of interest. For materials that are homogeneous in composition within horizontal layers, numerical methods to compute these Green's functions for layered media have been developed (e.g., Okada, 1985, 1992; Rundle, 1982a,b, 1988; Rice and Cleary, 1976; Cleary, 1977; Burridge and Varga, 1979; Maruyama, 1994). The problem of heterogeneous media, especially media with a distribution of cracks too small and numerous to model individually, is often solved by using effective medium or self-consistency assumptions (Hill, 1965; Berryman and Milton, 1985; Ivins, 1995a,b). Suffice it to say that a considerable amount of effort has gone into constructing quasistatic Green's functions for these types of media, and while the computational problems present certain challenges, the methods are straightforward because the problems are linear.
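To make the quasistatic formulation concrete, the following is a minimal C++ sketch of the direct O(N^2) stress update over a set of fault segments, written against the slip deficit variable phi = s - Vt defined above. The struct layout, the schematic 1/r^3 kernel, and all names are illustrative assumptions and are not taken from the Rundle (1988) code.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal sketch: quasistatic interaction stress on each fault segment as a
// discrete sum of a stress Green's function acting on the slip deficit
// phi = s - V*t. The 1/r^3 kernel is a schematic placeholder for the
// layered-medium Green's functions cited in the text (e.g., Okada, 1992).
struct Segment {
    double x, y, z;   // segment centroid coordinates (km)
    double slip;      // cumulative slip s (m)
    double vLong;     // long-term offset rate V (m/yr)
};

// Placeholder interaction kernel between two segments.
double greensKernel(const Segment& a, const Segment& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    double r = std::sqrt(dx * dx + dy * dy + dz * dz) + 1.0;  // offset avoids r = 0
    return 1.0 / (r * r * r);                                 // schematic elastic decay
}

// Direct O(N^2) evaluation of the interaction stress on every segment at time t.
std::vector<double> interactionStress(const std::vector<Segment>& segs, double t) {
    std::vector<double> stress(segs.size(), 0.0);
    for (std::size_t i = 0; i < segs.size(); ++i) {
        for (std::size_t j = 0; j < segs.size(); ++j) {
            if (i == j) continue;
            double phi = segs[j].slip - segs[j].vLong * t;  // slip deficit of the source segment
            stress[i] += greensKernel(segs[i], segs[j]) * phi;
        }
    }
    return stress;
}
```

This double loop is exactly the O(N^2) bottleneck that the fast multipole approach described in Section C.8 is intended to remove.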
Friction Models: At the present time, there are six basic classes of friction laws that have led to computational models.
1. Two basic classes of friction models arise from laboratory experiments:
Slip Weakening model. This friction law (Rabinowicz, 1965; Bowden and Tabor, 1950; Beeler et al., 1996; Stuart, 1988; Li, 1987; Rice, 1993; Stuart and Tullis, 1995) assumes that the frictional stress at a site on the fault, sigma_f = sigma_f[s(x,t)], is a functional of the state of slip. In general, sigma_f[s(x,t)] is peaked at regular intervals.

Rate and State Dependent Friction Laws. These friction laws are based on laboratory sliding experiments in which two frictional surfaces are slid over each other at varying velocities, usually without experiencing arrest (Dieterich, 1972, 1978, 1981; Ruina, 1983; Rice and Ruina, 1983; Ben-Zion and Rice). In these experiments, the laboratory apparatus is arranged so as to be much "stiffer" than the experimental surfaces. The rate dependence of these friction laws refers to a dependence on the logarithm of sliding velocity; the state dependence refers to a dependence on one or more state variables theta_i(t), each of which follows its own relaxation equation.

2. Two classes of models have been developed and used that are based on the laboratory models, but are computationally much simpler:

Cellular Automaton (CA) models. These are widely used because they are so simple (e.g., Rundle and Jackson, 1977; Nakanishi, 1991; Brown et al., 1991; Rundle and Brown, 1991; Rundle and Klein, 1992; Ben-Zion and Rice; Rice and Ben-Zion). A static failure threshold, or equivalently a coefficient of static friction mu_S, is prescribed, along with a residual strength, or equivalently a dynamic coefficient of friction mu_D. When the stress on a site increases, either gradually or suddenly, to equal or exceed the static value, a sudden jump in slip (change of state) occurs that takes the stress at the site down to the residual value.

Velocity Weakening model. This model (Burridge and Knopoff, 1967; Carlson and Langer, 1989; Shaw) is based on the observation that frictional strength diminishes as sliding proceeds. A constant static strength sigma_f = sigma_F is used as above, after which the assumption is made that during sliding, frictional resistance must be inversely proportional to sliding velocity.

3. Two classes of models are based on the use of statistical mechanics involving the physical variables that characterize stress accumulation and failure. The basic goal is to construct a series of nonlinear stochastic equations whose solutions can be approached by numerical means:

Traveling Density Wave and Related Mean Field Models. These models (e.g., Rundle et al., 1996; Gross et al., 1996) are based on the slip weakening model. The principle of evolution towards maximum stability is used to obtain a kinetic equation in which the rate of change of slip depends on the functional derivative of a Lyapunov functional potential. This model can be expected to apply in the mean field regime of long range interactions, which is the regime of interest for elasticity in the earth. Other models in this class include those of Fisher et al. (1997) and Dahmen et al. (1997).

Hierarchical Statistical Models. Examples include the models of Allegre et al. (1982); Smalley et al. (1985); Blanter et al. (1996); Allegre and Le Mouel (1994); Allegre et al. (1996); Heimpel (1996); Newman et al. (1996); and Gross (1996). These are probabilistic models in which hierarchies of blocks or asperity sites are assigned probabilities for failure. As the level of external stress rises, the probability of failure increases, and as a site fails, it influences the probability of failure of nearby sites.
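As a concrete illustration of the cellular automaton class above, a minimal sketch of a CA friction rule follows; the thresholds and the uniform stress redistribution are assumptions for illustration only, standing in for the elastic Green's function interactions of the real models.

```cpp
#include <cstddef>
#include <vector>

// Minimal cellular automaton friction sketch: a site fails when its stress
// reaches its static threshold and drops to a residual (dynamic) level; a
// fraction of the released stress is redistributed to the other sites. The
// uniform redistribution is a schematic stand-in for the elastic interactions.
struct CASite {
    double stress;          // current shear stress at the site
    double staticThresh;    // failure threshold (static friction)
    double residualStress;  // post-failure stress level (dynamic friction)
};

// One sweep of the failure rule; returns the number of sites that slipped.
// Repeated sweeps until no site fails constitute one model "earthquake",
// and the resulting avalanches produce the scaling statistics discussed above.
int caSweep(std::vector<CASite>& sites, double couplingFraction) {
    if (sites.size() < 2) return 0;
    int failures = 0;
    for (std::size_t i = 0; i < sites.size(); ++i) {
        if (sites[i].stress >= sites[i].staticThresh) {
            double drop = sites[i].stress - sites[i].residualStress;
            sites[i].stress = sites[i].residualStress;
            ++failures;
            double share = couplingFraction * drop / static_cast<double>(sites.size() - 1);
            for (std::size_t j = 0; j < sites.size(); ++j)
                if (j != i) sites[j].stress += share;
        }
    }
    return failures;
}
```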
Model Methodology: Our general approach will be to examine the simplest models that can be shown to capture a given observable effect, because even the existence of some observed effects is often still a subject of research. One example is the debate over whether Heaton-type pulses, rather than Kostrov crack-like slip processes, characterize slip (e.g., Heaton, 1990; Kostrov, 1964). Another example is the question of whether the Gutenberg-Richter magnitude-frequency relation arises from self-organization processes on a single fault with no inherent scale (Burridge and Knopoff, 1967; Rundle and Jackson, 1977), or from a scale invariant distribution of faults (Rice, 1993; Shaw, 1997). A third example is whether inertia is needed in a model to describe the statistics and evolution of slip on faults. It was shown by Nakanishi (1990) that massless CAs can be constructed that give the same quantitative results as the massive slider block models examined by Carlson and Langer (1989). These results include not only the rupture patterns, but also the scaling statistics. A final example is whether or not to include seismic radiation. The absence of inertia does not preclude a time evolution process in the CA, with a separation of loader plate and source time scales, just as in models with inertia and waves (Gabrielov et al., 1994).
C.8 Proposed Computational Approach
The GEM Problem Solving Environment GEMCI, described in Section C.3, requires several technology components. One major component is the set of detailed simulation modules for the variety of physics and numerical approaches discussed in the preceding sections. This includes the non-local equation solver and physics/friction modules (GEMCI.2,3). The fast multipole, statistical mechanics and cellular automata subsystems will need state-of-the-art algorithms and parallel implementations. These will be built as straightforward MPI-based parallel systems with the overall modular structure implied by our proposed Seismic Framework.
Estimate of Computational Resources Needed:
The basic degrees of freedom in GEM correspond to fault segments, with the independent variables corresponding to the slip vector s(i,t) (or s(x,t)) or, more precisely, the slip deficit phi(i,t) = s - Vt, where V is the long term rate of offset. Here the slip deficit represents the deviation from the average long term movement of the fault. In the baseline calculation (Rundle 1988 -- ftp:/fractal/users/ftp/pub/Viscocodes/Virtual_California on fractal.colorado.edu) there were N = 80 fault segments which averaged 30 km (horizontal) by 18 km (vertical) in size. For 5000 years of simulation data on this fault system, execution time was about 20 minutes on a 90 MHz SPARC 5 workstation. In a major realistic simulation, one would expect to need time resolutions of seconds during slip events, and segment sizes of 100 meter resolution as the initial goal in critical regions. This indicates that one needs several (perhaps up to 100) million segments, and so scopes the problem as of multi-teraflop proportions. In fact, a 10 meter resolution (corresponding to a local magnitude of about 0) may eventually be important, with a corresponding two order of magnitude increase in the number of degrees of freedom.

In more recent simulations of long range single fault models (Rundle et al., 1996; Klein et al., 1996; Ferguson, 1996), networks of sites much larger than Rundle's original simulations have been modeled. Runs of 100 x 100 sites, with individual sites interacting with 21^2 - 1 = 440 neighbors, have been carried out for 10,000 earthquakes at a time. These runs typically take of the order of several hours on a Pentium 150 MHz single processor desktop machine. Models with 256 x 256 sites, in which all sites interact with each other, have been run for 10,000 earthquakes. Making use of a simple Fourier transform algorithm, these runs take several hours to complete on a 128-node CM-5, or on the current Boston University Power Challenge system. In real time, this would correspond to as much as several years of seismicity on a fault system the size of southern California.

Caltech, Colorado and Syracuse have started building the necessary high performance non-local equation solver modules. The Green's function approach is formulated numerically as a long-range N-body problem, and we are parallelizing the direct sum using well-known techniques. However, one cannot reach the required level of resolution without switching from the O(N^2) approach to one of the O(N) or O(N log N) approaches. As in other fields, this can be achieved by dropping or approximating the long range components and implementing an essentially nearest neighbor algorithm. More attractive, however, is to formulate the problem as interacting dipoles and adapt existing fast multipole technology developed for particle dynamics problems. We have already produced a prototype general purpose "fast multipole template code" by adapting the very successful work of Salmon and Warren. These codes have already simulated over 100 million particles on ASCI facilities, and so we expect these parallel algorithms to efficiently support the problem sizes needed by GEM.
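To make the scale of these estimates explicit, a rough back-of-the-envelope calculation is given below; the assumed cost of roughly 50 floating point operations per Green's function evaluation is an illustrative placeholder, not a measured figure.

```latex
% Illustrative scaling estimate (assumed constants, not measurements)
\begin{align*}
N &\approx 10^{7} \ \text{fault segments (100 m resolution goal)} \\
\text{direct sum} &\sim N^{2} \approx 10^{14} \ \text{pair interactions per stress update} \\
 &\approx 10^{14} \times 50 \ \text{flops} = 5 \times 10^{15} \ \text{flops per update} \\
\text{tree/multipole} &\sim N \log_{2} N \approx 10^{7} \times 23 \approx 2 \times 10^{8} \ \text{interactions per update}
\end{align*}
```

Even with the O(N log N) reduction, the many stress updates required per simulated earthquake cycle, the variety of friction laws to be compared, and ensemble runs are what keep the aggregate requirement in the one to 100 teraflop range quoted in Section C.3.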
Multipolar Representation of Fault Systems: A primary key to a successful implementation of GEM models of systems of faults will be to utilize a computationally efficient set of algorithms for updating the interactions between faults and fault segments. These would normally be of order N*N for times between earthquakes, and possibly worse for segments of faults experiencing an earthquake. As discussed above, the nature of the quasistatic interactions is such that the Green's functions T_ijkl and G_ijk for linear elasticity have a simple time dependence. Moreover, the Green's functions for linear viscoelasticity and for linear poroelasticity can be obtained from the elastic Green's functions using the Principle of Correspondence (e.g., Lee, 1955; Rundle 1982a,b). These simplifications strongly suggest that an approach based on multipole expansions (Goil, 1994; Goil and Ranka, 1995) will be the most computationally efficient algorithm.

In fact, the stress and displacement Green's functions T_ijkl and G_ijk represent the tensor stress and vector displacement at x due to a point double couple located at x' (see, for example, Steketee, 1958). The orientation of the equivalent fault surface normal vector, and of the vector displacement on that fault surface, are described by the indices i and j. Integration of T_ijkl and G_ijk over the fault surface then corresponds to a distribution of double couples over the fault surface. For that reason, representation of the stress over segments of fault in terms of a multipolar expansion is the natural basis to use for the GEM computational problem. Details of the computational implementation of this approach will be described in the following sections.

General N-Body Problems: In an N-body simulation, the phase-space density distribution is represented by a large collection of "particles" which evolve in time according to some physically motivated force law. Direct implementation of these systems of equations is a trivial programming exercise: it is simply a double loop, it vectorizes well, and it parallelizes easily and efficiently. Unfortunately, it has an asymptotic time complexity of O(N^2); each of the N left-hand sides is a sum of N-1 right-hand sides. The fact that the execution time scales as N^2 precludes a direct implementation for values of N larger than a few tens of thousands, even on the fastest parallel supercomputers. Special purpose machines (such as GRAPE) have been successfully used, but these do not seem appropriate in an evolving field like GEM while we are still in the prototyping phase.

Multipole Methods: Several methods have been introduced which allow N-body simulations to be performed on arbitrary collections of bodies in time much less than O(N^2), without imposition of a lattice (Appel, 1985; Barnes and Hut, 1986; Greengard and Rokhlin, 1987; Anderson, 1992). They have in common the use of a truncated expansion to approximate the contribution of many bodies with a single interaction. The resulting complexity is usually O(N) or O(N log N), which allows computations using orders of magnitude more particles. These methods represent a system of N bodies in a hierarchical manner by the use of a spatial tree data structure. Aggregations of bodies at various levels of detail form the internal nodes of the tree (cells). Making a good choice of which cells interact and which do not is critical to the success of these algorithms (Salmon and Warren, 1994). N-body simulation algorithms which use adaptive tree data structures are referred to as "treecodes" or "fast multipole" methods. By their nature, treecodes are inherently adaptive and are most applicable to dynamic problems with large density contrasts, while fast multipole methods have been mostly non-adaptive and applied to fairly uniform problems; it is likely that this distinction will blur in the future, however.

The fundamental approximation employed by a gravitational treecode may be stated as:

$$ \sum_j \frac{G\, m_j\, \mathbf{d}_{ij}}{|\mathbf{d}_{ij}|^3} \;\approx\; \frac{G\, M\, \mathbf{d}_{i\,cm}}{|\mathbf{d}_{i\,cm}|^3} \;+\; \text{higher order multipoles} $$

where d_{i cm} = x_i - x_cm, x_cm is the centroid of the particles that appear under the summation on the left-hand side, and M is their total mass. These Newtonian or Coulomb interactions are a special case of a more general formulation that can be applied to any pairwise interaction or Green's function. Even short-range interactions, i.e., those which can be adequately approximated as vanishing outside some finite radius, can be treated within this framework. Members of the GEM collaboration have established the fundamental mathematical framework for determining error bounds for use with arbitrary force laws (Salmon and Warren, 1994; Warren and Salmon, 1995). This error bound is essentially a precise statement of several intuitive ideas which determine the accuracy of an interaction: the multipole formalism is more accurate when the interaction is weak, when it is well approximated by its lower derivatives, when the sources are distributed over a small region, when the field is evaluated near the center of the local expansion, when more terms in the multipole expansion are used, or when the truncated multipole moments are small.

Computational Algorithms: One of two tasks generally takes the majority of the time in a particle algorithm: (1) finding neighbor lists for short range interactions, or (2) computing global sums for long-range interactions. These sub-problems are similar enough that a generic library interface can be constructed to handle all aspects of data management and parallel computation, while the physics-related tasks are relegated to a few user-defined functions. One of our primary computational research objectives will be to define and implement a series of generally useful, independent modules that can be used singly, or in combination, to domain decompose, load balance, and provide a useful data access interface to particle and spatial data. In combination, these computational modules will provide the backbone for the implementation of our own (and hopefully others') algorithms. These algorithms imply very complex data structures, and we intend to continue to use explicit message passing (MPI) as the natural software model. In contrast, the direct N-body methods are straightforward to implement in HPF.
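The following is a minimal sketch of the core treecode idea described above: a Barnes-Hut style opening-angle test in which a sufficiently distant cell is replaced by its monopole (total source strength at the centroid). The cell layout, the acceptance parameter theta, and the schematic field law are generic assumptions and are not taken from the Salmon-Warren template code.

```cpp
#include <cmath>
#include <vector>

// Minimal Barnes-Hut-style sketch of the treecode approximation: a cell that
// is far enough from the evaluation point is replaced by its monopole
// (total source strength at its centroid); otherwise its children are opened.
// For brevity a leaf cell is also approximated by its monopole here; a real
// code would sum over the leaf's particles or fault segments directly.
struct Cell {
    double cx, cy, cz;          // centroid of the sources in the cell
    double totalStrength;       // summed source strength (mass, moment, ...)
    double size;                // linear extent of the cell
    std::vector<Cell> children; // empty for a leaf
};

// Schematic monopole contribution to the field at (x, y, z).
double monopoleField(const Cell& c, double x, double y, double z) {
    double dx = x - c.cx, dy = y - c.cy, dz = z - c.cz;
    double r2 = dx * dx + dy * dy + dz * dz + 1e-12;  // softening avoids r = 0
    return c.totalStrength / r2;
}

// Recursive tree walk using the opening-angle criterion size / distance < theta.
double treeWalk(const Cell& c, double x, double y, double z, double theta) {
    double dx = x - c.cx, dy = y - c.cy, dz = z - c.cz;
    double dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (c.children.empty() || c.size < theta * dist)
        return monopoleField(c, x, y, z);   // accept the multipole approximation
    double sum = 0.0;
    for (const Cell& child : c.children)    // otherwise open the cell
        sum += treeWalk(child, x, y, z, theta);
    return sum;
}
```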
Application of Fast Multipole Methods to GEM: Fast multipole methods have already been applied outside the astrophysics and molecular dynamics areas; in particular, the Caltech and Los Alamos groups have successfully used them for the vortex method in computational fluid dynamics. There are several important differences between GEM and current fast multipole applications, which we briefly discuss below.

In the full GEM implementation, we have a situation like the conventional O(N^2) N-body problem, but there are many important differences. For instance, the critical dynamics -- namely earthquakes -- are found by examining the stresses at each time step to see if the friction law implies that a slip event will occur. As discussed earlier, there are many different proposals for the friction law, and the computational system needs to be set up to flexibly change and compare results from different friction laws. The analogies with statistical physics are seen by noting that earthquakes correspond to large-scale correlations, with up to perhaps a million 10-to-100 meter segments slipping together. As in critical phenomena, clustering occurs at all length scales, and we need to examine this effect computationally. We find differences with the classical molecular dynamics N-body problems not only in the dynamical criteria of importance but also in the dependence of the Green's function (i.e., the "force" potential) on the independent variables. An area of importance, which is still not well understood in current applications, will be the use of spatially dependent time steps, with smaller values needed in earthquake regions. An important difference between true particles and the GEM case is that in the latter the fault positions are essentially fixed in space, whereas a major challenge in astrophysics is the time dependent gravitational clustering of particles; it is not yet clear to what extent this simplification can be exploited in GEM. We believe a major contribution of this project will be an examination of the software and algorithmic issues in this area, together with the integration of data and computational modules. We will demonstrate that the use of fine grain algorithmic templates combined with a coarse grained distributed object framework can allow a common framework across many fields.
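To illustrate how the non-local solver, the pluggable friction laws, and the spatially or temporally adaptive stepping discussed above are intended to fit together, a deliberately simplified driver loop is sketched below. All of the interfaces and names are illustrative assumptions, not the GEMCI design.

```cpp
#include <functional>
#include <vector>

// Schematic quasistatic driver: advance tectonic loading, evaluate the
// non-local stresses, and let a pluggable friction rule decide which segments
// slip. The time step shrinks while an event is in progress. Interfaces are
// illustrative only.
struct FaultState {
    std::vector<double> stress;  // current stress per segment
    std::vector<double> slip;    // cumulative slip per segment
    double time = 0.0;
};

// e.g. a direct O(N^2) sum or a treecode evaluation of the Green's functions
using StressSolver = std::function<void(FaultState&)>;
// returns true and applies the slip if segment i fails under the friction law
using FrictionRule = std::function<bool(FaultState&, int)>;

void evolve(FaultState& s, StressSolver solveStress, FrictionRule friction,
            double tEnd, double dtLoading, double dtEvent) {
    while (s.time < tEnd) {
        solveStress(s);  // non-local Green's function update of all stresses
        bool anySlip = false;
        for (int i = 0; i < static_cast<int>(s.stress.size()); ++i)
            if (friction(s, i)) anySlip = true;
        // Seconds-scale steps during slip events, years-scale steps between them.
        s.time += anySlip ? dtEvent : dtLoading;
    }
}
```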
C.9 GEMCI Software Environment
We described our general approach in Section C.3 and have already detailed our overall approach (GEMCI.7) and the critical non-local equation solver techniques (GEMCI.2). Here we sketch key features of the other components.
GEMCI.1: User Interface
This will include a Javabean applet to control execution of the computational modules. Through Java's introspection mechanism, this applet will support the Seismic Framework with methods to get and set parameter values and to invoke the distributed executable objects. NPAC has substantial experience with this technology, which provides a well-defined way of building seamless interoperable interfaces. Note that although we happen to be very interested in using Java as the coding language for scientific applications, that is not our intention here. Rather, a Java front-end will access Fortran, C or C++ (plus MPI) modules, which are wrapped as CORBA or equivalent objects. The Java front-end will support an interactive 2D or 3D map on which one can specify the geometrical properties of the faults and their numerical representation. The system will support access to computational objects as well as to those corresponding to data and visualization resources.
GEMCI.3: Local Physics and Friction Modules
It would be attractive to represent these physics modules in a sophisticated object-oriented framework. This would be possible if we adopted approaches such as Legion or POOMA, but we choose a simpler approach here. We build the equation solvers by elaborating templates, with the physics modules plugged in through defined subroutine interfaces that allow the modules to be used interchangeably. The Seismic Framework will provide interface specifications that identify not only the module to use but also its parameters. These modules are local, and hence sequential; they must achieve high performance, and we expect to code them in Fortran or C.
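As an illustration of the kind of defined subroutine interface meant here (shown in C++ rather than Fortran or C for brevity), a minimal sketch follows; the class names, parameter keys and the threshold rule are hypothetical examples, not the Seismic Framework specification.

```cpp
#include <map>
#include <string>

// Hypothetical plug-in interface for a local friction module, illustrating the
// "defined subroutine interface" idea: every friction law exposes the same
// calls so the non-local equation solver can use the modules interchangeably.
class FrictionModule {
public:
    virtual ~FrictionModule() = default;
    // Configure the module from framework-supplied named parameters.
    virtual void setParameters(const std::map<std::string, double>& params) = 0;
    // Given the stress on a segment, return the slip increment (0 if no failure).
    virtual double slipIncrement(double stress, double slipState) const = 0;
};

// Example concrete module: the simple static/dynamic threshold (CA) rule.
class ThresholdFriction : public FrictionModule {
    double staticThresh = 1.0, residual = 0.2, stiffness = 1.0;
public:
    void setParameters(const std::map<std::string, double>& p) override {
        if (p.count("static"))    staticThresh = p.at("static");
        if (p.count("residual"))  residual     = p.at("residual");
        if (p.count("stiffness")) stiffness    = p.at("stiffness");
    }
    double slipIncrement(double stress, double /*slipState*/) const override {
        return stress >= staticThresh ? (stress - residual) / stiffness : 0.0;
    }
};
```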
GEMCI.4: Evaluation, Data analysis and Visualization
As our simulations grow in fidelity, we expect to need increasingly sophisticated visualization capabilities and will base these on the experience of other Grand Challenge projects. We must support both distributed PC and workstation visualization as well as the high-end capabilities at major sites such as Boston and SDSC. This area will be led by our partner SDSC.
NPAC has developed a sophisticated collaborative environment, TangoInteractive (http://www.npac.syr.edu/tango), which will be available to support remote interactions among the GEM community. TangoInteractive can be considered as technology to share distributed objects within a rich interactive environment allowing shared text, white-boards and audio-video interactions. The Seismic Framework will facilitate the use of TangoInteractive, as it is particularly straightforward to share objects (TangoBeans) supporting the Javabean design rules. Further, NCSA has developed a prototype collaborative visualization system using TangoInteractive, and this will be available in production mode for the purposes of this proposal. This could, for instance, enable one group using a high-end ImmersaDesk to share visualizations with a remote site running systems like SciVis (http://kopernik.npac.syr.edu:8888/scivis/index.html) on a PC. This will enable collaborative distributed computational steering of GEM simulations.
GEMCI.5: Data Storage, Indexing and Access (This awaits completion of C.10 and could be included there)
The general model for data as a Persistent Distributed Object is illustrated in the Seismic Framework by using standard relational databases with JDBC (Java Database Connectivity) and CORBA front-ends. This allows convenient access from the user interface and linkage using standard commercial technology.
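A minimal sketch of the JDBC access path mentioned above follows; the table name (fault_segments), column names, and connection URL are hypothetical placeholders, since the actual schema will be defined with the data committee described in C.10.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Query fault-geometry records from a standard relational database; the
    // same JDBC/CORBA front-end would serve the user interface and the
    // computational modules.
    public class FaultCatalogQuery {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:postgresql://localhost/gem";  // illustrative URL; requires a JDBC driver
            try (Connection conn = DriverManager.getConnection(url, "gem", "password");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT name, strike, dip, slip_rate FROM fault_segments WHERE region = ?")) {
                stmt.setString(1, "southern_california");
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%s strike=%.1f dip=%.1f slip_rate=%.2f mm/yr%n",
                                rs.getString("name"), rs.getDouble("strike"),
                                rs.getDouble("dip"), rs.getDouble("slip_rate"));
                    }
                }
            }
        }
    }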
GEMCI.6: Complex Systems (Pattern Dynamics) Environment
An important feature of GEM is that it will produce both ab initio simulations and predictive systems that link data with patterns abstracted from the simulations. The latter is an interactive Rapid Prototyping environment for developing new phenomenological models together with their analysis and visualization. It thus has somewhat different trade-offs from the core simulations, in that interactivity is perhaps more critical than performance. Apart from this difference, we can view the pattern dynamics module as "just" another execution module, an alternative to those of GEMCI.2 and GEMCI.3, integrated into the same User Interface, Data Access and Visualization subsystems. We will explore using Java applets to develop the complex systems environment, but this will still be consistent with the overall Seismic Framework.
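The sketch below (hypothetical interface and class names) illustrates this design choice: if both the core solvers and the pattern dynamics environment implement one common execution-module interface, they can share the same user interface, data access and visualization subsystems.

    import java.util.Map;

    // Common contract that the Seismic Framework could impose on every
    // execution module, whether a core solver or the pattern dynamics tool.
    public interface ExecutionModule {
        String name();
        void configure(Map<String, String> parameters);
        void run();
    }

    // The pattern dynamics environment as "just another" execution module.
    class PatternDynamicsModule implements ExecutionModule {
        public String name() { return "PatternDynamics"; }
        public void configure(Map<String, String> parameters) {
            // e.g. choose phase-space or machine-reconstruction settings (placeholder)
        }
        public void run() {
            // interactive analysis of simulation output would go here (placeholder)
        }
    }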
C.10 Calibration and Validation of Simulations with Earthquake Data (GCF Thinks that this section does not describe how the data is to be used and linked to the simulations. This should be added and necessary changes made to GEMCI.5,6)
The collection of new observational and laboratory data is not the focus of this proposal; instead, data are viewed here as a means of validating the simulations. We plan to depend on the data collection and archival activities of the existing Southern California Earthquake Center (SCEC), the new Pacific Earthquake Engineering Research (PEER) center, and the planned California Earthquake Research Center (CERC). The GEM team expects, however, that recommendations for new data collection activities will emerge as a natural outgrowth of the simulations, and that an interesting new feedback loop will develop between observationalists and modelers as a result of the GEM project.
Management of Earthquake Data: Primary responsibility for earthquake data collection and archiving lies with SCEC, PEER and CERC, as well as the Seismological Laboratory of the California Institute of Technology and the Pasadena field office of the United States Geological Survey. Data in these archives are of several kinds: 1) broadband seismic data from the TERRASCOPE array owned and operated by Caltech; 2) continuous (Southern California Integrated GPS Network) and "campaign-style" geodetic data; 3) paleoseismic data obtained from historic and field trenching studies detailing the history of slip on the major faults of southern California; 4) near-field strong-motion accelerograms of recent earthquakes (the last two decades); 5) field structural geologic studies and reflection/refraction data indicating the orientations and geometries of the major active faults; and 6) other sparsely collected kinds of data, including pore fluid pressure, in situ stress, and heat flow data. These data will be used, for example, to update the fault geometry models developed under the GEM project, and to update fault slip histories that can in turn be used to validate the various earthquake models developed under GEM. Primary responsibility for interacting with elements of this database will be given to a committee chaired by Kanamori and Jordan, including a variety of other experts.
A new and extremely promising type of geodetic data is Synthetic Aperture Radar (SAR) interferometry, which permits "stress analysis of the earth". A number of SAR missions are currently taking data over the southern California region, including the C-band (5.8 cm) European ERS-1/2 satellites and the L-band Japanese JERS satellite. These missions have already produced revolutionary images of the complete deformation fields associated with major earthquakes in the United States (e.g., Massonnet et al., 1993) and Japan. With these techniques, interferograms are constructed that represent the deformation field at a resolution of a few tens of meters, over areas of tens of thousands of square kilometers, and over time intervals of weeks to years. We are now able to see essentially the complete surface deformation field due to an earthquake, or even due to the interseismic strain accumulation process. In other examples, current ERS images taken over the Los Angeles basin clearly show deformation associated with fluid withdrawal-induced subsidence and other natural and man-induced processes (G. Peltzer, personal comm., 1996). Other recently compiled SAR data sets clearly show evidence of crustal pore fluid-induced deformation following the June 1992, magnitude ~7 Landers earthquake. These data promise to revolutionize earthquake science. Primary responsibility for interacting with the SAR data will be given to Minster, who is the team leader for the NASA Earth System Science Pathfinder proposal to orbit an L-band interferometric synthetic aperture radar in the near future.
DRAFT PERFORMANCE GOALS (1 Page!)
Year 1 Major Activities:
Earthquake Physics:
1. Level 0 simulations based on the existing codes of Rundle (1988), with 3D geometry, viscoelastic rheology, and algorithms for CA, TDW, and Rate & State friction interfaces
2. Establishment of basic specifications for GIS-type overlays of simulation outputs upon data
3. Use of existing databases to establish the basic model parameters, including major fault geometries.
4. Analyzing fault interactions to understand effects of screening and frustration
Computational Science, Software Support & System Integration:
1. Quasistatic Green's functions for other kinds of faults, and establishment of their basic multipolar representations
2. Prototype the fast multipole method with changes needed for GEM
3. Prototype optimal approaches for CA - type, TDW and Rate & State computations
4. Develop Seismic Framework with initial user interface and visualization subsystems.
Year 2 Major Activities:
Earthquake Physics:
1. Level I simulations with evolving fault geometries, shear & tensional fractures
2. First calculations with inertia and waves
3. Pattern evaluation and analysis techniques using phase space reconstruction, and machine reconstruction, and other techniques
4. Systems analysis of faults, and analysis of nonplanar geometries
Computational Science, Software Support & System Integration:
1. Develop and use a simple brute-force O(N^2) TDW and Rate & State solution system with fixed time and variable spatial resolution, based on adaptive methods
2. Test initial parallel multipole schemes with machine benchmarking
3. Incorporate the multipole solver, on an ongoing basis, with friction laws and multiresolution time steps.
Year 3 Major Activities:
Earthquake Physics:
1. Protocols for calibration and validation of full-up simulation capability, numerical benchmarking, scaling properties of models (with SCEC, PEER, CERC)
2. Protocols for assimilation of new data types into models (SCEC, PEER, CERC)
3. Further analysis and cataloguing of patterns, evaluation of limits on forecasting and predictability of simulations
4. Define requirements for future simulations, transfer technology to third parties, and conduct outreach to local, state, and other government agencies as appropriate
Computational Science, Software Support & System Integration:
1. Develop and implement an operational fast multipole system within the full GEMCI
2. Investigate and prototype a full time-dependent multipole method
3. Fully integrated GEMCI supporting large scale simulations, data access and pattern dynamics analysis.
DRAFT MANAGEMENT PLAN
We base our management plan on experience with 1) current Grand Challenges and the well-known NSF STC center CRPC (Center for Research in Parallel Computation), of which Syracuse and Caltech are members; and 2) the Southern California Earthquake Center (an NSF STC), for which Rundle chairs the Advisory Council. Rundle, the PI, will have full authority and responsibility for making decisions on appropriate directions for the GEM KDI project. In particular, he will approve the budgets and work plans of each contractor and subcontractor; these must be aligned with the general and specific team goals. The PI will be advised by an executive committee made up of a subset of the PIs representing the key subareas and institutions. This committee will meet in person approximately every six months and use the best available collaboration technologies for other discussions. Fox will have particular responsibility for overall integration of the computer science activities. Other subareas include Statistical Physics Models (Klein), Cellular Automata (Giles), Data Analysis & Model Validation (Jordan), Parallel Algorithms (Salmon), and Visualization (ADD NPACI). The expectation is that the executive committee will operate on a consensus basis. Note that the goals of the KDI project are both Scientific (simulation of Earth Science phenomena) and Computational (development of an object-based Problem Solving Environment). Both goals will be respected in all planning processes, and contributions in both areas will be valued as key parts of the project's mission.
The executive committee will be expanded to a full technical committee comprising at least all the funded and unfunded PIs. The technical committee will be responsible for developing the GEM plan, which will be discussed in detail at least every 12 months at the major annual meeting we intend to hold for scientists both inside and outside the project. In addition to this internal organization, we expect that DOE will naturally set up an external review mechanism. However, we also suggest that a GEM external advisory committee, consisting of leading Earth and computer scientists, be set up to attend GEM briefings and advise the PI on changes of direction and emphasis.
Finally, while the methods to be developed in the proposed work will be generally applicable, the initial modeling focus will be on one of the best-studied earthquake fault systems in the world, southern California. For that reason, close cooperation and coordination are envisaged with the staff and management of the Southern California Earthquake Center. Coordination will be greatly facilitated since Rundle (PI) is Chair of the SCEC Advisory Council, Jordan (Co-I) is a member of the Advisory Council, Minster (Co-I) is Chair of the Board of Directors, and Stein (Co-I) is an active member of SCEC.
INSTITUTIONAL RESOURCE COMMITMENTS
TBD !!