Development of an Object Oriented Simulation System 
with Application to Earthquake Physics in Southern 
California

An Investigation in the
General Earthquake Model (GEM) Project

Principal Investigators:

	John B. Rundle, University of Colorado, Boulder, CO, Lead Investigator
	Geoffrey Fox, NPAC, Syracuse University, Syracuse, NY, Co-Lead Investigator
	William Klein, Boston University, Boston, MA 
	John Reynders, ACL, Los Alamos National Laboratory, Los Alamos, NM

All Investigators:

	Earth Scientists:
	Thomas H. Jordan, Massachusetts Institute of Technology, Cambridge, MA
	Hiroo Kanamori, California Institute of Technology, Pasadena, CA
	Karen Carter Krogh, EES, Los Alamos National Laboratory, NM
	Christopher J. Marone, Massachusetts Institute of Technology, Cambridge, MA
	J. Bernard Minster, University of California, San Diego, CA
	John B. Rundle, University of Colorado, Boulder, CO
	Ross Stein, United States Geological Survey, Menlo Park, CA
	Donald Turcotte, Cornell University, Ithaca, NY

	Condensed Matter Scientists:
	William Klein, Boston University, Boston, MA
	James Crutchfield, Santa Fe Institute, Santa Fe, NM

	Computer/Computational Scientists:
	Geoffrey Fox, NPAC, Syracuse University, Syracuse, NY
	Julian Cummings, ACL, Los Alamos National Laboratory, Los Alamos, NM
	Roscoe Giles, Boston University, Boston, MA
	Steven Karmesin, ACL, Los Alamos National Laboratory, Los Alamos, NM
	Paul Messina, CACR, California Institute of Technology, Pasadena, CA
	John Salmon, CACR, California Institute of Technology, Pasadena, CA
	Bryant York, Northeastern University, Boston, MA
	John Reynders, ACL, Los Alamos National Laboratory, Los Alamos, NM
	Michael Warren, ACL, Los Alamos National Laboratory, Los Alamos, NM
	Tim Williams, ACL, Los Alamos National Laboratory, Los Alamos, NM

	Unfunded Collaborators:
	Claude Allegre, Institute de Physique du Globe, Paris, France
	Shawn Larsen, Lawrence Livermore National Laboratory, Livermore, CA
	Lawrence Hutchings, Lawrence Livermore National Laboratory, Livermore, CA
	William Foxall, Lawrence Livermore National Laboratory, Livermore, CA
	Charles Carrigan, Lawrence Livermore National Laboratory, Livermore, CA




Investigator Addresses





John B. Rundle
CIRES CB 216
University of Colorado
Boulder, CO 80309

Geoffrey Fox
Department of Physics
Syracuse University
Syracuse, NY 13244

William Klein
Roscoe Giles
Department of Physics
Boston University
Boston, MA  02215

John Reynders
Tim Williams
Julian Cummings
Steven Karmesin
Karen Carter Krogh
Michael Warren
Advanced Computing Lab
Los Alamos National Lab
Los Alamos, NM 87545


James Crutchfield
Santa Fe Institute
1399 Hyde Park Road
Santa Fe, NM 87501

Thomas Jordan
Christopher Marone
Department of EAPS, MIT
Cambridge, MA  02139

Hiroo Kanamori
Seismological Lab
California Inst. of Technology
Pasadena, CA  91125

J. Bernard Minster
IGPP - SIO
University of California
La Jolla, CA  92093

John Salmon
Department of Physics, CACR
California Inst. of Technology
Pasadena, CA  91125

Ross Stein
Earthquake Hazards Team
United States Geological Survey
Menlo Park, CA  94025


Paul Messina
John Salmon
CACR
California Inst. of Technology
Pasadena, CA 91125

Donald Turcotte
Dept. of Geology
Cornell Univ., Ithaca, NY  
14853

Bryant York
College of Computer Science
Northeastern University
Boston, MA  02115

Claude Allegre
IPG Paris, 4 Place Jussieu
Tour 14-24, 3e etage
75252 Paris Cedex 05
France

Shawn Larsen
Lawrence Hutchings
William Foxall
Charles Carrigan
Lawrence Livermore Nat. Lab.
Livermore, CA  94550




A.	Proposal Abstract:

	Our primary objective is to develop the computational capability to carry out large 
scale numerical simulations of the physics of earthquakes in southern California and 
elsewhere.  These simulations will produce temporal and spatial patterns of earthquakes, 
surface deformation and gravity change, seismicity, stress, as well as, in principle, other 
variables, including pore fluid and thermal changes, for comparison to field and laboratory 
data.  To construct the simulations, we will develop a state-of-the-art problem solving 
environment that will facilitate: 1) Construction of numerical and computational algorithms 
and specific environment(s) needed to carry out large scale simulations of these nonlinear 
physical processes over a geographically distributed, heterogeneous computing network;  
and 2) Development of a testbed for earthquake "forecasting" & "prediction" methodologies 
which uses modern Object Oriented techniques and scalable systems, software and 
algorithms that are efficient both in human effort and in computational execution time.  The GEM 
approach will allow the physics of large networks of earthquake faults to be analyzed 
within a general theoretical framework for the first time.  It will stimulate changes in 
earthquake science in much the same way that General Circulation Models have changed 
the way atmospheric science is carried out.  The computational techniques developed by the 
project will have important applications in many other large, computationally hard 
problems, such as 1) Large particle-in-cell systems in ASCI; 2) Statistical physics 
approaches to random field spin systems; and 3) Simulating large neural networks with 
learning and cognition.


   



B. Table of Contents

A.	Proposal Abstract								3
B. 	Table of Contents								4
C. 	Project Description

		1	One Page Executive Summary for Reviewers			5
		2	The GEM Grand Challenge						6
		3	Introduction to the GEM Approach					11
		4	Recent Observations of Earthquakes with Significance for GEM	13
		5	Scientific Framework for GEM					14
		6	Data Issues for Validation of Simulations				19
		7	Overview of Computational Science Issues				26
		8	Current Status of Earthquake Simulation Research			27
		9	Project Plan for the GEM Grand Challenge				32
		10	Individual Work Statements						37
		
D.	Letter of Support from David Jackson, Science Director for
	the Southern California Earthquake Center				45
E. 	Bibliography									46
F. 	Computational Facilities							57
G. 	Project Personnel								59
H.	Current and Pending Support						61
I. 	Budget										68

C	PROJECT DESCRIPTION
	C.1	One Page Executive Summary for Reviewers:

Objectives:  The primary objective is to develop the computational capability to carry out 
large scale numerical simulations of the physics of earthquakes in southern California and 
elsewhere.  To meet this objective, we will develop a state of the art problem solving 
environment that will facilitate: 1) The construction of numerical and computational 
algorithms and specific environment(s) needed to carry out large scale simulations of these 
nonlinear physical processes over a geographically widely distributed, heterogeneous 
computing network; and 2) The development of a testbed for earthquake "forecasting" & 
"prediction" methodologies which uses modern Object Oriented techniques and scalable 
systems, software and algorithms that are efficient both in human effort and in computational 
execution time.

 Method:  We will base our work on numerical simulation techniques initially developed 
by Stuart (1986), Rundle (1988) and Ward and Goes (1993), using these efforts as starting 
points to model the physics of earthquake fault systems in southern California.  The 
problem solving environment will be built using the best parallel algorithms and software 
systems.  It will leverage DOE and other state of the art national activities in simulation of 
both cellular automata and large scale particle systems.

Scientific and Computational Foci:  We will focus on developing the capability to 
carry out large scale simulations of complex, multiple, interacting fault systems using a 
software environment optimized for rapid prototyping of new phenomenological models.  
The software environment will require:  1) Developing algorithms for solving 
computationally difficult nonlinear problems involving ("discontinuous") thresholds and 
nucleation events in a networked parallel (super) computing environment, adapting new 
"fast multipole" methods previously developed for general N-body problems; 2) 
Leveraging the Los Alamos ACL Infrastructure, including the POOMA object oriented 
framework, which is already being applied to computationally related adaptive particle 
mesh problems; and 3) Developing a modern problem solving environment to allow 
researchers to rapidly integrate simulation data with field and laboratory data (visually and 
quantitatively).  This task is becoming increasingly important with the exponential increase 
in seismic, as well as space-based deformation data.

Significance of Anticipated Results:  The GEM approach will allow the physics of 
large networks of earthquake faults to be analyzed within a general theoretical framework 
for the first time.  It will stimulate changes in earthquake science in much the same way that 
General Circulation Models have changed the way atmospheric science is carried out.  The 
computational techniques developed by the project will have important applications in many 
other large, computationally hard problems, such as 1) Large particle-in-cell systems in 
ASCI; 2) Statistical physics approaches to random field spin systems; as well as 3) 
Simulating large neural networks with learning and cognition.

Investigator Team and Project Organization:  We have assembled a group of  
internationally recognized experts in the three areas of 1) observational and theoretical earth 
science 2) statistical mechanics and complex systems and 3) computer and computational 
science.  The latter include world experts in the critical algorithms, software and systems 
required. We are represented by key people both inside DOE and in universities in all areas, 
so technology transfer to related projects, as well as educational benefits, will follow 
easily.  Rundle will serve as Principal Investigator.  The Investigators will participate in 
periodic workshops at which 1) results will be discussed; and 2) specific research avenues 
will be formulated on a regular and timely basis. 

C.2	The GEM Grand Challenge

	C.2.1 Earthquake Science Status

	Status:  Many basic facts about earthquakes have been known since ancient times 
(see e.g., Richter, 1958; Scholz, 1990; and the review by Rundle and Klein, 1995), but 
only during the last hundred years have earthquakes been known to represent the release of 
stored elastic strain energy along a fault (Reid, 1908).  Although a great deal of data has 
accumulated about the phenomenology of earthquakes in recent years, these events remain 
one of the most destructive and poorly understood of the forces of nature.  The importance 
of studying earthquakes, as well as the development of techniques to eventually predict or 
forecast them, has been underscored by the fact that an increasing proportion of global 
population lives along active fault zones (Bilham, 1996).  Recent research indicates that 
earthquakes exhibit a wealth of complex phenomena, including space-time clustering of 
events,  scaling,  as represented by the Gutenberg-Richter relation (e.g., Turcotte, 1992), 
and space-time migration of activity along fault systems (e.g., Scholz, 1990).  Analyzing 
and understanding these events is made particularly difficult due to the long time scales, 
usually of the order of tens to hundreds of years between earthquakes near a given location 
on a fault, and by the space-time complexity of the historical record.  

	During the last two decades, a series of national policy decisions and programs  
culminated, five years ago, in the establishment of the Southern California Earthquake 
Center (http://www.usc.edu/dept/earth/quake/).  Together with efforts undertaken several 
decades ago by the United States Geological Survey 
(http://www.usgs.gov/themes/earthqk.html), the accuracy, reliability, and availability of 
observational data for earthquakes, particularly in southern California, have increased 
enormously.  Moreover, our knowledge about earthquake related processes and the 
earthquake source itself is at a level far beyond what we knew two decades ago, prior to the 
National Earthquake Hazard Reduction Program.  Finally, it is likely that the data available 
will increase dramatically when an operational Synthetic Aperture Radar Interferometry 
mission begins routine operation (see, e.g., Massonet et al., 1993).

	Despite this, we still find ourselves, as a scientific community, unable to even 
approximately predict the time, date, and magnitude of earthquakes.  At the moment, the 
best that can be done is embodied in the Phase II report of earthquake probabilities 
published by the Southern California Earthquake Center (SCEC, 1995; please refer to 
SCEC web page).  These probabilities are based on "scenario earthquakes" and crude 
assumptions about whether, for example, contiguous segments of faults  ("characteristic 
earthquakes") do or do not tend to rupture together to produce much larger events.  

	Earthquakes, even those significantly smaller than the largest magnitude events of 
about magnitude 9.5, are capable of producing enormous damage.  For example, the 
magnitude ~ 7 January 17, 1995 Kobe, Japan earthquake was responsible for an estimated 
$200 Billion in damages, accounting for lost economic income as well as direct damage.  
This event was a complete surprise, inasmuch as the immediate region had been relatively 
inactive in historic time (see, e.g., Trans. AGU, v 76 Supplement, 1995).  

	Earthquake science has by its nature been observational in character.  The inability 
to carry out repetitive experiments on fault systems means that it is extremely difficult to 
test hypotheses in a systematic way, via controlled experiments.  In addition, time intervals 
between major earthquakes at a given location are typically much longer than human life 
spans, forcing reliance on unreliable historical records.  This state of affairs is in many 
respects similar to that of the atmospheric science community in the late 1940's and early 
1950's.  At that time, John von Neumann and Jule Charney decided to initiate a program to 
develop what were to become the first General Circulation Models (GCM's) of the 
atmosphere (see, e.g., Weatherwise, June/July 1995 for a retrospective).  These modeling 
and simulation efforts revolutionized research within the atmospheric science community, 
because for the first time a feedback loop was established between observations and theory.  
Hypotheses could be formulated on the basis of computer simulations that could be tested 
by field observations.  Moreover questions arising from new observations could be 
answered with computer "experiments".

	There is thus a growing consensus in the scientific community that the time has 
come to establish a similar feedback loop between observations, theory and computer 
simulations within the field of earthquake science.  The goals of the General Earthquake 
Model (GEM) project are similar to those of the GCM community:  1) to develop 
sophisticated computer simulations based upon the best available physics, with the goal of 
understanding the physical processes that determine the evolution of active fault systems, 
and 2) to develop a Numerical Earthquake Forecast capability that will allow current data to 
be projected forward in time so that future events can at best be predicted, and at worst 
perhaps anticipated to some degree.  

	C.2.2  Computational Science Status 

	This grand challenge not only tackles a problem of great interest to society and the 
earth science section of DoE but will also make use of the talent and novel infrastructure behind 
the computer science, algorithmic, and physics strengths of Los Alamos.  The team must create a 
computational environment to understand the properties and predict the behavior of critical 
seismic activity - earthquakes. The complexity of the seismic grand challenge will entail the 
development of novel algorithms based upon innovative computational architectures with 
new software paradigms.  The development of a predictive capability will require an 
enormous computational resource - one which the Los Alamos Advanced Computing 
Laboratory (ACL) will be able to provide through a clustered Shared Memory Processor 
(SMP) approach to a TeraFlop. The computational resource alone, however, is useless 
without the system substrates necessary to integrate the clustered SMP system into a 
coherent platform and a layered framework of class libraries which separate the essential 
abstractions of the problem domain.  There are several levels of complexities in the seismic 
grand challenge including parallelism, seismic physics, object-orientation, adaptive-mesh-
refinement, and multipole methods. The many levels of complexity present in the current 
and future generations of Grand Challenge simulations will demand an interactive team of 
physicists and computer scientists working together to attack the problem.

	Over the next three years, the team will create an object-oriented Seismic 
FrameWork of layered class libraries for high-performance predictive seismic simulation. 
This effort will build upon the Parallel Object-Oriented Methods and Applications 
(POOMA) FrameWork at the Los Alamos Advanced Computing Laboratory. We have 
already suggested why POOMA is particularly attractive. The multipole approach we 
describe in a later section, which has been widely applied to particle simulation, is not well 
suited to languages like Fortran (or parallel HPF) because it requires complex hierarchical 
data structures, which POOMA's C++ base provides naturally.  Further, the need for rapid prototyping 
of a wide variety of physics modules in GEM calls out for POOMA's object oriented 
capabilities.  We believe POOMA is recognized as the most sophisticated system of its type 
in the HPCC community and Los Alamos has unique capabilities that are needed for the 
support of GEM.

	FrameWork description:  The Seismic FrameWork will provide an integrated 
layered system of objects in which each object higher in the FrameWork is composed of or 
utilizes objects which are lower in the FrameWork. In the case of the POOMA FrameWork 
(upon which the Seismic FrameWork will be built), the higher layers provide objects which 
capture the main abstractions of scientific problem domains (particles, fields, matrices) in a 
representation of mathematical expressions preserved in the application source code. The 
objects which are lower in the FrameWork focus more on capturing the abstractions 
relevant to parallelism and efficient node-level simulation such as communications, domain 
decomposition, threading, and load balancing.
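
	To make the layering concrete, the following sketch (in C++, the language of the POOMA base) shows how a physics module can be written purely against field abstractions while decomposition details live in a lower layer.  The class names and interfaces here (DomainLayout, FaultField, SeismicModel) are hypothetical illustrations for this proposal, not the actual POOMA or Seismic FrameWork API.

```cpp
// Minimal sketch of the layered-framework idea.  All class names are
// hypothetical illustrations of the separation of concerns described above.
#include <cstddef>
#include <cstdio>
#include <vector>

// Lower layer: parallelism abstractions such as domain decomposition.
class DomainLayout {
public:
    DomainLayout(std::size_t nGlobal, int nNodes) : nGlobal_(nGlobal), nNodes_(nNodes) {}
    std::size_t localBegin(int node) const { return node * nGlobal_ / nNodes_; }
    std::size_t localEnd(int node) const { return (node + 1) * nGlobal_ / nNodes_; }
private:
    std::size_t nGlobal_;
    int nNodes_;
};

// Middle layer: a field of values defined over fault segments, distributed
// according to the layout (communication details would be hidden here).
class FaultField {
public:
    FaultField(const DomainLayout& layout, std::size_t n) : layout_(&layout), data_(n, 0.0) {}
    double& operator[](std::size_t i) { return data_[i]; }
    double operator[](std::size_t i) const { return data_[i]; }
    std::size_t size() const { return data_.size(); }
private:
    const DomainLayout* layout_;
    std::vector<double> data_;
};

// Upper layer: the physics module, written only against field abstractions;
// the physicist never touches decomposition, threading, or message passing.
class SeismicModel {
public:
    explicit SeismicModel(FaultField& slipDeficit) : phi_(&slipDeficit) {}
    void load(double plateVelocity, double dt) {
        for (std::size_t i = 0; i < phi_->size(); ++i)
            (*phi_)[i] -= plateVelocity * dt;   // uniform tectonic loading step
    }
private:
    FaultField* phi_;
};

int main() {
    DomainLayout layout(1024, 8);
    FaultField phi(layout, 1024);
    SeismicModel model(phi);
    model.load(1.0e-2, 1.0);                    // one loading step
    std::printf("phi[0] after one loading step: %g\n", phi[0]);
    return 0;
}
```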

	This layered approach provides a natural breakdown of responsibility in application 
development. Computer science specialists can focus on the lower realms of the 
FrameWork optimizing kernels and message-passing techniques without having to know 
the details of the application being constructed. The physicists, on the other hand, can 
construct algorithms, components, and simulations with objects in the higher levels of the 
FrameWork without having to know the specific implementation details of the objects 
which compose the application. This design allows for efficient team code development 
with no single member of the team having to know all the details of the computer and 
physics problem domains. Indeed, with the Seismic FrameWork it will be possible for a 
scientist to build an application which runs on a variety of distributed memory architectures 
without any understanding of the hierarchical memory and communication present in the 
ACL clustered SMP system.

	Leverage of other DoE Programs:  The Seismic FrameWork will leverage 
current missions driving the POOMA FrameWork. Most notably, the parallel dipole 
simulation on AMR geometries will benefit directly from current efforts in the ASCI 
program to build a particle-based transport simulation with AMR meshes upon the POOMA 
FrameWork. This effort in turn is built upon particle based abstractions within the POOMA 
FrameWork originally from the ACL Object-Oriented Particles Simulation (OOPS) class 
library - a FrameWork for gyrokinetic plasma simulation built for the Numerical Tokamak 
Grand Challenge. Indeed, most of the key abstractions have already been developed 
granting both the Seismic Grand Challenge effort and the ASCI transport effort an 
enormous benefit in code reuse and program leveraging.

	Systems Integration:  The software infrastructure goals of the Seismic Grand 
Challenge align directly with the infrastructure goals of the ACL. In particular, the object-
based framework approach to building application, analysis, and visualization components 
is a superb fit for the Seismic FrameWork. To this end, our approach to the Seismic 
Grand Challenge will be object-oriented and modular, enabling many of our unique 
capabilities such as multipole-AMR to be encapsulated within reusable components. This 
will enable rapid prototyping of new predictive capabilities into simulations and the 
leveraging of techniques across application domains. We seek to develop and share 
simulation, visualization, and analysis components for the Seismic FrameWork which will 
both build and be built upon capabilities at the ACL and other grand challenges.

	Leverage of the Center for Nonlinear Studies (CNLS), LANL:  It should be noted 
that Hans Frauenfelder, Director of the Center for Nonlinear Studies at LANL, is an 
unfunded collaborator.  We intend to use personnel at CNLS as a resource in our 
investigations, although the specific modes of interaction and details of topics will depend 
on the nature of the problems that arise during the course of the research.  

	C.2.3 GEM Team: Investigator Roles 

	The investigators in the GEM project have been selected because their backgrounds 
and expertise enable them all to fill necessary functions in the initial development of the 
GEM simulations.  These functions and roles are organized in the following table.

Funded Investigators
==============


Name		Institution 		Role*
=====	==========	=====

Rundle	Colorado		Lead Earth Science -- Develop earthquake models,
						stat. mech approaches, validation of 	
						simulations  (AL, PSE, AN, VA, SCEC)
Fox		Syracuse		Lead Computer Science -- Develop multipole 
						algorithms and integrate projects internally
						and with external projects including HPCC 
						and WWW communities (AL, PSE, AN)
Klein		Boston		Statistical mechanics analogies and methods: 
						Langevin equations for fault systems 
						dynamics, especially meanfield models (AL, 
						AN)
Crutchfield	Santa Fe Inst		Pattern recognition algorithms for earthquake 
						forecasting, phase space reconstruction of 
						attractors, predictability (AN)
Giles		Boston		Object oriented friction model algorithms, Cellular 
						Automata computations (AL, AN, PSE)
York		Northeastern		Cellular Automata, implementation of computational 
						approaches (AL, PSE)
Jordan	MIT			Validating models against "slow earthquake" data 
						(VA, SCEC)
Marone	MIT			Validating friction models against laboratory data 
						(VA)
Kanamori	CalTech		Validating models against broadband earthquake 
						source mechanism data (VA, SCEC)
Krogh		LANL			Validating fault geometries and interactions using
						field data, including stress patterns (VA) 
Minster	UCSD		Implementation/analysis of rate & state friction 
						laws (AN, SCEC)
Messina, Salmon
		Caltech		Parallel multipole algorithms, linkage of model 
						validation with simulation (AL, VA, PSE) 
Stein		USGS		Visualization requirements for simulations, validating 
						models against geodetic data (PSE, SCEC)
Turcotte	Cornell		Nature of driving stresses from mantle processes, 
						scaling symmetries, validating models with 
						data on seismicity patterns and SAR 	
						interferometry data (AN, VA)
Reynders, Williams, White, Warren
		ACL, LANL		Multipole algorithms, elliptic solvers, adaptive 
						multigrid methods, optimal use of SMP 
						systems, load balancing (AL, PSE)

Unfunded Collaborators.  They will provide advice on:
=====================================

Frauenfelder & Colleagues
		CNLS, LANL	Analysis of simulations for patterns, phase space 
						reconstruction, analogies to other systems,
						scaling symmetries (AN)

Allegre	IPG Paris		Statistical models for friction based upon 	
						scale invariance (AN)
Larsen, Hutchings, Foxall, Carrigan
		LLNL			Green's functions, finite element models for 
						quasistatic and dynamic problems, faulting in 
						porous-elastic media, applications to San 
						Francisco Bay area, visualization 	
						requirements (AL, VA)

*Roles:	AL) Algorithms; PSE) Problem Solving Environment; AN) Analysis by 
statistical mechanics; VA) Validation; SCEC) Interaction with 
SCEC/CalTech and other earthquake data bases

	C.2.4  GEM Team: Management Plan

	We base our management plan on experience with 1) current Grand Challenges and 
the well known NSF center CRPC -- Center for Research in Parallel Computation -- where
Syracuse and Los Alamos are members; and 2) current NSF STC centers, most especially 
the Southern California Earthquake Center, for which Rundle chairs the Advisory Council.  
Rundle, the PI, will have full authority and responsibility for making decisions as to 
appropriate directions for the GEM Grand Challenge. In particular he will approve budgets 
and work plans by each contractor and subcontractor. These must be aligned with the 
general and specific team goals. The PI will be advised by an executive committee made up 
of a subset of the PI's representing the key subareas and institutions.  This committee will 
meet approximately every 6 months in person and use the best available collaboration 
technologies for other discussions.  Fox will have particular responsibility for overall 
integration of the computer science activities and Reynders for interaction with DOE and 
LANL. Other subareas include Statistical Physics Models (Klein), Cellular Automata 
(Giles), Data Analysis & Model Validation (Jordan), Parallel Algorithms (Salmon).  The 
expectation is that the executive committee will operate on a consensus basis.  Note that the 
goals of the Grand Challenge are both Scientific (simulation of Earth Science phenomena) 
and Computational (development of an object oriented Problem Solving Environment).  
The needs of both goals will be respected in all planning processes and contributions in 
both areas will be respected and viewed as key parts for the mission of the project. 

	The executive committee will be expanded to a full technical committee comprising 
at least all the funded and unfunded PI's. The technical committee will be responsible for 
developing the GEM plan which will be discussed in detail at least every 12 months at the 
major annual meeting that we intend to hold for scientists inside and outside this project.  
As well as this internal organization, we expect DOE will naturally set up an external 
review  mechanism. However we suggest that a GEM external advisory committee 
consisting of leading  Earth and Computer Scientists should be set up and that it will attend 
GEM briefings and advise the PI as to changes of direction and emphasis. 

	C.2.5  Summary of Project and Proposal

	In the following sections,  the GEM project is summarized and the proposed work 
is described in detail.  We begin with an introduction to the GEM approach (section C.3) that 
describes the rationale for the simulations we propose, then evaluate some of the decisions 
that must be made in executing the computations.  Because all such simulations must be 
continually validated with real world data, we then enumerate (section C.4) some of the 
recent observations that are suggestive of the kind of physics that must be included in the 
model.  We next describe the scientific framework for the GEM approach, focusing in 
particular on the fundamental equations and the Green's functions that will be needed.  We 
finally arrive at one of the most important computational results expected to flow from our 
work, development and application of a set of fast multipole algorithms to computationally 
represent the network of faults in the simulations.  We then discuss the nature of friction on 
fault surfaces, and present a summary of the six most important ideas in this area, followed 
by mention of problems relating to scaling symmetries and renormalization.  In section 
C.6, we discuss issues relating to the use of data to validate our simulations, including both 
existing and anticipated data types.  We then present an overview of the important 
computational science issues (C.7) followed by a review of the current status of earthquake 
simulation research (C.8).  Finally, we present the project plan for the GEM Grand 
Challenge (C.9), and the work plans of the various investigators (C.10).

C.3	Introduction to the GEM Approach

	Rationale for Simulations.  Understanding the physics of earthquakes is 
complicated by the fact that the large events of greatest interest recur at a given location 
along an active fault only on time scales of the order of hundreds of years (e.g., Richter, 
1958; Kanamori, 1983; Pacheco et al., 1992).  To acquire an adequate data base of similar 
large earthquakes therefore requires the use of historical records, which are known to 
possess considerable uncertainty.  Moreover, instrumental coverage of even relatively 
recent events is often inconsistent, and network coverage and detection levels can change 
with time (Haberman, 1982).  Understanding the details of the rupture process is further 
complicated by the spatial heterogeneity of elastic properties, the vagaries of near field 
instrumental coverage, and other factors (see for example Heaton, 1990; Kanamori, 1993).  
We are therefore motivated to use methods that provide insight into details of the rupture 
process which are complementary to the usual observational techniques (e.g., Kanamori, 
1993).

	Given these constraints, numerical simulation is a practical means of obtaining 
experimental information about the behavior of such a nonlinear system over many cycles 
of activity (Binder, 1979; Binder, 1984; Mouritsen, 1984; Ma, 1985; Yeomans, 1992).  In 
fact, other than paleoseismology, it is the only means of obtaining information about the 
dynamics of a given fault system over many earthquake cycles.  And even paleoseismology 
has clear limitations, because information on slip is only available at the surface, and will 
not reflect any of the complexity known to characterize the depth dependence of slip.  

	If adequate theoretical understanding cannot be developed for the simulations, it is 
unlikely that the dynamics of real faults can be similarly understood.  Numerical simulation 
has been used extensively to study earthquakes in the recent past (Rundle, 1988, 1989; 
Rundle & Klein, 1989; Carlson and Langer, 1989; Bak and Tang, 1989; Rundle & 
Turcotte, 1993; Sahimi et al., 1993).   In the case of earthquake faults, complications such 
as 1) the long time scales involved (comparable to the human lifetime) and 2) the (at 
present) unpredictable nature of the events, make earthquakes difficult to study 
systematically in the field.  For this reason, it is advantageous to develop computer 
methods so that the physics of earthquakes can be studied more easily by simulation in the 
computer.  In that environment, predictions that result from alternative hypotheses can be 
readily examined.  Once predictions are made, tests of the hypotheses can be devised that 
demand specific observational programs or laboratory experiments.  In turn, if interesting 
and unexplained new observations are made, simulations can be carried out with a view 
towards explaining them.  

	Simulations will lead to a much more detailed understanding of the physics of 
earthquakes, well beyond the present state of affairs in which information exists only on 
time and space averaged processes.  An improved understanding of the physics of 
earthquake processes would, among other objectives, allow predictions of phenomena such 
as: 
	(1) The probability that a given event is a foreshock of a coming mainshock (e.g., 
Scholz, 1990)
	(2) The validity of the "gap" and "antigap" models for earthquake forecasting (e.g., 
Kagan and Jackson, 1991; Nishenko et al., 1993)
	(3) Spatial correlation lengths and temporal correlation times for faults, which 
would allow the estimation of the effect of activity on one fault upon the activity on the 
same or other faults (e.g., Rundle et al., 1996)
	(4) One or more earthquake forecast algorithms, based upon log-periodic (Sornette 
et al., 1996), bond probability (Coniglio and Klein, 1980), or other possible methods to be 
developed.   
	(5) The physical origin of the "action at a distance", and "time delayed triggering", 
i.e., seismicity that is correlated over much larger distances  and time intervals than 
previously thought (see for example, Hill et al., 1993)

	This large scale project should be viewed as a "testbed" for HPCC technologies.  
Over the past decade, an increasing number of research studies, described below, have 
investigated the physics of failure on individual faults.   The vast majority of this work was 
done on single computers, either workstations or supercomputers.  This work has now 
reached a level of maturity at which the next logical step in the research progression is to 
adopt a "systems approach", to build simulations of systems of faults, and to look at the 
influences of one fault upon another.  In order to make this step, simulations on a scale not 
previously attempted will be necessary, of the order of solving 10^6 to 10^9 simultaneous 
nonlinear equations.  These simulations will demand, for example: 1) significant advances 
in numerical algorithm efficiency; 2) advances in messaging and high speed communication 
hardware and software for networking supercomputers, both with each other and with 
other distributed processors; and 3) the development of ideas about how to deal with 
physical processes on  smaller spatial and temporal scales than is practical to include in the 
simulations.

	Finally, while the methods to be developed in the proposed work will be generally 
applicable, the initial modeling focus will be on one of the most well-studied earthquake 
fault systems in the world, southern California.  For that reason, close cooperation and 
coordination is envisaged with the staff and management of the Southern California 
Earthquake Center.  Coordination will be greatly facilitated since Rundle (PI) is Chair of 
the Advisory Council of SCEC, Jordan (Co-I) is a member of the Advisory Council, 
Minster (Co-I) is Chair of the Board of Directors, and Stein (Co-I) is an active member of 
SCEC.   Moreover, Rundle's Ph.D. dissertation advisor was Professor David Jackson, 
who is now the Science Director of SCEC.

	Construction of Models.  In developing the numerical simulation techniques 
proposed here, there are many decisions to be made.  Foremost among these is the physics 
to be included in the simulations, and the associated approximations to be used.  As a 
general principle, we plan to examine the simplest physical models that can be shown to 
capture a given observable effect.  We are motivated to take this approach because even the 
existence of some observational data is often still a subject for research, an example being 
Heaton-type pulses rather than Kostrov crack-like slip processes  (e.g., Heaton, 1990; 
Kostrov, 1964).  In the Heaton model, healing takes place immediately after a site on the 
fault slips, rather than at a delayed time after slip everywhere is complete, as happens in the 
Kostrov model.

	In complex systems such as earthquake faults, it is often found that a given physical 
observable may have its origin in a variety of physical processes, some quite unexpected.  
An example is the question of whether the Gutenberg-Richter magnitude frequency relation 
arises from self-organization processes on a single fault with no inherent scale (Burridge 
and Knopoff, 1967; Rundle and Jackson, 1977), or from a scale invariant distribution of 
faults (Rice, 1993).  For single faults, the two types of models are very different.  The first 
(BK model) includes inertia, whereas the second, a massless Cellular Automaton (CA) 
model, does not.  For the massive BK models, the dynamics must be obtained by solving a 
set of coupled nonlinear differential equations.  It was shown by Nakanishi (1990) that 
massless CA's can be constructed that give the same quantitative results as the massive 
slider block models examined by Carlson and Langer (1989).  These results include not 
only the rupture patterns, but also the scaling statistics.  The decision about whether or not 
to include seismic radiation is another example.  Kanamori and Anderson (1975) estimated 
that the seismic efficiency η, which measures the fraction of energy in the earthquake lost 
to seismic radiation, is less than 5%-10%.  The absence of inertia does not preclude a time 
evolution process in the CA, with a separation of loader plate and source time scales, just 
as in models with inertia and waves.  The physical interpretation of the CA time scales is 
discussed by Gabrielov et al. (1994).  

	Other decisions include the following.  1) Interactions between fault segments 
might be short range, 1/r^3 "continuum interactions", or long range of some other variety.  
2) Friction might be a) simple stick slip (Mohr-Coulomb), b) have a simple velocity 
dependence as in the original Burridge-Knopoff model, c) be based on idealized laboratory 
friction experiments on clean, dry, dust-free sliding surfaces such as in the rate and state 
models, or d) be based on new physical ideas assuming the evolution of systems toward 
states of lower energy and greater stability (Rundle et al., 1996).  3) How to include 
"asperities" (Lay et al., 1982) and other regions of heterogeneous friction in the various 
friction models is an open question.  4)  Similarly, how to include unmodeled faults, 
particularly at small length scales is another open question, but presumably these might be 
treated as some kind of "noise" in the model.  5) The effects of pore water might be 
included or ignored, along with the effects associated with variable normal stress on the 
fault.  Which observable data are to be examined will determine the physics to be included 
in a given simulation.  

C.4  Recent Observations of Earthquakes with Significance for GEM

	We refer the interested reader to a recent review of the literature compiled by Rundle and 
Klein (1995).  A brief summary of some relevant material is provided here.  While much research 
in the area of earthquake physics is fundamental in character, the results have many potential 
applications in the areas of earthquake risk and hazard analysis, and seismic zonation (see e.g., 
Cornell et al., 1993; Jackson et al., 1995).  Following an important earthquake, the June 28, 1992 Landers 
event (Sieh et al., 1993), observed effects (Hill et al., 1993) indicated that microearthquake activity 
in geothermal areas as remote as 1000 km epicentral distance (Cascades, Yellowstone) was 
triggered by the sudden onset of stress increments associated with the Landers event.  These data 
may imply very long correlation lengths for fault systems, of the order of a thousand km or more.

	A nonevent with important implications was the missing Parkfield, California, M ~ 6 
earthquake, which was originally forecast to occur by January, 1993 (Bakun and Lindh, 1984).  
Retrospective studies have concluded that the pattern of occurrence of the historic Parkfield 
events is consistent with considerable variation in event interoccurrence time (Savage, 1991; 
1993).  This theme was echoed in work on the validity of the seismic gap hypothesis by Kagan 
and Jackson (1992), who concluded that the dominating effect is the tendency of earthquakes to 
cluster in time.  Space-time clustering of events is an important signature of a nonlinear complex 
system (Stauffer and Aharony, 1994; Ma, 1985).  The tendency for earthquakes to cluster in time 
is supported also by paleoseismic observations of Grant and Sieh (1994).

	Of major significance was the widespread realization that sliding on faults at roughly 
constant stress drop does not necessarily imply a Kostrov-type slip distribution as mentioned in the 
preceding section.  The Kostrov distribution arises when no healing takes place until progression 
of the rupture front stops.  By contrast, the "Heaton pulse" model (Heaton, 1990) allows healing 
just behind the rupture front, and in fact, there is no a priori reason to exclude other more general 
modes of rupture as well (Rundle and Klein, 1994).  Evidence for rupture pulses and non uniform 
healing behind rupture fronts is now accumulating (Wald et al., 1991; Beroza, 1991; Mendoza et 
al., 1994).

	An important set of observations relates to the existence of "slow" earthquakes precursory 
to mainshock events that generate seismic radiation, the first of which was observed by Kanamori 
and Cipar (1974).  A number of authors (Jordan, 1991; Ihmle et al., 1993;  Kanamori and 
Hauksson, 1993; Kanamori and Kikuchi, 1993; Kikuchi et al., 1993; Wallace et al., 1991) have 
found evidence for slow events preceding mainshocks.  The degree to which fault plane structure 
(asperities and barriers) control slip has been investigated by  Beck and Christensen (1991) and  
Ruff  (1992), who provide maps of the spatial distribution of enhanced slip on the fault plane.  The 
physical interpretation of asperities in terms of strong regions on the fault depends critically on 
assumptions about the constancy of rupture velocity and directivity (Rundle and Klein, 1994).

	Recent observational work continues to confirm the importance of fault interactions 
(Bilham and Bodin, 1992; Hill et al., 1993; Sanders, 1993), which of course also exist between 
patches on the same fault.  These interactions cause slip on the fault to self-organize (Rundle, 
1988), so that slip occurs in earthquakes obeying the Gutenberg-Richter relation, and not as 
random events (the existence of power law behavior is incompatible with truly random dynamics).  
Other work points out that the existence of scaling implies that small earthquakes are as important 
as large events in redistributing stress (Hanks, 1992), a fact that is clearly seen in simulations.  

	Several new models and ideas have had a significant impact on thinking about the physical 
origins of the observations discussed above.  A new Traveling Density Wave model (Rundle et al., 
1996; Gross et al., 1996), provides a unifying new framework for friction, earthquakes, critical 
phenomena and nucleation, and identifies characteristic earthquakes on fault systems as spinodal 
nucleation events.  Other work attempts to put earthquakes into the same category as the self-
organized criticality model for sandpiles that has no tuning parameter.  Still other researchers have 
proposed that earthquakes are chaotic sequences of events (Narkounskaia et al.,  1992; Huang and 
Turcotte, 1992).

C.5	Scientific Framework for GEM

	C.5.1 Green's Functions and Multipolar Expansions
	
	Fundamental Equations:  The basic problem to be solved in GEM can be simply stated 
(e.g., Rundle 1988a).  Given a network of faults embedded in an earth with a given rheology, 
subject to loading by distant stresses, the state of slip s(x,t) on a fault at (x,t) is determined from 
the equilibrium of stresses according to Newton's Laws.  This can be written schematically as:

	σ_int[x,t; s(x',t'); p] = σ_f(x,t; s(x,t))							(1)

where 1) σ_int is the interaction stress provided by transmission of stress through the earth's crust 
due to background tractions p applied to the earth's crust, as well as stresses arising from slip on 
other faults at other sites x' at times t'; and 2) σ_f is the cohesive fault frictional (f) stress at the site 
(x,t) due to processes arising from the state of slip s(x,t).  The transmission of stress through the 
earth's crust involves both dynamic effects arising from the transient propagation of seismic 
waves, as well as static effects that remain after wave motion has ceased.  For the GEM project, it 
will suffice to assume that, exterior to the fault zones, the earth's crust has a 
rheology (deformation constitutive relation) that is linear elastic, linear viscoelastic, or linear 
poroelastic.  The first assumes only small strain in regions external to the faults; the second allows 
rock to deform by bulk flow in response to applied stresses; and the third allows crustal pore water 
to flow in response to applied stress.  The linear nature of all these constitutive laws allows stress 
from various sources to be superposed, and the assumption of linearity is expected to be 
appropriate in all crustal volumes except very near a fault.  The nature of the frictional stress is the 
subject of current debate, but several candidate models have emerged that we will discuss below.  

	There are a few other model techniques that are being used to simulate faults, the primary 
example being the molecular dynamics simulations of Mora and Place (1994).  At the moment, 
these seem to be primarily useful for simulating dynamics of individual faults, so these will not be 
considered further.

	Green's Functions: Focusing on GEM models that have a linear interaction rheology 
between the faults implies that the interaction stress can be expressed as a spatial and temporal 
convolution of a stress Green's function T_ijkl(x-x', t-t') with the slip deficit variable φ(x,t) = 
s(x,t) - Vt, where V is the far field velocity of the crust driving slip on the fault.  Once the slip 
deficit is known, the displacement Green's function G_ijk(x-x', t-t') can be used to compute, again 
by convolution, the deformation anywhere in the surrounding medium exterior to the fault surfaces 
(see Rundle 1988a for details).  
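
	As a concrete illustration of the quasistatic case, the sketch below evaluates the interaction stress on N fault segments by direct summation of a Green's function against the slip deficit φ = s - Vt.  The 1/r^3-type kernel is only a placeholder for the true elastic T_ijkl; the direct double loop also makes explicit the O(N^2) cost that motivates the multipole methods discussed later.

```cpp
// Direct O(N^2) evaluation of the quasistatic interaction stress as a discrete
// convolution of a stress Green's function with the slip deficit:
//   sigma_int(i) = sum_j T(x_i - x_j) * phi_j,   phi_j = s_j - V*t.
// The 1/r^3-type kernel is a placeholder for the true elastic T_ijkl.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t N = 1000;         // number of fault segments
    const double dx = 1.0;              // segment spacing
    const double V = 1.0e-2, t = 50.0;  // plate rate and elapsed time

    std::vector<double> slip(N, 0.0), phi(N), stress(N, 0.0);
    for (std::size_t i = 0; i < N; ++i)
        phi[i] = slip[i] - V * t;       // slip deficit

    for (std::size_t i = 0; i < N; ++i)          // target segment
        for (std::size_t j = 0; j < N; ++j) {    // source segment
            if (i == j) continue;                // self term handled separately
            double r = std::fabs(double(i) - double(j)) * dx;
            stress[i] += phi[j] / (r * r * r);   // placeholder long-range kernel
        }

    std::printf("interaction stress at the central segment: %g\n", stress[N / 2]);
    return 0;
}
```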

	The three kinds of rheologic models that describe to a good approximation the physical 
nature of the earth's crust between the faults and different sites on the same fault are all linear in 
nature (e.g., Rundle and Turcotte, 1993).   The three rheologies describe 1) a purely elastic 
material on both long and short time scales; 2) a material whose instantaneous response is elastic 
but whose long term deformation involves bulk flow (viscoelastic); and 3) a material that is again 
elastic over short times, but whose long term response involves stress changes due to the flow of 
pore fluids through the rock matrix (poroelastic).  We plan to use all of these models eventually for 
calculation of the interaction stresses between faults, σ_int.

	In the first implementation of GEM models, we will further specialize to the case of 
quasistatic interactions.  For this case, the time dependence of the Green's function typically enters 
only implicitly through time dependence of the elastic moduli (see, e.g., Lee, 1955).  Because of 
the linearity property, the fundamental problem is reduced to that of calculating the stress and 
deformation Green's function for the rheology of interest.  For materials that are homogeneous in 
composition within horizontal layers, numerical methods to compute these Green's functions for 
layered media have been developed (e.g., Okada, 1985, 1992; Rundle, 1982a,b, 1988; Rice and 
Cleary, 1976; Cleary, 1977; Burridge and Varga, 1979; Maruyama, 1994).  The problem of 
heterogeneous media, especially media with a distribution of cracks too small and numerous to 
model individually is often solved by using effective medium assumptions or self-consistency 
assumptions (Hill, 1965; Berryman and Milton, 1985; Ivins, 1995a,b.).  Suffice to say that a 
considerable amount of effort has gone into constructing quasistatic Green's functions for these 
types of media, and while the computational problems present certain challenges, the methods are 
straightforward because the problems are linear.  These will not be discussed further here, and the 
interested reviewer is referred to the literature.

	 In later GEM models, the inclusion of elastic waves, at least approximately, will be 
important.  Stresses are initially transmitted in the earth by means of elastic waves (e.g., Aki and 
Richards, 1980; Zheng et al., 1995; Kanamori, 1993; Beroza, 1995; Jordan, 1991; Imhle et al., 
1993).  During an earthquake, these will affect the details of the temporal development of sliding 
during the event.  This is an important effect, and should eventually be included in maximally 
realistic simulations of earthquakes.  Since the time scales over which earthquakes occur are short 
relative to relaxation times for viscous and porous effects, the effects of waves need only be 
calculated in linear elastic, attenuating, dispersive media.  However, as inclusion of these effects 
will be severely limited by available computational capability, it is anticipated that it may be 
practical to include only the longest wavelength, or largest spatial scale, effects.  This 
computational plan is consistent with our philosophical approach to the problem of spatial scales 
that is discussed below.

	Multipolar Representation of Fault Systems: A primary key to a successful 
implementation of GEM models of systems of faults will be to utilize a computationally efficient set 
of algorithms for updating the interactions between faults and fault segments.  These would 
normally be of order N^2 for times between earthquakes, and possibly worse for segments of 
faults experiencing an earthquake.  As discussed above, the nature of the quasistatic interactions is 
such that the Green's functions T_ijkl and G_ijk for linear elasticity have a simple time dependence.  
Moreover, the Green's functions for linear viscoelasticity and for linear poroelasticity can be 
obtained easily from the elastic Green's functions using the Principle of Correspondence (e.g., 
Lee, 1955; Rundle 1982a,b).  These simplifications strongly suggest that an approach based on 
multipole expansions (Goil, 1994; Goil and Ranka, 1995) will be the most computationally 
efficient algorithm.  

	In fact, the stress and displacement Green's functions T_ijkl and G_ijk actually represent the 
tensor stress and vector displacement at x due to a point double couple located at x' (see for 
example, Steketee, 1958).  The orientation of the equivalent fault surface normal vector, and of the 
vector displacement on that fault surface, are described by the indices i and j.  Integration of T_ijkl 
and G_ijk over the fault surface then corresponds to a distribution of double couples over the fault 
surface.  For that reason, representation of the stress over segments of fault in terms of a multipole 
expansion is the natural basis to use for the GEM computational problem.  Details of the 
computational implementation of this approach will be described in the following sections.
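
	The following sketch illustrates, in one dimension and to two terms (monopole and dipole), the aggregation idea behind multipole methods: distant groups of fault segments are replaced by low-order moments of their slip-deficit distribution, so the interaction sum no longer touches every distant segment individually.  The kernel, cell size, and acceptance criterion are illustrative choices only, not the production fast multipole algorithm proposed here.

```cpp
// One-dimensional, two-term (monopole + dipole) cartoon of the far-field
// aggregation behind multipole methods.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Cell {              // a contiguous group of fault segments
    double center;         // cell center coordinate
    double monopole;       // sum of phi_j within the cell
    double dipole;         // sum of phi_j * (x_j - center)
};

int main() {
    const std::size_t N = 4096, cellSize = 64;
    const double dx = 1.0;
    std::vector<double> x(N), phi(N);
    for (std::size_t i = 0; i < N; ++i) {
        x[i] = i * dx;
        phi[i] = std::sin(0.01 * i);   // a smooth slip-deficit field
    }

    // "Upward pass": accumulate the moments of each cell.
    std::vector<Cell> cells;
    for (std::size_t c = 0; c < N; c += cellSize) {
        Cell cell{x[c] + 0.5 * (cellSize - 1) * dx, 0.0, 0.0};
        for (std::size_t j = c; j < c + cellSize; ++j) {
            cell.monopole += phi[j];
            cell.dipole += phi[j] * (x[j] - cell.center);
        }
        cells.push_back(cell);
    }

    // Evaluate the stress at one target point.  Far cells are approximated by
    // K(R - d) ~ K(R) - K'(R)*d with a placeholder kernel K(r) = 1/r^3; near
    // cells would be handled by the direct segment-by-segment sum.
    const double target = x[N / 2];
    double stress = 0.0;
    for (const Cell& cell : cells) {
        double R = target - cell.center;
        if (std::fabs(R) < 3.0 * cellSize * dx)
            continue;                              // near field: direct sum (omitted)
        double K = 1.0 / (R * R * R);
        double dK = -3.0 / (R * R * R * R);
        stress += K * cell.monopole - dK * cell.dipole;
    }
    std::printf("far-field stress estimate at the target: %g\n", stress);
    return 0;
}
```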

	C.5.2 Friction Models

	Friction Models:  The nature of the nonlinear cohesive stress σ_f on faults is the subject 
of considerable debate, as will be evident in the discussion below. At the present time, there are six 
basic friction laws that have been proposed for faults; these are discussed briefly below.  Our goal 
in this proposal is to develop computational methods to allow systematic comparison of the 
predictions of these friction models in the context of models with multiple, interacting fault systems 
similar to those found in southern California.   

	Phenomenological Friction Models:

	There are two basic classes of friction models that arise from laboratory experiments.  
These are 1) Slip weakening models, and 2) Rate and state dependent friction models. The former 
has a long history that originates from experiments carried out over a number of decades; the latter 
is a more recent development:

	1) Slip Weakening model.  This friction law (Rabinowicz, 1965; Bowden and Tabor, 
1950; Stuart, 1988; Li, 1987; Rice, 1993; Stuart and Tullis, 1995) assumes that the frictional stress 
at a site on the fault, σ_f = σ_f[s(x,t)], is a functional of the state of slip.  In general, σ_f[s(x,t)] is 
peaked at regular intervals.  The state of slip at a site on a fault in an elastic medium, for example, is 
then obtained by solving the quasistatic equation σ_int[φ(x,t)] = σ_f[s(x,t)] at every instant of time.  
Implementations of this model usually use a series of sweeps through the lattice similar to the CA 
models, successively solving the force balance equation on each sweep.  The advantage of this 
model is that it has a strong basis in an enormous number of laboratory experiments under many 
different kinds of conditions, on many different sliding surfaces (Rabinowicz, 1965; Bowden and 
Tabor, 1950; Beeler et al., 1996).  The disadvantage is that the quasistatic equation does not really 
capture details of the time dependence of the sliding process.
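
	A minimal sketch of the sweep-style quasistatic solve for a slip-weakening law is given below: the lattice is repeatedly swept, and slip is advanced wherever the elastic driving stress exceeds the slip-dependent strength, until force balance holds everywhere.  The linear weakening form, the nearest-neighbor coupling, and all parameter values are illustrative assumptions.

```cpp
// Sweep-style quasistatic solve for a slip-weakening friction law.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// Frictional strength as a functional of slip: weakens linearly from a peak
// value to a residual value (one lobe of the periodic law described above).
double frictionStrength(double slip) {
    const double peak = 10.0, residual = 4.0, weakeningSlip = 1.0;
    return std::max(residual, peak - (peak - residual) * slip / weakeningSlip);
}

int main() {
    const std::size_t N = 256;
    const double kL = 2.0;              // loading (plate) spring stiffness
    const double kC = 5.0;              // elastic coupling between segments
    const double plateDisp = 6.0;       // accumulated plate displacement, V*t
    std::vector<double> slip(N, 0.0);

    bool changed = true;
    while (changed) {                   // sweep until quasistatic balance
        changed = false;
        for (std::size_t i = 0; i < N; ++i) {
            double left = (i > 0) ? slip[i - 1] : 0.0;
            double right = (i + 1 < N) ? slip[i + 1] : 0.0;
            double driving = kL * (plateDisp - slip[i])
                           + kC * (left + right - 2.0 * slip[i]);
            if (driving > frictionStrength(slip[i]) + 1e-9) {
                slip[i] += 1e-3;        // relax toward balance in small steps
                changed = true;
            }
        }
    }
    std::printf("slip at the central segment after relaxation: %g\n", slip[N / 2]);
    return 0;
}
```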
	
	2)  Rate and State Dependent Friction Laws.  These friction laws are based on certain kinds 
of laboratory sliding experiments in which two frictional surfaces are slid over each other at 
varying velocities, without ever experiencing arrest (Dieterich, 1972; 1978; 1981; Ruina, 1983; 
Rice and Ruina, 1983).  The rate dependence of these friction laws refers to a dependence on 
sliding velocity somewhat like that in the velocity weakening laws described below, although in 
this case the dependence is on the logarithm of the velocity.  The state dependence refers to a 
dependence on one or more state variables θ_i(t), each of which follows its own relaxation 
equation.  The effect of both the velocity and the state variable dependence is to introduce a 
"memory effect", so that changes in sliding velocity are reflected in friction variations only after a 
certain time delay.  This friction law has been implemented in simulations of single faults (Dieterich 
and Linker, 1992; Rice, 1993).  The advantage of using this friction model is that its formulation 
is based on laboratory experiments, however idealized.  Among the disadvantages are the lack of 
understanding of the physical interpretation of the state variables, particularly in the multi-state 
variable formulation.  In addition, the laboratory experiments are not particularly representative of 
the physics of sliding on faults, and certainly not of inertially dominated sliding.  These problems 
lead to predictions from the simulations that are never observed in nature.  Among these 
predictions are 1) infinite friction at arrest of the sliding surfaces; 2) high shear stress and heat 
generation on the fault plane; 3) only periodic large events ever occur; and 4) significant nonzero 
precursory and interseismic quasistatic slip between events.  More recent work by Beeler et al. 
(1994) and Beeler et al. (1996) demonstrates that sliding surfaces do experience arrest, and this 
substantially modifies the friction law and the associated results.
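
	For concreteness, the sketch below integrates the standard Dieterich-Ruina form of the rate and state law (aging version of the state evolution) for an imposed tenfold velocity step, showing the delayed, memory-like response of the friction coefficient described above.  Parameter values are generic laboratory-scale numbers, not fits to any particular data set.

```cpp
// Dieterich-Ruina rate- and state-dependent friction for an imposed velocity
// step, using the "aging" form of the state evolution:
//   mu(V, theta) = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)
//   dtheta/dt    = 1 - V*theta/Dc
#include <cmath>
#include <cstdio>

int main() {
    const double mu0 = 0.60, a = 0.010, b = 0.015;
    const double V0 = 1.0e-6;           // reference sliding velocity (m/s)
    const double Dc = 1.0e-5;           // critical slip distance (m)
    const double dt = 1.0e-3;           // time step (s)
    double theta = Dc / V0;             // start at steady state for V = V0

    for (int step = 0; step <= 40000; ++step) {
        double t = step * dt;
        double V = (t < 20.0) ? V0 : 10.0 * V0;       // tenfold step at t = 20 s
        theta += dt * (1.0 - V * theta / Dc);          // forward-Euler state update
        double mu = mu0 + a * std::log(V / V0) + b * std::log(V0 * theta / Dc);
        if (step % 5000 == 0)
            std::printf("t = %6.2f s  V = %.1e m/s  mu = %.4f\n", t, V, mu);
    }
    return 0;
}
```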

	Models Based on Computational Simplifications:

	These models are essentially simplifications of the two kinds of models described above, 
the major benefit being that they are much simpler to implement computationally.  The two major 
models in this category are 1) Cellular automaton model, and 2) Velocity weakening model.  The 
former is the easiest to implement inasmuch as only simple update rules are needed; the latter was 
first proposed by Burridge and Knopoff (1967):

	1) Cellular Automaton (CA) models.  These are the simplest ("freshman physics") models 
to implement on the computer, and for that reason have been widely used (e.g., Rundle and 
Jackson, 1977; Nakanishi, 1991; Brown et al., 1991; Rundle and Brown, 1991; Rundle and 
Klein, 1992).  A static failure threshold, or equivalently a coefficient of static friction μ_S, is 
prescribed, along with a residual strength, or equivalently a dynamic coefficient of friction μ_D.  
When the stress on a site increases, either gradually or suddenly, to equal or exceed the static 
value, a sudden jump in slip (change of state) occurs that takes the stress at the site down to the 
residual value.  The exact details of the jump process, and whether it occurs synchronously or 
asynchronously with the jumps at other sites, is an arbitrary decision.  These models assume a 
separation of time scales, the time scale for long term motion being far larger than the time scale on 
which adjustment of the slip state occurs.  To implement the CA dynamics, the time is calculated at 
which the least stable site becomes unstable due to far field motion.  Time is advanced to this point, 
after which it is "frozen" until slip is complete.  The slip state of the unstable site jumps to a new 
value according to the prescribed rule, after which a sweep is conducted through the lattice to find 
other sites that have become unstable (stress larger than failure threshold) as a result of direct 
elastic interaction with the first site.  Any unstable sites are also then allowed to jump 
synchronously, after which further sweeps are carried out, followed by more jumps, etc.  The 
result is an avalanche of failed sites that is analogous to an earthquake.  While the advantage of 
this friction model is ease of computational implementation, the disadvantage is that it captures the 
physics of stick-slip sliding only crudely, and allows considerable latitude in specification of the 
jump mechanism.  
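	As an illustration, the synchronous sweep described above can be written as a short update loop.  The following C++ sketch uses hypothetical names; the interaction matrix T, the static threshold sigmaS and the residual stress sigmaD stand in for whatever Green's function and friction parameters a particular model prescribes.

	#include <vector>
	#include <cstddef>

	// Minimal cellular-automaton sweep: any site whose stress reaches the static
	// threshold slips, its stress drops to the residual value, and the released
	// stress is redistributed to the other sites through the interaction matrix T.
	void caSweep(std::vector<double>& stress,
	             const std::vector<std::vector<double>>& T,
	             double sigmaS, double sigmaD)
	{
	    bool anyFailed = true;
	    while (anyFailed) {                      // sweep until no site is unstable
	        anyFailed = false;
	        for (std::size_t i = 0; i < stress.size(); ++i) {
	            if (stress[i] >= sigmaS) {       // site i exceeds the failure threshold
	                double drop = stress[i] - sigmaD;
	                stress[i] = sigmaD;          // jump to the residual stress
	                for (std::size_t j = 0; j < stress.size(); ++j)
	                    if (j != i) stress[j] += T[i][j] * drop;   // elastic redistribution
	                anyFailed = true;            // the jump may destabilize other sites
	            }
	        }
	    }
	}

The avalanche of failures produced by repeated sweeps is the model earthquake; the arbitrary latitude mentioned above enters through the choice of T and of the jump rule itself.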

	2) Velocity Weakening model.  This model (Burridge and Knopoff, 1967; Carlson and 
Langer, 1989) is based on the observation that frictional strength diminishes as sliding proceeds.  A 
constant static strength σf = σF is used as above, after which the assumption is made that during 
sliding, frictional resistance must be inversely proportional to sliding velocity.  In the 
implementation of this model, a differential equation is written expressing the balance of forces to 
be solved.  The advantages of this model are that the details of the sliding process can be calculated 
to arbitrary numerical precision, and that the inertia of the fault, although not waves, is included.  The 
disadvantages are the ad hoc nature of the friction law, and that practical numerical solutions of 
these equations are limited to one-dimensional models with nearest neighbor interactions having 
only a few hundred sites.
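	Schematically, the equations solved in this class of model are of the slider-block form sketched below (a sketch only; kc and kp are coupling and loading spring constants, V is the driving plate velocity, and Phi is a friction force that weakens with sliding velocity, one common choice being Phi proportional to 1/(1 + v/v_0) during sliding):

	m \ddot{x}_i = k_c (x_{i+1} - 2 x_i + x_{i-1}) + k_p (V t - x_i) - \Phi(\dot{x}_i)

The one-dimensional, nearest-neighbor character of the coupling term is what limits the practical size of these calculations.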

	Models Based on Statistical Mechanics

	These models for friction are based on the fundamental principle that the dynamics of 
faults and fault systems can be described by the construction of probability distributions involving 
the physical variables that characterize stress accumulation and failure.  The basic goal is to 
construct a series of nonlinear stochastic equations whose solutions can be approached by 
numerical means, and which have embedded within them the kinds of statistical distributions that 
are often observed in nature.  An interesting result is that broad classes of these models have been 
found that, at least at the meanfield level, predict essentially the same kinds of Ito-Langevin 
equations for the dynamics.  Recent work (Ferguson et al., manuscript in preparation, 1996) 
indicates that most mean field models can be reduced in a generic way to Ginzburg-Landau type 
functionals, giving rise to metastability and nucleation processes similar to well-understood 
equilibrium systems.  These nonequilibrium models seem to imply that time can be regarded as a 
kind of scaling field, giving a strong theoretical underpinning to the observation of broad classes of 
temporal scaling laws.  The two models in this category are 1) Traveling density wave models, and 
2) Hierarchical statistical models:
		
	1) Traveling Density Wave Model.  This new model (e.g., Rundle et al., 1996; Gross et 
al., 1996) is based on the same experiments that led to the formulation of the slip weakening 
model.  The interaction stress σint is assumed to derive from elasticity, σint = σe.  The equality 
between the elastic stress based on slip deficit and the frictional stress based on slip, σe[f] = σf[s], can 
be trivially rewritten as σe[f] = σf[f + Vt], indicating that the solution lies at the intersection of the 
curve σe[f] and a leftward moving wave σf[f + Vt].  This equation is then regarded as the Euler-
Lagrange equation derived from a Lyapunov functional U[f].  The principle of evolution towards 
maximum stability then implies a kinetic equation in which the rate of change of f is proportional to 
the functional derivative of U[f].  Temporal changes in the friction law itself, related to wearing of the 
frictional contact surfaces, are then modeled by random changes in the friction parameters, such as 
the energy of cohesion.  The advantages of this model are that it is based on a large body of data in 
the tribology literature, and that it represents the kind of statistical field theory about which a great 
deal is known, specifically the scaling properties and nucleation rates.  The primary disadvantage at 
the moment is that the equations are of the nonlinear integro-differential kind that are known to 
display chaos and intermittency.  Thus solution schemes for these equations will be difficult to 
implement computationally.  It should be pointed out that recent work on CA models described 
above (Rundle et al., 1994; Klein et al., 1995) has shown that the internal energy of CA models 
with long range interactions ("mean field" models) is characterized by Boltzmann fluctuations.  
Coarse graining of mean field CA models in space and time yields a Langevin equation for the 
evolution of slip deficit f(x,t) that is essentially identical to that obtained for the TDW model.  
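	In symbols, the kinetic equation implied by the principle of evolution toward maximum stability can be sketched (schematically; lambda is a kinetic coefficient and eta a noise term representing the random changes in the friction parameters) as

	\frac{\partial f(x,t)}{\partial t} = -\lambda \, \frac{\delta U[f]}{\delta f(x,t)} + \eta(x,t)

which is the Ito-Langevin form referred to earlier.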

	2) Hierarchical Statistical Models.  Classes of hierarchical, statistical models have been 
developed recently.  Examples of these are the models by Allegre et al. (1982); Smalley et al. 
(1985); Blanter et al. (1996); Allegre and Le Mouel (1994); Allegre et al. (1996); Heimpel (1996); 
Newman et al. (1996); and Gross (1996).  These are probabilistic models in which hierarchies of 
blocks or asperity sites are assigned probabilities for failure.  As the level of external stress rises, 
probabilities of failure increase, and as a site fails, it influences the probability of failure on nearby 
sites.  The advantages of these kinds of models are that they are frequently easy to program on a 
computer, have a simple, intuitive appeal, and often can be shown to lead to a scaling behavior, 
both for the events themselves, and for aftershocks and foreshocks (Omori's law).  The 
disadvantages lie in the need to arbitrarily choose the failure probabilities, and the corresponding 
large number of unknown parameters.

	C.5.3 Symmetries and Scaling Properties

	Hierarchy of Spatial and Temporal Scales:  The presence of hierarchies of spatial 
and temporal scales is a recurring theme in simulations of this type.  It is known, for example, that 
fault and crack systems within the earth are distributed in a scale invariant (fractal) manner over a 
wide range of scales.  Moreover, the time intervals between characteristic earthquakes on this 
fractal system are known to form a scale invariant set (Allegre et al., 1982; 1994; 1996; Smalley et al., 
1985; 1987).  Changes in scaling behavior have been observed at length scales corresponding to the 
thickness of the earth's lithosphere (e.g., Rundle and Klein, 1995), but the basic physics is 
nevertheless similar over many length scales.  It also is known that nucleation and critical 
phenomena, the physics that are now suspected to govern many earthquake-related phenomena, are 
associated with divergent length and time scales and long range correlations and coherence 
intervals (see, e.g., Rundle and Klein, 1995 for a literature review and discussion).  For that 
reason, our philosophical approach to simulations will begin by focusing on the largest scale 
effects first, working down toward shorter scales as algorithms and numerical techniques improve.  
Moreover, our practical interest is limited primarily to the largest faults in a region, and to the 
largest earthquakes that may occur.  Therefore, focusing on quasistatic interactions and long 
wavelength and long period elastic wave interactions is the most logical approach.  We plan to 
include smaller faults and events as a kind of background "noise" in the simulations, as discussed 
in the proposed work.

	Renormalization Group Analysis:  This topic is closely connected to the previous 
discussion of spatial and temporal scales.  Many of the models that have been proposed (see, e.g., 
the review by Rundle and Klein, 1995; Rundle et al., 1996; Klein et al., 1996; Ma, 1976; 
Newman and Gabrielov, 1991; Newman et al., 1994; Newman et al., 1996; Stauffer and 
Aharony, 1994; Turcotte, 1994) are at some level amenable to analysis by Renormalization 
Group techniques.  At the very least, an RG analysis can be used to identify and analyze fixed points 
and scaling exponents, as well as to identify any applicable universality classes.  Thus, these 
techniques can clarify the physics of the processes, and in particular identify the renormalization 
flows around the fixed point(s), which will help to provide physical insight into the influence of 
the scaling fields, and the structure of the potential energy surface.  Friction models for which this 
could be particularly helpful include the hierarchical statistical models, the CA models, and the 
traveling density wave model, where there are expectations of the presence of fixed (critical) 
points.

C.6	Overview of Current Earthquake Simulation Research

	For the most part, research during the past decade has focused on models of single faults 
embedded in an ideally elastic medium.  Physics of failure processes, particularly the effects of 
heterogeneous friction on the fault surface, have been of interest.  In this case, the interactions 
given by the elastic stress Green's function have a relatively simple spatial dependence.  The more 
interesting case of a three dimensional array of faults is associated with a highly directionally 
dependent interaction.  The physics of these systems will be extremely rich, inasmuch as it is quite 
possible for frustrated interactions and screening effects to arise from the interactions between the 
various faults.  Also, relatively little work has been carried out using inelastic stress Green's 
functions.  The current state of the art for three dimensional fault systems with inelastic interactions 
is still the work of Rundle (1988b).

	Models of Single Faults

	A major theoretical theme in recent years has been the development of increasingly 
sophisticated computer simulation techniques for understanding the dynamics of single faults.  
Recent observational work continues to confirm the importance of fault interactions (Bilham and 
Bodin, 1992; Hill et al., 1993; Sanders, 1993), which of course also exist between patches on the 
same fault.  These interactions cause slip on the fault to self-organize, so that slip occurs in 
earthquakes obeying the Gutenberg-Richter relation, and not as random events (the existence of 
power law behavior is incompatible with truly random dynamics).  Other work points out that the 
existence of scaling implies that small earthquakes are as important as large events in redistributing 
stress (Hanks, 1992), a fact that is clearly seen in simulations.  

	While a few authors use some version of continuum interactions (stress Green's 
functions) in their models (Rice, 1993; Ben-Zion and Rice, 1993; Ben-Zion et al., 1993; Stuart and 
Tullis, 1995; Taylor et al, 1996; Ben-Zion, 1996), many more use some version of a nearest-
neighbor slider block model.  Rice (1993) in fact argues that nearest-neighbor models are 
inappropriate.  However, it is known from extensive studies over many years on the statistical 
mechanics of spin dipole systems (Ma, 1985) that nearest-neighbor models such as Ising systems 
display much of the same physics as models with dipole (1/r^3) interactions.  This is a result of the 
large correlation lengths that appear near a critical point, in association with power law scaling like 
the Gutenberg-Richter relation.  Examples of slider-block models include Carlson et al. (1991); 
Brown et al. (1991); Carlson (1991); Narkounskaia and Turcotte (1992); Huang et al. (1992); 
Knopoff et al. (1992); Shaw et al. (1992); Vasconcelos et al. (1991); Olami et al. (1992); 
Rundle and Klein (1993); Ding and Lu (1993); de Sousa et al. (1993); 
Lomnitz-Adler (1993); Pepke et al. (1994); Rundle and Klein (1994).  

	Some of these papers attempt to put earthquakes into the same category as the self-
organized criticality model for sandpiles that has no tuning parameter.  Others show that 
earthquakes are more probably an example of self-organizing systems that can be tuned to 
approach a critical point, the different physics implying membership in a different universality class 
and different pattern formation mechanisms.  These models typically use concepts from percolation 
theory to analyze the simulation data.  Sahimi et al. (1993) use the percolation idea directly to show 
that large earthquakes are associated with the backbone of the percolation cluster of fractures 
forming the fault system in the earth.  Others found chaos in the dynamics of their models 
(Narkounskaia et al.,  1992), even in low order systems (Huang and Turcotte, 1992).  Models can 
in principle also be used as testbeds to develop techniques for earthquake forecasting. Ward 
(1992), Dowla et al. (1992) and Pepke et al. (1994) suggested several methods for testing the 
predictability of synthetic earthquake sequences, using for example neural networks as well as 
pattern recognition.  

	Models of Interacting Fault Systems

	While models of single faults can be used to compute surface deformation associated with 
failure processes (e.g., Stuart and Tullis, 1995), only much more realistic models of three 
dimensional interacting fault systems can approach the kind of detailed predictions necessary in 
using actual data.  This category of model includes those of Stuart (1986), Rundle (1988b), Ward 
(1992), and Ward and Goes (1993).  

	In fact, the types of data from real fault systems to which the GEM simulations will be 
compared include all the same data that characterize natural fault systems (see, e.g., Kanamori, 
1993).

Figure 1.  Real and model faults in southern California (see Rundle, 1988, for details of the model calculations).

Figure 2.  Space-time diagram for model seismicity and model friction, as a function of distance along the model San 
Andreas fault.  The dark portion in the middle is the creeping part along the central San Andreas fault in the Carrizo Plain.  
Friction parameters in the Big Bend section were optimized to reproduce events similar to the real data (see Figure 3).

Figure 3.  Comparison of real and simulated earthquakes.  Slip as a function of distance along the fault is shown for 
several model and real events.  Note the similarity between the real 1857 event and the simulated event at model year 
9707.  Data from Sieh and Jahns (1984).

Figure 4.  Time dependent deformation field associated with the model event at model year 9707.

Figure 5.  Comparison of the real surface strain field in southern California and the strain field calculated from the 
simulations.  The simulation strain field is calculated at a time corresponding to the relationship between the 1857 event 
and the strain field observed today (i.e., about 130 years later).  Data from Savage et al. (1986).

This includes, for example, 1) earthquake histories obtained from paleoseismic 
investigations, other geological data and historical records; 2) fault creep data obtained from 
creepmeters and strain meters; 3) the temporal development of slip on faults as determined by local, 
regional, and teleseismic data, including strong motion instruments; 4) spatial and temporal 
seismicity patterns, together with scaling statistics such as Gutenberg-Richter distributions, Omori 
laws, and foreshock and aftershock statistics; 5) surface deformation data obtained from historical 
geodetic techniques, land-based methods such as multiwavelength trilateration, and newer space-
based methods such as Global Positioning System technology and Synthetic Aperture Radar 
Interferometry; 6) in situ data such as stress and dilatometer measurements, pore fluid pressure, 
and radon emission rates; and 7) earthquake source mechanism compilations. 

	As an example of the approach described above, and the synthetic data that may be 
computed, Figures 1-5 show results obtained by Rundle (1988) in a crude simulation of the San 
Andreas fault system using 80 fault segments embedded in an elastic lithosphere overlying a 
viscoelastic half space.   This model used a modified CA algorithm for a friction law, in which 
changes in fault normal stress were included in the CA version of Amontons' law of friction.  
Additionally, the geometric configuration of the faults is held fixed in time, which for short time 
intervals is probably a good first approximation.  The far field displacement rates are obtained 
from rigid plate motion models.  The simulations clearly show that field data can be simulated to as 
realistic an extent as is desired for comparison with natural, observed data.

C.7	Data Issues for Validation of Simulations

	The collection of new observational and laboratory data is not the focus of this proposal, 
although there may be some secondary need to verify a laboratory result, or to add a new field 
observation to the data base.  Instead, data are viewed here as a means for validating the 
simulations.  The GEM team expects, however, that recommendations for new data collection 
activities will emerge as a natural outgrowth of the simulations, and that an interesting new 
feedback loop will emerge between observationalists and modelers as a result of the GEM project.

	Management of Existing Earthquake Data

	Primary responsibility for earthquake data collection and archiving lies with the Southern 
California Earthquake Center (SCEC) at the University of Southern California, the Seismological 
Laboratory of the California Institute of Technology, and the Pasadena field office of the United 
States Geological Survey.  Data in these archives are of several kinds: 1) broadband seismic data 
from the TERRASCOPE array owned and operated by CalTech, 2) continuous and "campaign 
style" geodetic data obtained through a number of mechanisms such as trilateration, satellite laser 
ranging, VLBI, and GPS, 3) paleoseismic data obtained from historic and field trenching studies 
detailing the history of slip on the major faults of southern California, 4) near field strong motion 
accelerograms of recent earthquakes (last two decades), 5) field structural geologic studies and 
reflection/refraction data indicating the orientations and geometries of the major active faults, 6) 
other sparsely collected kinds of data including pore fluid pressure, in situ stress, and heat flow 
data.  These data will be used, for example, to update the fault geometry models developed under 
the GEM project, and to update fault slip histories that can be used to validate various earthquake 
models developed under GEM.  

	Primary responsibility for interacting with elements of this data base will be given to 
Kanamori, Jordan, Stein, and Minster.

	New Data: Stress Analysis of the Earth from Synthetic Aperture Radar Interferometry

	A number of SAR missions are currently taking data over the southern California 
region.  These include the C-band (5.8 cm) European ERS-1/2 satellites and the L-band 
Japanese JERS satellite.  These missions have already produced revolutionary images of 
the complete deformation fields associated with major earthquakes in the United States 
(e.g., Massonet et al., 1993) and Japan, and much more is on the way.  Moreover, there 
are at this time several proposals before NASA to initiate a US SAR mission.  A SAR 
instrument as currently envisioned by the investigator team, led by Minster and on which 
Rundle and Turcotte serve, will provide the definitive technology for testing GEM models, 
and will lead to a "quantum jump" in our ability to monitor and perhaps forecast these 
events.  As currently configured, the mission will have the advantages of 1) all weather 
imaging; 2) the ability, using Left/Right (L/R) radar pointing, to image any spot on the earth in 
less than 2 days and probably 1 day; and 3) capabilities including a 24 day exact repeat, 3 swaths 
every 8 days, ascending/descending orbit repeat of 4 days, L/R repeat of 2 days.  A major 
advantage of this mission will be extremely frequent repeats, so that fine space-time details 
of major events can be studied.  Optimal use of these capabilities depends on having the 
ability to send instructions to the spacecraft, i.e. short response times, and the ability to 
download data and make interferograms rapidly.  

	For surface change missions to study earthquakes, deformation maps produced by SAR 
Interferometry will allow unprecedented levels of detail.  Current models of deformation are very 
crude, and severely limited in detail due essentially to the Uniqueness Theorem of linear elasticity.  
This theorem asserts that displacements and forces everywhere on the boundary of an elastic 
material must be known in order to uniquely infer the forces and displacements within the medium 
that caused those surface effects.  With GPS, only a few tens or, at most, hundreds of points will 
ever be available.  However, ISAR images will transform the field from "data poor" to "data rich", 
and it will be possible to study details of earthquakes as never before.  In other words, we will be 
able to carry out "stress analysis of the earth", similar to the stress analysis methodologies 
employed by civil and mechanical engineers to study materials and structures.  This change data 
will in fact be the most important dataset with which to test GEM-type models.  The extremely fine 
details of these models demand an equally highly detailed space-time data set with which to test the 
models.  For example, current ERS images taken over the Los Angeles basin clearly show 
deformation associated with fluid withdrawal-induced subsidence and other natural and man-
induced processes (G.Peltzer, personal comm., 1996).  Other SAR data sets recently compiled 
clearly show evidence of crustal pore fluid-induced deformation following the June 1992 
magnitude ~7 Landers earthquake.  The extent of these processes must be known in order to test 
and modify the GEM suites of models. 

	Primary responsibility for interacting with the SAR data will be given to Minster, Turcotte 
and Rundle.  Again, Minster is the team leader for the NASA Earth System Science Pathfinder 
proposal to orbit an L-band Interferometric Synthetic Aperture Radar in the near future, and 
Rundle and Turcotte are team members.

C.8	Overview of Computational Science Issues	

	The GEM Problem Solving Environment (PSE) requires several technology 
components, which we discuss in the following. One major component is the set of detailed 
simulation modules for the variety of physics and numerical approaches discussed in the 
preceding sections. These modules will be set up in an object oriented framework using the 
Los Alamos POOMA system. The fast multipole, statistical mechanics and cellular 
automata subsystems will all need state of the art algorithms and parallel implementations. 
However the overall PSE will need a coarse integration both to enable distributed 
collaboration on the simulations and to link simulation, data analysis and visualization. In 
the following we discuss these various aspects of the GEM PSE, starting with the basic 
numerical formulation to set the scene.

	C.8.1 Computational Formulation of Seismicity

	Section C.5 presents an overview of the scientific framework for GEM. The basic 
degrees of freedom in GEM correspond to fault segments, with the independent variables 
being the slip vector s(i,t or x,t) or, more precisely, the slip deficit f(i,t or x,t).  
Here the slip deficit represents the deviation from the average long term movement of the 
fault.  In the original (Rundle 1988) calculation, there were N=80 fault segments which 
averaged 30 km (horizontal) by 18 km (vertical) in size. In a maximally realistic simulation, 
one would expect to need time and position varying segment sizes, with 100 meters as an 
initial goal for the segment size in critical regions. This indicates that one needs several (perhaps up to 
100) million segments, and so scopes the problem as one of Grand Challenge proportions! In 
fact a 10 meter resolution (corresponding to a local magnitude of about 0) is likely to be 
important, with a corresponding two-order-of-magnitude increase in the number of degrees 
of freedom.  
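	A rough count, using the original segment dimensions quoted above purely for scale, illustrates the point:

	N \approx 80 \times \frac{30\ \text{km} \times 18\ \text{km}}{(0.1\ \text{km})^2} \approx 4 \times 10^6,
	\qquad N_{10\ \text{m}} \approx 100\, N \approx 4 \times 10^8

so that 100 meter patches already imply several million degrees of freedom, and 10 meter patches roughly a hundred times more.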

	In more recent simulations of long range single fault models (Rundle et al., 1996; 
Klein et al., 1996; Ferguson, 1996), networks of sites much larger than Rundle's original 
simulations have been modeled.  Runs of 100 x 100 sites, with individual sites interacting 
with 2^12 - 1 neighbors, have been carried out for 10,000 earthquakes at a time.  These runs 
typically take of the order of several hours on a Pentium 150 MHz single processor desktop 
machine.  Models with 256 x 256 sites have run for 10,000 earthquakes in which all sites 
interact with each other.  Making use of a simple Fourier transform algorithm, these runs 
take several hours to complete on a 128 node CM-5, or on the current Boston 
University Power Challenge system.  In real time, this would correspond to as much as 
several years of seismicity on a fault system the size of southern California.

	In section C.5, the general framework for the problem is formulated as a set of 
Green's function equations for the stress and displacements at the fault segments.  It is 
convenient to  "think" about fault segments as "particles" interacting with a long range force 
which originates from a "double-couple", i.e. a pair of equal and opposite dipoles.  Initially 
we will ignore time dependence but this approximation will be relaxed (in year 3) with the 
explicit inclusion of radiative time dependence.  This formulation corresponds to  a set of N 
ordinary differential equations -- one for each "particle" i (i.e. fault segment) -- where each 
right hand side has a sum of N terms corresponding to the long range force structure.  At 
this level, we have a situation like the conventional O(N^2) N-body problem, but there are 
many important differences. For instance, the critical dynamics -- namely earthquakes -- are 
found by examining the stresses at each time step to see if the friction law implies that a slip 
will occur.  As discussed earlier, there are many different proposals for the friction law and 
the computational system needs to be set up to be able to flexibly change and compare 
results from different friction laws.  The analogies with statistical physics are seen by 
noting that earthquakes correspond to large scale correlations with up to perhaps a million 
10-to-100 meter segments slipping together. As in critical phenomena, clustering occurs at 
all length scales, and we need to examine this effect computationally.  We find differences 
with the classical molecular dynamics N body problems not only in the dynamical criteria 
of importance but also in the dependence of the Green's function (i.e. "force" potential) on 
the independent variables. Thus we expect that a major algorithmic effort will be needed to 
examine successful approaches to the classic N body problem and adapt them to GEM. 
Colorado and Syracuse will play a key role in this aspect of the project.
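	Schematically, and suppressing tensor indices and the details of the friction law, the coupled system has the form

	\sigma_i(t) = \sum_{j=1}^{N} T_{ij}\, f_j(t) + \sigma_i^{\text{ext}}(t), \qquad
	\text{slip at segment } i \ \text{when } \sigma_i \ge \sigma_F^{(i)},

where T_ij is the double-couple stress Green's function coupling segments i and j, f_j is the slip deficit, sigma_i^ext is the loading from far field plate motion, and sigma_F^(i) is the failure threshold supplied by the chosen friction law.  Every right hand side carries the full sum over j, which is the O(N^2) structure referred to above.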

	There are many important and distinct ways of tackling the basic GEM equations 
and these must all be supported by the GEM Problem Solving Environment. Note that a 
sophisticated fully operational fast multipole simulator does not mean that simpler 
approaches become irrelevant. Rather one still needs approximate methods such as cellular 
automata, as these provide important insight as to the fundamental mechanisms underlying 
earthquakes. One can use either direct time evolution or Monte Carlo methods and the 
analogies with statistical mechanics suggest the latter will be valuable although current 
simulations  have not investigated this. However the GEM computational framework must 
support both possibilities.  In the following three subsections, we describe the different 
computational issues in the basic GEM simulation system. The final subsection describes 
the overall system integration.

	C.8.2 Cellular Automata and Statistical Mechanics Approaches to 
GEM

	This section describes two important approaches that allow improved intuition and 
simplification by using formulations familiar from other fields. These need to be supported 
by the GEM PSE and will be a major focus of the initial simulations while the new 
multipole methods are being developed.  In the "Slider-Block" approach one only keeps the 
"nearest-neighbor" interactions and this approximation can be generalized to a screened 
approximation where Green's function elements are ignored when the interacting segments 
are separated by an appropriate distance.  With such a screened force, the computation 
becomes O(N) of course, and not the O(N^2) of the traditional long range force case. In the 
"cellular-automata" approach (used in Rundle 1988), one uses a rather large time step ( 1 
year in Rundle 88) and evolves the "lattice" of fault segments in discrete steps.  In this 
approach, one sees a formulation very similar to that of statistical mechanics with an 
earthquake in GEM corresponding to a long range correlation-induced cluster in the 
statistical physics case. Clustering problems are rather hard to parallelize (we have 
experience with this from Swendsen Wang and other statistical mechanics problems as 
described in Fox et al., 1994).  There are often not enough points in the cluster (earthquake) 
to allow much parallelism. In GEM this depends on the total number of clustered points at each 
time step -- it must be either zero or very many to allow efficient parallelization. As 
earthquakes (clusters) are not uniformly distributed in the physical domain (at a given time step), one must 
use some sort of cyclic (block scattered) decomposition so that each processor participates 
roughly equally in any given earthquake scenario. The parallelism is hardest in the case of a 
screened or O(N) force computation.  As in the long range force version, one can get 
parallelism in force computation, and more generally, the computation of forces (which is 
easy to parallelize) becomes a larger part of the computation time.  The level of effort 
needed in parallel cellular automata methods will depend on experience gained.  We do 
know how to parallelize the general fast multipole method and this may be the approach of 
choice on large systems with the CA methods used on smaller SMP's without explicit 
parallelism.  There are some important software differences between CA and fast multipole 
methods. The former does not typically need sophisticated data structures and so 
approaches such as HPF have been effectively used in this problem class.
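	The block-scattered assignment mentioned above can be expressed as a one-line mapping from a site index to its owning processor (a sketch; blockSize is a hypothetical tuning parameter chosen so that any localized cluster of failing sites is spread over many processors):

	#include <cstddef>

	// Block-scattered ("cyclic") decomposition: sites are dealt out to processors
	// in small contiguous blocks, so a spatially localized earthquake cluster
	// touches most processors roughly equally.
	inline int ownerOfSite(std::size_t site, std::size_t blockSize, int nProcs)
	{
	    return static_cast<int>((site / blockSize) % static_cast<std::size_t>(nProcs));
	}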

	C.8.3 The Fast Multipole Algorithm

	General N Body Problems:  In an N-body simulation, the phase-space density 
distribution is represented by a large collection of "particles" which evolve in time 
according to some physically motivated force law.  Direct implementation of these systems 
of equations is a trivial programming exercise.  It is simply a double loop.  It vectorizes 
well and it parallelizes easily and efficiently.  Unfortunately, it has an asymptotic time 
complexity of O(N^2).  As described above, each of the N left-hand sides is a sum of N-1 right-
hand sides.  The fact that the execution time scales as N^2 precludes a direct implementation 
for values of N larger than a few tens of thousands, even on the fastest parallel 
supercomputers. Special purpose machines (such as GRAPE) have been successfully used, 
but these do not seem appropriate in an evolving field like GEM while we are still in the 
prototyping phase.
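	The "double loop" referred to above looks roughly as follows in C++ (a sketch; the 1/r^3 kernel is only a placeholder standing in for the true double-couple stress Green's function, and the segment layout is invented for the example):

	#include <vector>
	#include <cmath>
	#include <cstddef>

	// Direct O(N^2) stress evaluation: every segment i accumulates contributions
	// from every other segment j.
	struct Segment { double x, y, slipDeficit; };

	void accumulateStress(const std::vector<Segment>& seg, std::vector<double>& stress)
	{
	    const std::size_t n = seg.size();
	    for (std::size_t i = 0; i < n; ++i) {
	        double sum = 0.0;
	        for (std::size_t j = 0; j < n; ++j) {
	            if (j == i) continue;
	            double dx = seg[i].x - seg[j].x, dy = seg[i].y - seg[j].y;
	            double r  = std::sqrt(dx * dx + dy * dy);
	            sum += seg[j].slipDeficit / (r * r * r);   // placeholder interaction
	        }
	        stress[i] = sum;                               // O(N) work per segment
	    }
	}

Each of the N outer iterations does O(N) work, which is the N^2 scaling that makes this approach impractical for millions of segments.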

	General Multipole Methods:  Several methods have been introduced which 
allow N-body simulations to be performed on arbitrary collections of bodies in time much 
less than O(N^2), without imposition of a lattice (Appel, 1985; Barnes and Hut, 1986; 
Greengard and Rokhlin, 1987; Anderson, 1992).  They have in common the use of a 
truncated expansion to  approximate the contribution of many bodies with a single 
interaction.  The resulting complexity is usually determined to be O(N) or O(N log N) 
which allows computations using orders of magnitude more particles.  These methods 
represent a system of N bodies in a hierarchical manner by the use of a spatial tree data 
structure.  Aggregations of bodies at various levels of detail form the internal nodes of the 
tree (cells).  Making a good choice of which cells interact  and which do not is critical to the 
success of these algorithms (Salmon and Warren, 1994).  N-body simulation algorithms 
which use adaptive tree data structures are referred to as "treecodes" or "Fast Multipole" 
methods.  By their nature, treecodes are inherently adaptive and are most applicable to 
dynamic problems with large density contrasts, while fast multipole methods have been 
mostly non-adaptive and applied to fairly uniform problems.  It is likely that this distinction 
will blur in the future, however. 

	The fundamental approximation employed by a gravitational treecode may be stated 
as:

	\sum_j G\, m_j\, \mathbf{d}_{ij} / |\mathbf{d}_{ij}|^3 \;\approx\; G\, M\, \mathbf{d}_{i,cm} / |\mathbf{d}_{i,cm}|^3 \; + \; \text{higher order multipoles}

where d_{i,cm} = x_i - x_{cm}.  Here x_{cm} is the centroid of the particles that appear under the 
summation on the left-hand side.  These Newtonian or Coulomb interactions are a special 
case of a more general formulation that can be applied to any pairwise interaction or 
Green's function.  Even short-range interactions, i.e., those which can be adequately 
approximated as vanishing outside some finite range, fit within this formulation.  The GEM 
collaboration has established the fundamental 
mathematical framework for determining error bounds for use with arbitrary force laws 
(Salmon and Warren, 1994; Warren and Salmon, 1995).  This error bound is essentially a 
precise statement of several intuitive ideas which determine the accuracy of an interaction.  
The multipole formalism is more accurate when the interaction is weak, when it is well-
approximated by its lower derivatives, when the sources are distributed over a small 
region, when the field is evaluated near the center of the local expansion, when more terms 
in the multipole expansion are used, or when the truncated multipole moments are small.
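	The way a treecode exploits this approximation can be illustrated with the classic opening-angle test of Barnes and Hut (1986), which is a simpler acceptance criterion than the rigorous Salmon-Warren error bounds; the cell layout and 1/d^2 interaction below are illustrative only:

	#include <vector>
	#include <cmath>

	// Sketch of a treecode walk: a cell of size s at distance d is accepted as a
	// single (monopole) interaction when s/d < theta; otherwise its children are
	// opened and examined recursively.
	struct Cell {
	    double cmx, cmy, mass, size;     // centroid, aggregate "charge", cell extent
	    std::vector<Cell*> children;     // empty for leaf cells
	};

	double fieldAt(double x, double y, const Cell& c, double theta)
	{
	    double dx = x - c.cmx, dy = y - c.cmy;
	    double d  = std::sqrt(dx * dx + dy * dy);
	    if (c.children.empty() || c.size < theta * d)
	        return c.mass / (d * d);     // far enough away: monopole term only
	    double sum = 0.0;
	    for (const Cell* child : c.children)
	        sum += fieldAt(x, y, *child, theta);   // open the cell and recurse
	    return sum;
	}

In a production code the acceptance test is replaced by the analytic error bound, and higher multipole moments are carried along with the monopole term.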

	Multipole Algorithms and Computational Approach:  One of two tasks 
generally takes the majority of the time in a particle algorithm: (1.)  Finding neighbor lists 
for short range interactions. (2.) Computing global sums for long-range interactions.  
These sub-problems are similar enough that a generic library interface can be constructed to 
handle all  aspects of data management and parallel computation, while the physics-related 
tasks are relegated to a few user-defined functions.  One of our primary computational 
research objectives will be to define and implement a series of generally useful, 
independent modules that can be used singly, or in combination, to domain decompose, 
load-balance, and provide a useful data access interface to particle and spatial data.  In 
combination, these computational modules will provide the backbone for the 
implementation of our own (and, hopefully, others') algorithms.  These algorithms imply 
very complex data structures and either direct message passing or parallel C++ (supported 
by POOMA) is the natural software model.  In contrast, the direct N body methods are 
straightforward to implement in HPF.
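	The kind of separation meant here, where the library owns data management and parallelism while the application supplies only the pairwise physics, can be sketched as a generic interface (names are illustrative, not an existing API, and a real implementation would walk the distributed tree rather than the O(N^2) loop shown):

	#include <cstddef>

	// Hypothetical generic interface: the library handles decomposition, tree
	// construction, and communication; the user supplies the physics as a callable.
	template <typename Body, typename PairInteraction>
	void forEachInteractingPair(Body* bodies, std::size_t n, PairInteraction apply)
	{
	    for (std::size_t i = 0; i < n; ++i)        // stand-in for the tree traversal
	        for (std::size_t j = 0; j < n; ++j)
	            if (i != j) apply(bodies[i], bodies[j]);
	}

A GEM friction law or stress Green's function would then be swapped in simply by changing the user-defined interaction, without touching the parallel machinery.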

	Application of Fast Multipole Methods to GEM:  Fast multipole methods 
have already been applied outside the astrophysics and molecular dynamics area. In 
particular the Caltech and Los Alamos groups have successfully used them for the vortex 
method in Computational Fluid Dynamics. There are several important differences between 
GEM and current fast multipole applications, which we briefly discussed in section C.8.1. 
We believe a major contribution of this Grand Challenge will be an examination of the 
software and algorithmic issues in this area, as well as showing how good modular object 
oriented design can allow a common framework across many fields.  An area of importance 
which is still not well understood in current applications will be the use of spatially dependent 
time steps, with smaller values needed in earthquake regions.  An important difference 
between true particles and the GEM case is that in the latter, the fault positions are 
essentially fixed in space. Of course a major challenge in astrophysics is the time dependent 
gravitational clustering of particles. It is not clear if one can exploit this simplification in the 
case of GEM.

	C.8.4 POOMA and the Overall Software Support
	
	The Parallel Object Oriented Methods and Applications framework encapsulates the 
complexities of data distribution and interprocessor communication beneath a high-level 
programming interface which maps well onto the equations being solved. The 
computational scientist, using classes representing the relevant high-level physics 
abstractions, can focus on designing numerical algorithms, running simulations, and 
analyzing results. Computer scientists and experts on parallel computing can focus 
independently on the implementation layers beneath. This independence of programming 
interface and internal workings allows applications to be optimized without changing the 
high-level code.
	Classes for representing fluid and electromagnetic fields discretized on meshes; 
classes for representing particles in Monte Carlo, particle-in-cell, and molecular dynamics 
simulations; and the interactions of particles with fields on meshes have already been 
implemented in the POOMA framework. These all have parallel processing on distributed-
memory machines or SMP's included in their design; the basic unit of parallelism--the 
virtual node--provides a mechanism for load balancing (there can be, and typically are, 
multiple virtual nodes per physical compute node). Usage and extension of these classes, 
coupled with current ASCI development toward adaptively refined meshes, will facilitate 
development of the specialized Seismic framework.

	C.8.5 Implementation of Fast Multipoles within POOMA

	In somewhat more detail, fast multipole techniques work by clustering the particles 
into regions of space defined by an octree.  This data structure maps well onto the POOMA 
framework's virtual nodes (see below).  Each v-node represents a region of space and 
manages the containers for whatever particles are in that region.  As particles move from 
the domain of one v-node to another they are transferred to that v-node.  If the other v-node 
is on another processor of a parallel computer, it is packed up, buffered with other particles 
moving to that processor, and communicated. This will employ technology already in use 
by the POOMA team.
	Because of the object-oriented structure of the framework, the job of handling the 
particles on v-nodes is separated from how v-nodes are created and assigned to processors.  
This will allow us to use the techniques developed by Warren and Salmon for distributing 
an octree near-optimally on a massively parallel computer and efficiently coordinating the 
communication between v-nodes.
	Much of the dipole dynamics will then be handled similarly to the molecular 
dynamics and particle-in-cell work that is currently being done with the framework.  The 
main new element here will be the inclusion of the algorithms designed by Warren and 
Salmon.  The first development atop the POOMA framework will entail casting the 
algorithms for:

	1) Calculating the layout of an octree of v-nodes across the processors of a parallel 
computer
	2) Coordinating the communication of messages between the v-nodes  
	3) Organizing the order of the computation and the replication of data so that the 
calculations may proceed in parallel as much as possible

With this infrastructure in place, capabilities specific to seismic modelling will be 
implemented including:

	1) Simulations with adaptively refined meshes,
	2) Particle-grid interactions,
	3) Fast-time behaviour,
	4) System subcycling, and
	5) Inhomogeneous modelling.

	Because the framework was designed to allow users to provide their own load 
balancing techniques, introducing these methods should be straightforward.  Building this 
sort of simulation tool on top of the POOMA framework has the potential benefit of 
allowing us to easily do grid-based calculations at the same time as particle-based ones.  
This could be useful for visualization purposes or for working with hybrid algorithms that 
supplement or modify the dipole interactions based on PDE's solved on the grids.  The 
important thing to note is that although sophisticated techniques are being developed and 
deployed to handle the parallel aspects of multipole in POOMA, the application developer is 
presented with an interface which represents a global physics structure (for example, a 
species of particles or field tensor). In this manner, the application developer is able to 
focus on the physics issues of a given model or simulation without having to manage the 
extra complexities of parallelism.

	C.8.6 I/O, Visualization, and Data Display and Analysis

	By designing I/O in terms of objects, the I/O interface for the computational 
scientist is simple and high-level. Since the high-level classes in the framework represent 
the physics and numerical abstractions of the problem domain--for example, vector fields 
discretized on a mesh of nodes--this interface is also a natural one. It also permits us to 
optimize the design of the underlying implementation details independently, without 
changing the high-level application code's interface to I/O. Our work to date has focussed 
on implementation layers using portable data formats such as Unidata's NetCDF (built atop 
special parallel I/O substrates). Having the end output in the form of collections of files in a 
standardized, machine-independent format makes it easier to use different tools for 
postprocessing such as generic reader modules for visualization systems. We will 
implement the I/O interface for the Seismic framework's classes along with the rest of the 
classes' interfaces, leveraging off our I/O development for ASCI and other applications. 
This will fit well into our ongoing collaboration with LANL ASCI scientists working on 
the general problem of parallel I/O and data sharing between large-scale simulations.
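	As a much simplified illustration of the kind of machine-independent output intended, a single snapshot of a slip-deficit field could be written through the standard NetCDF C interface roughly as follows; the dimension and variable names are invented for the example, and in the framework this would sit beneath the high-level I/O classes:

	#include <netcdf.h>
	#include <vector>

	// Write one snapshot of a slip-deficit field to a NetCDF file.
	int writeSnapshot(const char* path, const std::vector<double>& slipDeficit)
	{
	    int ncid, dimid, varid;
	    if (nc_create(path, NC_CLOBBER, &ncid)) return -1;          // new file
	    nc_def_dim(ncid, "segment", slipDeficit.size(), &dimid);    // one dimension
	    nc_def_var(ncid, "slip_deficit", NC_DOUBLE, 1, &dimid, &varid);
	    nc_enddef(ncid);                                            // leave define mode
	    nc_put_var_double(ncid, varid, slipDeficit.data());         // write the data
	    nc_close(ncid);
	    return 0;
	}

Any NetCDF-aware visualization or postprocessing tool can then read the file without knowledge of the simulation code.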

	C.8.7 Integration of the GEM Components

	In Years 2 and 3 of the Proposal, it will be essential to build the overall 
infrastructure to allow the integration of data analysis and simulations for GEM.  Giles at 
Boston and Syracuse will have major roles here. Although important, we expect that an 
approach using the Web will be very effective and require modest funding from the 
proposal. The general approach is described in two online papers (Fox et al, 1995, Fox et 
al., 1996) but we do not feel it is appropriate to go into detail as we believe that the Web is 
evolving so rapidly that detailed plans at this stage are unrealistic.  However we can 
indicate some general characteristics of this systems integration problem. The commercial 
Web industry will supply the basic collaboration environment with digital video and 
audio support of videoconferencing and a rich variety of tools including common 
whiteboards, databases etc. We can use Java to develop high level visual interfaces to the 
GEM resources and allow users to specify the necessary linkage. Prototypes of this can be 
seen at http://www.npac.syr.edu/projects/webbasedhpcc/index.html at the NPAC web site.

C.9	Project Plan for the GEM Grand Challenge

	Approach:  A Problem Solving Environment for the GEM Grand Challenge 
project will be constructed to accommodate a series of increasingly realistic and complex 
levels of simulations using novel parallel algorithms.  Visualizations will be particularly 
important inasmuch as observational data sets must be overlain on simulations, and 
methods must be developed that will allow an "intuitive" approach to data analysis as well 
as more rigorous statistical methods of analysis.  With each new level in simulation 
complexity, more spatial and temporal details will be included than was the case for prior 
simulations, as well as more realistic rheologies, more detailed friction models, and so 
forth.  Thus, simpler models can be understood, evaluated and discussed before 
proceeding to the next level.  This approach will be supported by the POOMA object-
oriented framework developed at Los Alamos. The physics included in a given set of 
simulations will be discussed and decided upon by the investigator group, or large subsets 
of the investigators.  Throughout this work, close coordination will be pursued with the 
Southern California Earthquake Center, which Rundle currently advises as Chair of the 
Advisory Council of SCEC.

	C.9.1 Basic Model

	Level 0 simulations will be built upon constructing algorithms for the following 
model sets:

	1) Major fault systems set in a "meanfield" background of smaller secondary faults, 
		with reference to the most recent observed fault geometries.  Fault geometry 
		will not evolve in time initially.  Faults will be predominantly vertical strike 
		slip faults for southern California.
	2) Linear rheologies (isothermal elastic and viscoelastic) for the
		upper crust of the earth.
	3) Simple algorithms for implementation of friction models, including Cellular 
		Automaton, Traveling Density Wave, and Rate and State friction.  
		Note that this will still allow a moment release function to be computed for 
		simulated earthquakes.
	4) Neglect of inertia in the sliding process.
	5) Neglect of faults below a spatial scale to be determined.

Most of these assumptions are the same as, or very similar to, those of the 80 fault model as 
implemented in the simulations carried out by Rundle (1988). 

	Level I and higher level simulations will include the following to varying 
degrees.  Note that all of these additional effects involve questions of science that to 
varying degrees are still fundamental research questions.

	1) Evolving fault geometries: shear and tensional fracture problem
	2) Different and more realistic frictional physics
	3) Inclusion of the dramatically increasing number of smaller fault
		systems as a kind of "noise" in the simulations
	4) Inclusion of radiative effects:  Inertia and elastic waves
	5) Inclusion of other rheologic effects, including pore fluids
	6) Thermal effects, and possibly chemical processes, particularly as related
		to stress corrosion

	Scientific and Computational Problems:  In all of the simulations and model  
activities, the GEM team will focus on a basic set of scientific and computational problems  
as listed below.  These are generic problems which are applicable to all GEM type 
simulations, and thus they will recur repeatedly.

	Generic GEM scientific problems include:

	1) Developing the procedures and mechanisms for using a General Earthquake 
Model (GEM) methodology as a focus in organizing the data collection and 
interpretation activities in the field of earthquake science. 

	2) Evaluating whether forecasting of earthquakes at some level is at all possible,  
and how this depends on the length and time scales upon which the events are to be 
forecasted.  

	3) Developing a quantitative understanding of fault interaction and screening at  
various scales, and of the appearance of cooperative effects on members of fault systems, 
including earthquake triggering and self-organization phenomena.  

	4) Examining the roles played by quenched and annealed disorder, as well as the  
effects of random fluctuations, in determining the dynamical evolution of slip on faults  
embedded in the elastic and inelastic materials that comprise the earth's crust.

	5) Evaluating the applicability of proposed nucleation mechanisms for describing 
precursory effects, aseismic slip, scaling, and seismicity patterns associated with  
earthquakes.

 	6) Elucidating the fundamental equations that describe the time-dependent nonlinear 
processes associated with earthquakes, fault creep, evolution of fault geometry and stress 
transfer in the earth's crust. 

	Generic GEM computational problems involve:
	
	1) Development of appropriate algorithms and approximations capable of 
simulating, on parallel computers, an interacting fault system down to a 10 - 100 meter 
length scale.

	2) Development of the Problem Solving Environment including advanced 
visualization techniques that allow both rigorous statistical comparisons with data, as well 
as more intuitive and visual comparisons.  This will use commercial Web technology to 
support collaboration.

	3) Development of a fast multipole or equivalent algorithms for including  
interactions between fault segments in a computationally efficient manner.  These  
algorithms must allow numerical solution of very large numbers of nonlinear coupled  
equations using a geographically distributed, heterogeneous computational environment based 
on a shared memory configuration.  Other novel computational modes may be considered  
as appropriate.

	4) Development of Object-Oriented techniques to allow rapid prototyping with a 
variety of different approaches and models. These will focus on the use and extension of 
Los Alamos's POOMA framework.

	5) Evaluation of HPCC software methods and systems using benchmarking on a 
variety of different architectures.

	C.9.2  Proposed Tasks and Research Foci by Year

	Category I Tasks:  Science: Earth Science & Statistical Physics Tasks

Year 1: Major Activities:  

	1) Collecting and setting up (in object oriented code) the quasistatic Green's 
functions (linear elastic, viscoelastic, and to a lesser extent poroelastic), so that spatial (and 
later temporal) multipole representations of them can be established

	2) Using the existing data base to establish the basic model parameters, including 
major fault geometries, slip rates, earthquake histories, and seismic and aseismic activity on 
the faults.  An important decision will be the number and nature of the blind faults (no 
surface expression) to be included.

	3) Establishing the basic specifications for Geographic Information System-type 
overlays of simulation output with data

	4) Analyzing and predicting the physical processes to be expected from the various 
meanfield versions of the model geometry, including screening and frustrated ordering 
effects

	5) Consideration of the characteristics of the noise to which the faults are subjected 
due to unmodeled small scale faults, for example: a) geometric (representation of major 
faults can only be correct down to some scale), b) temporal (smaller faults can easily be 
created as new fractures in the medium, or unmodeled time-dependent processes in the 
crust), c) loading (space and time variations in the effective loading rate of the crust, 
whether real or induced).  The nature of the noise, and whether it is correlated or 
uncorrelated, will have a major impact on the dynamics of the problem.

Year 2: Major Activities:  

	1) Testing and initial validation of simulated fault systems, by using initial 
comparisons with field data.  In particular, evaluation of adaptive multigrid techniques for 
implementation of time evolution of failing cascades of fault segments during earthquakes, 
and initial comparison to source time functions of real events.

	2) Evaluation and comparison of the effects expected from the basic friction laws 
discussed in section C.5, emphasizing in particular the Traveling Density Wave, Cellular 
Automaton, and Rate and State friction models.

	3) Analytical and numerical analyses of excitatory or inhibitory interaction of fault 
segments, and how these interactions might lead to enhancement of seismicity or 
screening effects, as well as the tendency to eventually modify the geometry of the fault 
system by means of fresh rock fractures

	4) Pattern evaluation and analysis for simple fault systems using phase space 
reconstruction techniques, together with graph analysis for failure (avalanche) trees.  
Application of scaling and renormalization group analysis to a random field approximation of 
fault loading and interaction.

	5) Evaluation of effects of non-planar fault geometries, in particular, how to modify 
Green's functions, presumably using an adaptive multigrid approach, to account for the fractal 
nature of fault surfaces, and analysis of the effects of non-planar structures on seismicity 
patterns. 

Year 3: Major Activities:

	1) Evaluate requirements for visualization of simulation results, emphasizing in 
particular Geographic Information Systems-type overlays of simulation results with 
observational data

	2) Develop protocols for calibration and validation of full-up simulation capability, 
in terms of numerical benchmarking, scaling properties of models, and field data needed to 
evaluate models

	3) Organize and sort field data used for validation, including seismicity, surface 
deformation from the Southern California Integrated GPS Network and Synthetic Aperture 
Radar Interferometry, source process data from TERRASCOPE and strong motion 
instruments, paleoseismic observations of regional earthquakes, and other data including 
water level data, in situ stress, and so forth.

	4) On the basis of simulations, discuss, evaluate, and develop new hypotheses to 
be tested with new data collection activities.  Data collection to be undertaken by scientists 
at CalTech, Southern California Earthquake Center, UCSD, Cornell, MIT and Colorado, 
under other proposals to federal agencies.  In other words, develop and promote 
collaborative modes of activity and scientific interaction as envisioned in the GEM approach, 
focusing particularly on SCEC scientists.

	5) Define requirements for future levels and further refinements of GEM simulation 
technology

	6) Transfer technology to third party users as appropriate through WWW 
interfaces, and publications.

	Category II Tasks: Computational Science

Year 1: Major Activities:

	1) Develop a flexible object-oriented environment based on POOMA which will 
support current and expected GEM needs

	2) Analyze the specification of GEM information (geometry and degrees of freedom 
of faults) so that we can design appropriate data structures needed in databases and 
software for GEM.

	3) Examine the base visualization and I/O requirements of GEM as well as 
collaboration support needed. Use this to design and develop prototype Problem Solving 
Environment based on Web Technologies

	4) Develop prototype integrated systems for the better understood and simpler GEM 
computations such as the Cellular Automata approach.

	5) Develop Parallel versions of the Statistical Mechanics and Cellular Automata 
codes and examine the difficult load balancing issues 

	6) Study and prototype the software needed to support the fast multipole method 
with the changes needed by GEM compared to better understood gravitational problems.

Year 2: Major New Activities:

	Many of the activities, including refinement of POOMA, will continue from the 
previous year. We also expect continued rapid change in available commercial Web 
Technologies and will incorporate them as appropriate.

	1) Develop and use in simulations a simple brute force O(N^2) Traveling Density Wave 
solution system with fixed (in time) spatial resolution (which can however vary with 
position in the fault system) and fixed position independent time steps. Extend this to variable 
time and space resolutions and to the well known O(N^2) parallel algorithms.
	
	2) Test the first parallel multipole schemes with benchmarking on a variety of 
machines

	3) Incorporate into the multipole solver, on an ongoing basis, the technologies (friction 
laws, multiresolution time steps etc.) tested in the simpler simulation systems. Continue to 
interact with POOMA to ensure that the same modules can be re-used in a variety of 
simulation approaches.

	4) Integrate the simpler simulation approaches into a fully operational Problem 
Solving Environment supporting distributed simulation, data analysis and visualization.

Year 3: Major New Activities:

	1) Develop an operational Parallel Fast Multipole System in terms of the full capabilities 
of the Problem Solving Environment.

	2) Investigate and produce prototypes of a fully time dependent multipole method

	3) Examine implications of new architecture developments in HPCC

C.10	Individual Work Statements

	John Rundle, University of Colorado:  As Co-Principal Investigator, 
Rundle will work with Fox to lead, manage and coordinate the scientific activities described 
in this proposal.  Both Rundle and Fox will collaborate closely with all of the investigators 
to develop and exploit the GEM simulation technology. As part of the management and 
coordination activities, a major workshop will be organized each year, to be held at a 
centrally located site in Boulder, Colorado, Santa Fe, NM, or Los Alamos, NM, which all 
of the investigators will attend to discuss and coordinate results, and to plan future work.  
Additionally, this will be an open meeting that any interested scientist may attend, and 
previous experience indicates that it may turn into a national workshop on earthquake 
modeling.  There will also be smaller workshops in the New England area, and in 
California to strengthen coordination and collaborations among the investigators in those 
areas.  Rundle, Jordan, and Minster will also coordinate activities with the Southern 
California Earthquake Center, and Minster, Rundle and Turcotte will coordinate activities 
with NASA efforts directed at developing and acquiring Interferometric Synthetic Aperture 
Radar data for the Southern California area.  Rundle is currently chair of the Advisory 
Council of SCEC, Jordan is a Council member, and Minster is Vice Chair of the Board of 
Directors of SCEC.  Minster, Rundle and Turcotte are heavily involved in team activities 
for planning an interferometric SAR mission in the near future.
Year 1 Activities:  Rundle's group will collaborate on parallelizing Green's functions, 
based on standard elastic (e.g., Okada, 1992), viscoelastic (Rundle, 1982a, 1988) and 
porous elastic Green's functions (Biot, 1962; Bowen and Wiese, 1969;  Rice and Cleary, 
1976; Cleary, 1977; Rundle, 1982b; and others).  Rundle will also participate in load 
sharing calculations, and will work with Fox on multipole algorithms, by e.g., supplying 
multipolar terms from various Green's functions.  In collaboration with Klein, Rundle's 
group will focus a major part of their activity on the evaluation of screening effects, in 
particular, how these will modify the multipole expansions used to propagate stress from 
one fault to another. 
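For orientation, the interactions to be parallelized all take the generic form of a stress-transfer sum over fault segments; schematically (the notation below is illustrative only, suppressing tensor indices and the time dependence of the kernel),

	\sigma_i(t) \;=\; \sum_{j=1}^{N} T_{ij}\,\bigl[\,s_j(t) - V_j\,t\,\bigr],

where s_j is the cumulative slip on segment j, V_j its long-term loading rate, and T_{ij} the elastic, viscoelastic, or poroelastic stress Green's function.  The multipole and screening work described above amounts to grouping distant segments j into aggregate terms so that this sum need not be evaluated pairwise.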
Year 2 Activities:  Continue work on above, as well as collaborate extensively with Fox 
and others to parallelize 1) Traveling Density Wave; 2)  Cellular Automaton; and 3) Rate 
and State friction laws.  For all of the friction laws, this task will entail a careful analysis of 
the mechanics of implied space-time correlation of events, and the patterns that cascades of 
failing sites display during an earthquake, so that appropriate multigridding techniques can 
be developed.  The primary consideration will be to devise means to regrid areas of faults 
as needed, to preserve as much detail of the space-time dynamics as possible.
Year 3 Activities:  The development of Calibration/Validation & Visualization Software will 
be the major focus of activities in year 3.  Rundle's group, in collaboration with Stein, 
Kanamori, Minster, Jordan and Turcotte, as well as Fox and the ACL computational 
group, will use data plus calculations, to design and implement criteria for the construction 
of visualization techniques.  Rundle will coordinate the group's activities to carry out 
systematic comparisons to 1) statistics of seismicity; 2) surface deformation data from both 
Southern California Integrated GPS Network (SCIGN) and Interferometric Synthetic 
Aperture Radar (ISAR); 3) source process observations from both strong motion 
recordings and TERRASCOPE data as available; 4) paleoseismic observations of 
earthquakes on regional faults; as well as 5) other data such as water well levels (fluid 
pressure levels), in situ stress data, creepmeters, strainmeters, GPS campaigns, and so 
forth.  
The requested funding for these activities will include a "geophysical postdoc", a 
"computer postdoc", a student, and and administrative assistant for help in organizational 
and communication issues.  Emphasis will be on postdoctoral level workers who can help 
contribute to the success of GEM on the short time scales required.

	Geoffrey Fox, NPAC Syracuse University:  As Co-Principal Investigator, 
Fox will work with Rundle to coordinate the group activities with especial responsibility for 
the computational science tasks. This will draw on Fox's extensive experience with such 
interdisciplinary work both at Syracuse and Caltech.
Year 1: A critical task will be to understand how the multi-resolution multipole approach 
should be integrated into the GEM computational environment. Fox's group will work with 
both Rundle and the Los Alamos/Caltech group who will be largely responsible for base 
software. We will also be responsible for systems integration of the various tools for 
different models and the development using Web Technologies of a customized 
collaborative system. We will not develop any base Web technologies but use existing 
commercial database and collaboration capabilities with which we have extensive 
experience from other NPAC projects.
Year 2:  The focus will be on implementation and integration of the full system with 
multipole and cellular automata capability. We will also study implications of time 
dependent multipole methods from both software and algorithmic points of view.
Year 3:  The focus will be on substantial use of the full GEM analysis; the system 
integration work will involve interacting with all investigators, including on data and 
simulation visualization. The requested staffing includes some funding for Fox, plus a postdoc 
and a student with software and parallel-algorithm experience who will work on the 
computational science. These activities will include appropriate benchmarking and MPP 
architecture evaluations so that we can draw lessons on MPP systems from the GEM 
application. There is funding for technical support (0.33 FTE) to support the Web 
aspects of the collaborative environment.

	Advanced Computing Laboratory, Los Alamos National Laboratory 
(John V. W. Reynders, Timothy J. Williams, Steve R. Karmesin, Julian C. 
Cummings, and Michael Warren):  The work to be carried out by this group 
divides into three distinct areas.  These efforts will continue throughout the three year 
period of the proposal.
	Object-Oriented Framework Development Activities:  We propose to design and 
implement an object-oriented Seismic framework of layered class libraries for high-
performance predictive seismic simulation. This effort will build upon the Parallel Object-
Oriented Methods and Applications (POOMA) framework at the Advanced Computing 
Laboratory.  
	Fast-Multipole Implementation Activities:  The ACL group will focus a large share 
of their activities on implementing the Fast Multipole computational approach within the 
POOMA framework.  Warren in particular will assist in the analysis and development of 
time and space adaptive fast multipole algorithms for GEM.  Initial analysis will include the 
determination of multipole error bounds for the particular Green's functions involved in the 
interactions, based on the techniques developed in Salmon and Warren (Salmon and 
Warren, 1994).  Software development will primarily focus on extensions to the fast 
multipole method to accommodate features which are particular to GEM, based upon the 
work described in Warren and Salmon (Warren and Salmon, 1995).  Techniques to allow 
an efficient parallel implementation of a multipole method allowing variable time steps will 
be developed.  Additionally, advanced load-balancing techniques to accommodate the 
varying time and space resolutions in the problem will be investigated.  This work will 
begin in the first funded year of the proposal and will continue throughout the project.
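To indicate the flavor of these extensions, the sketch below is a deliberately simplified, illustrative tree-code acceptance test (the class layout, the opening parameter theta, and the 1/r kernel are placeholders, not POOMA or project code): a distant cell of fault elements contributes through its aggregate (multipole) term when an opening-angle criterion is satisfied, and is otherwise opened and examined recursively.

    import numpy as np

    def cell_contribution(cell, x_target, theta=0.5):
        # Illustrative sketch only: 'cell' is assumed to carry .size, .center,
        # .moment (aggregate source strength), .children (possibly empty list), and
        # .elements (leaf sources as (position, strength) pairs); the 1/r kernel
        # stands in for the actual GEM Green's function, and self-interaction
        # handling is omitted.
        r = np.linalg.norm(x_target - cell.center)
        if r > 0.0 and cell.size / r < theta:       # cell is well separated:
            return cell.moment / r                  # use its aggregate (multipole) term
        if cell.children:                           # otherwise open the cell
            return sum(cell_contribution(c, x_target, theta) for c in cell.children)
        # leaf cell: fall back to direct summation over its elements
        return sum(s / np.linalg.norm(x_target - xj) for xj, s in cell.elements)

The same acceptance logic carries over unchanged when the monopole term is replaced by the higher-order multipole expansions, error bounds, and variable time steps described above.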
	I/O, Visualization, and Data Analysis Activities:  In addition to providing a high-
level I/O interface, our object-oriented framework will provide mechanisms for run-time 
visualization of simulation data. Visualization specialists at the ACL are interested in 
examining the Seismic framework for applications of their parallel rendering algorithms. 
We propose to work with them through this avenue on runtime visualization of simulation 
data in the Seismic framework, as well as through the avenue of portable output files for 
postprocessing; we have already made progress with the POOMA framework on 
postprocessing using our current parallel I/O implementation.
	Both for postprocessing (via I/O mechanisms) and runtime data analysis, aside 
from visualization, the framework approach has merits we propose to exploit. We propose 
to develop analysis methods, both specific for the seismic simulation data and general for a 
wider class of scientific data. In the same way that the framework's high-level 
programming interface maps well onto solving equations numerically, it maps well onto 
data analysis and reduction algorithms such as spectral transforms and spatial averages. 
This work will leverage off other POOMA developments, such as the fast Fourier 
transforms developed for particle-in-cell simulations.
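As a simple indication of the kind of reduction meant here, the following sketch (written with NumPy purely for illustration; it stands in for the framework's own data-parallel expressions, and the function name is a placeholder) computes a spatial average and an isotropically binned power spectrum of a simulated two-dimensional slip field.

    import numpy as np

    def slip_diagnostics(slip, dx=1.0):
        # Illustrative NumPy stand-in for the framework's data-parallel reductions:
        # returns the spatial mean of a 2D simulated slip field together with an
        # isotropically binned power spectrum of its fluctuations.
        mean_slip = slip.mean()
        spec = np.abs(np.fft.fft2(slip - mean_slip)) ** 2
        kx = np.fft.fftfreq(slip.shape[0], d=dx)
        ky = np.fft.fftfreq(slip.shape[1], d=dx)
        kmag = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
        bins = np.linspace(0.0, kmag.max(), 32)
        which = np.digitize(kmag.ravel(), bins)
        radial = np.array([spec.ravel()[which == i].mean() if np.any(which == i) else 0.0
                           for i in range(1, len(bins))])
        return mean_slip, radial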

	James Crutchfield, Santa Fe Institute:  Crutchfield will collaborate with 
Klein, Rundle and Fox to extend recent results from dynamical systems theory to the GEM 
simulations.  Crutchfield's new approach to computation in dynamical systems --- 
computational mechanics (Crutchfield, 1992; Crutchfield and Young, 1988) --- suggests 
several novel methods to analyze and visualize earthquake dynamics. Crutchfield's work 
will focus on both computational methods and algorithm development for simulation and 
analysis of earthquake modeling and prediction. One goal is to develop a range of 
visualization methods that parallel and complement the simulation and analysis methods.
	In the analysis of nonlinear processes, work to date has focused on statistics such as 
fractal dimension, metric entropy, and related scaling properties. Crutchfield recently 
introduced a new, but complementary observable---the statistical complexity---which is a 
measure of the amount of memory used by a process to produce its behavior (Crutchfield, 
1992; Crutchfield and Young, 1988).  It should be quite helpful in characterizing new 
aspects of earthquake dynamics and its prediction. The estimation of this new observable, 
and several allied statistics, will be provided by the appropriate software libraries.
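To make the flavor of such estimates concrete, the sketch below computes block entropies of a symbolized activity series and, from them, a crude memory (excess-entropy) proxy; it is illustrative only, the names and symbolization are placeholders, and it is not the causal-state construction on which the statistical complexity proper is based.

    import numpy as np
    from collections import Counter

    def block_entropy(symbols, L):
        # Shannon entropy (bits) of length-L blocks of a symbol sequence.
        blocks = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
        counts = np.array(list(Counter(blocks).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # Crude proxy only: the entropy rate h is estimated as H(L) - H(L-1), and the
    # intercept H(L) - L*h is related to the excess entropy (stored memory) of the
    # process; the statistical complexity proper requires the causal-state
    # (epsilon-machine) construction, not shown here.
    events = (np.random.rand(20000) < 0.1).astype(int)   # toy symbolized activity series
    H = [block_entropy(events, L) for L in (1, 2, 3, 4)]
    h = H[-1] - H[-2]
    memory_proxy = H[-1] - 4 * h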
	Crutchfield will also adapt epsilon-machine reconstruction, an automated modeling 
method that can reconstruct continuous-state difference/differential equation models from 
data series. The goal is to implement several methods for epsilon-machine reconstruction 
with an eye toward allowing users to select which is most appropriate for the given field 
data or simulation data. Obviously, these kinds of data have different characteristics. For 
example, data from simulations can be much more extensive, assuming that one invests the 
computation effort to produce it. This is typically a benefit, since statistical estimates can be 
greatly improved. But making use of such data sets brings up issues of optimally using 
large amounts of data (Crutchfield, 1992; Crutchfield and Young, 1988; Crutchfield and 
McNamara, 1987; Crutchfield and Hanson, 1993).
	There are several novel methods that Crutchfield and Hanson have developed, 
based on their computational mechanics analysis of spatial systems, that will be adapted to 
visualize the dynamics of earthquake models (Crutchfield and Hanson, 1993; Hanson and 
Crutchfield, 1992).  There are three goals in this.
	The first is to adapt the nonlinear spatial filtering methods they've developed to 
track localized disturbances. These methods allow for a great reduction in the complexity of 
visualizing the temporal development of patterns in one and two spatial dimensions.
	The second goal is to develop software tools that will allow analysts to build up the 
attractor-basin portrait for the earthquake models. This will give a way to summarize the 
spatio-temporal dynamics in terms of their qualitative dynamics. The GEM model, being 
a spatially extended collection of relaxation oscillators, poses interesting problems to 
conventional dynamical systems modeling methods. Several of these have been addressed 
by Crutchfield's recent work on the attractor-basin portraits of spatial 
processes (Crutchfield, 1988; Crutchfield and Kaneko, 1988).  The question now is how 
this new perspective can elucidate the underlying mechanics particular to the GEM model.
	The third goal is to merge wavelet analysis methods, which are appropriate for 
localized space-time events such as earthquakes, with computational mechanics. The hope 
is that the combined method will allow for a function basis that is not susceptible, as are 
Fourier methods, to the artifacts of multiple time-scale bursting dynamics. Access to the 
new method will be provided via software and graphical display libraries that are integrated 
into the overall simulation package.
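As a minimal illustration of the wavelet side of this program (the Haar basis and the function name are placeholders; the actual work would use smoother wavelet families coupled to the pattern-detection machinery described above), the following sketch separates a one-dimensional series into detail coefficients localized in both time and scale.

    import numpy as np

    def haar_levels(x, nlevels=4):
        # Minimal Haar decomposition (assumes len(x) >= 2**nlevels): returns the
        # per-level detail coefficients, which localize burst-like energy in both
        # time and scale, together with the final coarse approximation.
        x = np.asarray(x, dtype=float)
        approx = x[: 2 ** int(np.floor(np.log2(len(x))))]   # truncate to a power of two
        details = []
        for _ in range(nlevels):
            even, odd = approx[0::2], approx[1::2]
            details.append((even - odd) / np.sqrt(2.0))     # detail (burst) coefficients
            approx = (even + odd) / np.sqrt(2.0)            # coarser approximation
        return details, approx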
	These tasks will begin in year 1 of the proposed work and continue on through 
termination of the project.  In particular, simulations developed in years 2 and 3 will be used 
as they become available.

	Roscoe Giles, Boston University:  In Year 1 of the proposal, Giles will 
work with Klein and Rundle to develop and extend theoretical models for earthquakes and 
associated analysis methods.  His particular emphasis will be on the computational realization of 
these models on high performance parallel computers, including the distributed shared 
memory architectures available at Boston University's Center for Computational Science.  
An important goal is to be able to begin preliminary evaluations of methods to validate the 
models using the various southern California databases.  Giles will work with York, Klein, 
Stein, Kanamori, Minster, Jordan and Rundle to compare model results with the SCEC and 
USGS databases.  Giles will also begin development of a common WWW/JAVA user 
interface for internal dissemination of all our work within the collaboration. 
	In Year 2, Giles will work with Klein and York on incorporating additional features 
such as surface roughness and noise into the models. He will work toward the integration 
of statistical and CA simulations into a common object oriented framework.  He will also 
examine the scalability of 3D models.
	In Year 3, Giles will continue the activities described above, and will in addition 
focus on refining the techniques needed for comparison with observational data and for 
refinement of the model and WWW interface.

	Thomas Jordan, Massachusetts Institute of Technology: Jordan's  
research will focus on the phenomena of  "slow" earthquakes and their  implications for 
earthquake nucleation and fault friction in the context of  GEM simulators.  In this activity, 
he will collaborate in particular with Kanamori and Rundle.  Slow earthquakes --- those 
events whose characteristic velocities are small compared to ordinary events --- have been 
observed in a variety of tectonic environments, including southern California (Kanamori, 
1989; Kanamori and Hauksson, 1992), and they are very common on the oceanic ridge-
transform system (Kanamori and Stewart, 1974; Okal and Stewart, 1982; Ihmlé and 
Jordan, 1994).  Slow events are especially interesting because they appear to precede and 
perhaps initiate some large earthquakes (Kanamori and Cipar, 1974; Cifuentes and Silver, 
1989; Ihmlé et al., 1993; Ihmlé and Jordan, 1994).  The questions to be addressed are as 
follows:  (1) Can slow earthquakes be  generated on faults governed by laboratory-derived 
friction laws or by other  friction laws used in GEM?  (2) If so, over what range of 
parameters is this  behavior observed?  (3)  If not, what other types of instabilities might be  
capable of generating such ruptures and how could they be incorporated in GEM? (4) What 
role do slow events play in the earthquake initiation process?   Jordan's work will tie in 
with that of Marone and other investigators to  assess the role of friction laws in GEM.   

	Hiroo Kanamori, California Institute of Technology:  Kanamori will 
work with Rundle, Jordan, Turcotte and Marone to investigate crustal processes leading up 
to fault instability.  In particular, he will investigate processes that involve fluids, such as 
rectified diffusion.  Triggering of earthquakes by seismic waves generated by earthquakes 
from different segments of a fault will play an important role in controlling the temporal and 
spatial variations of seismicity.  In particular, strong shaking caused by a major earthquake 
will influence seismic activities in the adjacent areas.  He will investigate several examples 
of such triggered seismicity, and interpret them in terms of the numerical simulations to 
determine the existence of seismicity patterns.

	William Klein, Boston University:  In the first year of the grant Klein will, 
in conjunction with Rundle and Giles, be involved with the  theoretical analysis of fault 
interactions as described in section ?? of the proposal. This will involve applications of 
renormalization group techniques as well as ideas such as Supersymmetry to the long range 
interaction slider block models as well as the Traveling Density  Wave model.  In addition, 
Klein will be involved with York, Rundle, Giles, Reynders, Warren and Krogh  in 
building models of fault interaction based on an analysis of data from SCEC databases.
	In the second year Klein will continue with the theoretical effort while also working 
with Giles and York to investigate the effects of adding noise and surface roughness to the 
model, and of changing the way plates are updated.  These investigations will be done in parallel with a 
theoretical investigation of the same phenomena.
	In the third year of the grant Klein will be involved with the evaluation of the model 
data and its comparison with measurements of similar parameters on real faults. In addition 
he will be part of a subgroup whose task is to use the results of the theoretical and 
modeling effort to suggest new measurements for the seismologists and experiments for 
laboratory scientists.

	Karen E. Carter Krogh, Los Alamos National Laboratory: Carter 
Krogh will be involved in the initialization of the proposed earthquake models by 
incorporating actual field data which encompass the morphology, geometry and kinematics 
of active faults in the study area.  Specifically, a three-dimensional model of seismically 
significant faults will be generated in collaboration with Rundle, Kanamori, Turcotte and 
Jordan by incorporating both observational and derived data.  Data will include both 
surface and subsurface faults.  Whereas surface faults are visible and measurable, the 
character of subsurface faults will be modeled through the use of geophysical data.  
Working with Rundle and Minster, Krogh will correlate information such as seismic, GPS 
and field data, and match it to fault surfaces.  Once the computational model has been 
constructed, simulations on that model will be constrained, in part, by local and regional 
stresses. Local stresses are influenced by the spatial distribution and geometries of 
seismically significant faults. Additionally, the geometry and morphology of secondary and 
minor faults will be analyzed as indications for local stress states.  Geologic methods will 
be developed to enable the incorporation of such unmodeled minor faults as background 
"noise" for the major faults in the simulations. Regional stresses, that are invariant over the 
proposed crustal scale for short time steps, have been determined by previous studies and 
will be included.  Similarly, in collaboration with Minster and Jordan, Krogh will 
document kinematic information on active faults and will use it to constrain the tectonic 
framework within which the computational earthquake models will function.  Results from 
the model will be validated by comparing simulated data to observational data.  For 
example, new surface ruptures over the past decade can be compared with those simulated, 
for that time step in this model.  This work will begin in the first funded year of the 
proposal and will continue through the analysis and validation phases of the project.

	Chris Marone, Massachusetts Institute of Technology:  Marone will 
work to incorporate and test laboratory-based friction laws within GEM and to study the 
role of friction in simulations of earthquake physics.  The work will fall into two main 
areas: 1) incorporation of existing rock friction laws into computational models of 
earthquakes, and 2) investigation of the  problem of scaling laboratory results to earthquake 
faults and fault systems. Traditionally, cellular automata and spring-block models of 
earthquakes have used extremely simplified friction laws.  This is justified by the 
computational efficiency of using such laws.  However, in some cases these laws omit 
critical features of rock friction.  Thus, an important question is, what effect does this have 
on model output, and what other parameters of the simulation are impacted by the use of a 
given friction law?  Because of the relative  simplicity of the current generation of GEM 
simulators it is important to begin testing realistic friction laws and calibrating the effects of 
simple friction  laws. In particular, Marone will work with Minster and Shkoller to better 
connect GEM results  with laboratory-derived friction laws and related earthquake 
modeling studies. Marone will also study the scaling problem via laboratory experiments 
and  numerical studies.  The most important scaling parameter is the critical slip  distance 
(Marone and Kilgore, 1993), which enters into either slip-weakening or rate- and state-dependent 
friction laws.  Marone will use laboratory-based scaling models  to investigate earthquake 
nucleation (Roy and Marone, 1996) and will  collaborate with Jordan to study the 
phenomena of slow earthquakes in the  context of rate and state friction laws and GEM 
simulators.   
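For reference, one widely used laboratory-derived form of the laws in question is the Dieterich-Ruina rate- and state-dependent friction law, written here schematically (the precise state-evolution form to be used in GEM is itself part of the work described above):

	\mu(V,\theta) \;=\; \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0\,\theta}{D_c},
	\qquad
	\frac{d\theta}{dt} \;=\; 1 - \frac{V\theta}{D_c},

where V is the slip velocity, \theta a state variable, D_c the critical slip distance discussed above, and a and b laboratory constants; slip-weakening laws can be viewed as a limiting description in which strength drops with slip over a comparable characteristic distance.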

	J.-Bernard Minster, University of California, San Diego:  Minster, 
together with a postdoctoral fellow (S. Shkoller), will work to formulate rate and state 
dependent friction laws in the form of partial differential equations that are compatible with 
constraints imposed by a variety of theorems relating to the dynamics of nonlinear 
dissipative systems.  Recently, they have shown that the dynamical system describing 1D 
frictional sliding, using the generally accepted class of state and rate-dependent  friction 
laws, displays deterministic chaos with attracting sets of fractal dimension.  This result is 
largely due to the recent discovery that hyperbolic attractors persist in a finite dimensional 
phase space under the small perturbation of numerical time integration (Pliss and Sell, 
1991).  Since a physically-based law for sliding of the earth's faults will require a complete 
3D model described by a dissipative partial differential equation in infinite dimensional 
phase space, it is of the utmost importance to generalize the perturbation result to 
overflowing invariant manifolds contained in this infinite- rather than finite-dimensional 
setting.  Minster and Shkoller will thus work with Marone from MIT to reformulate these 
friction laws for GEM simulations, making use of constraints imposed both by laboratory 
data and theory.  In addition to these activities, Minster will also work on merging current 
and anticipated future SAR Interferometry data with the GEM simulations as described in 
section C.7 on data validation issues.
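A minimal, purely illustrative single-degree-of-freedom version of the systems Minster and Shkoller study is sketched below (a quasi-dynamic spring-slider with the aging form of the state evolution; all parameter values are placeholders chosen to give stable sliding, and the sketch is in no way the full infinite-dimensional formulation discussed above).

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy quasi-dynamic spring-slider with rate- and state-dependent friction (aging
    # law).  All parameter values are placeholders; a > b gives velocity-strengthening,
    # stable sliding, so this illustrative system simply relaxes toward steady state.
    a, b, Dc = 0.012, 0.010, 1.0e-4                # friction parameters, critical slip distance (m)
    sigma, k, eta = 5.0e7, 1.0e7, 5.0e6            # normal stress (Pa), stiffness (Pa/m), damping (Pa s/m)
    Vload = 1.0e-9                                 # tectonic loading velocity (m/s)

    def rhs(t, y):
        V, theta = max(y[0], 1.0e-15), y[1]        # keep V positive for the logarithmic law
        dtheta = 1.0 - V * theta / Dc              # aging-law state evolution
        # time derivative of the elastic balance k*(Vload*t - slip) - eta*V = sigma*mu(V, theta):
        dV = (k * (Vload - V) - sigma * (b / theta) * dtheta) / (eta + sigma * a / V)
        return [dV, dtheta]

    sol = solve_ivp(rhs, (0.0, 1.0e7), [Vload, 0.5 * Dc / Vload], method="LSODA", rtol=1e-8)

With the spring stiffness lowered below its critical value the same two equations produce the stick-slip and chaotic regimes referred to above, which is precisely why the generalization to the dissipative PDE setting matters for GEM.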

	CACR Group (John Salmon & Paul Messina), California Institute of 
Technology:  As part of the GEM project, John Salmon will work with Earth Scientists 
(Rundle, Kanamori, Jordan, Minster, Turcotte) to implement parallel "Fast" algorithms for 
the evaluation of stress fields from fault displacements.   Initially, in the time-independent 
formulation of the Traveling Density Wave Model, the stress field induced by a fault is 
represented by a Green's function with a long-range component, i.e., a field for which it is 
not mathematically permissible to neglect interactions beyond some radius.  Salmon's 
previous work in astrophysics and vortex dynamics uses "Fast" summation methods and 
oct-tree data structures to evaluate similar Green's functions for Newtonian gravity and the 
Biot-Savart law.  Warren and Salmon have developed a library which separates the 
"physics" from the "computer science", and which allows one to alter the interactions, error 
tolerances, etc. without worrying about the underlying parallel computational mechanisms 
(Warren and Salmon, 1995).  Salmon's primary task will be to work with the Earth 
Scientists to allow their codes to use this library.  This will have two important 
consequences: the codes will run in O(N log N) time or O(N) time, and they will run 
efficiently on a wide variety of shared and distributed memory parallel computers.  Further 
refinement of the physical model will require introduction of spatial and/or temporal 
dependences into the Green's function, modeling non-uniform physical characteristics of 
the earth and the radiative time-dependence of the stress field, respectively.  Here it is likely 
that the object-oriented framework supplied by POOMA will be an important asset.  It will 
then be necessary to either create an interface between POOMA and the tree library, or to 
reimplement some of the "Fast" methods supplied by the tree library in POOMA itself. 
Salmon's role will be to work closely with both the POOMA team and the Earth Scientists 
to find the most efficient methods of implementing these advanced techniques.
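The essence of that separation can be indicated schematically as follows (the names, and the direct O(N^2) loop standing in for the tree traversal, are illustrative only): the application supplies the pairwise interaction kernel, while the summation machinery is entirely independent of it.

    import numpy as np

    def evaluate_field(targets, sources, strengths, kernel):
        # Schematic stand-in for the tree-library interface: the "physics" enters
        # only through `kernel`; swapping in an elastic Green's function changes
        # nothing else.  (A direct O(N^2) loop is shown; the library replaces it
        # with an O(N log N) tree traversal.)
        out = np.zeros(len(targets))
        for i, x in enumerate(targets):
            for y, s in zip(sources, strengths):
                out[i] += s * kernel(x, y)
        return out

    # example "physics": a softened inverse-square kernel as a placeholder
    def inv_square(x, y, eps=1.0e-3):
        return 1.0 / (np.linalg.norm(x - y) ** 2 + eps ** 2)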

	Ross Stein, United States Geological Survey:  Large spatial datasets 
provide critical input for testing models. Such data include three-dimensional displacement 
vectors for hundreds of points on the earth's surface (Global Positioning System and Very 
Long Baseline Interferometry); baseline changes between a radar satellite and millions of 
points spaced 30 m apart on the earth's surface (Synthetic Aperture Radar); and positions, 
geometries, and fault-slip information on the several thousand active faults and folds longer 
than 8 km in the western U.S. Such data must be easily interrogated and changed as errors 
are uncovered and modifications are made. Model output includes calculations of stress and 
stress change, fault friction, fault slip and earth displacement at the earth's surface and also 
at depth. Output datasets will exhibit a strong time-dependence and thus will be frequently 
updated, compounding the handling problems. These very large spatial and temporal 
datasets must nevertheless be accessible to develop physical insight. The visualization 
needs of this project will thus overwhelm existing desktop-type Geographic Information 
System (GIS) codes such as ArcInfo or MapInfo, and matrix manipulation and display 
codes such as Transform and Dicer. In addition, remote collaboration by the many 
investigators on this project means that the visualization tools must be client-server based.
	Stein will work with Reynders, Williams, and the LANL/ACL group, and in 
addition with Giles and York to create a database with all known active faults longer than 8 
km in the western U.S. Distributed deformation at fault bends, breaks, and ends will be 
implemented as well to avoid singularities and artificial instabilities. Stein will also work 
with Rundle and Minster to develop an inventory of historical earthquake sources with slip 
distributions (where known), slip histories (where available), and the net slip distribution. 
Procedures to download and display continuous and survey-mode GPS and other surface 
displacement data will then be incorporated. Output datasets must be converted to 3D 
animations in which one can control and adjust the time, and three-dimensional spatial 
range of the output easily for inspection.

	Donald Turcotte, Cornell University:  Turcotte will focus on two areas in 
collaboration with Rundle: 1) relating the stresses driving slip on the simulated fault 
systems to the underlying mantle dynamics, and more extensively, 2) validating 
simulations with seismicity data and Interferometric Synthetic Aperture Radar data.  With 
respect to the first activity, Turcotte will draw on his long association with the mantle 
convection and dynamics community to assess the nature and patterns of crustal stress in 
the simulations, and estimate the extent to which these accurately represent the magnitude 
and orientation of stresses arising from mantle flow.  
	With respect to the second area, a primary data set is the spatial and temporal 
distribution of seismicity.  High resolution data sets are clearly preferred and for this 
reason, comparisons will emphasize regions where many earthquakes occur and where 
seismic networks archive quality data over a relatively wide range of magnitudes.  For 
these reasons, aftershock sequences are particularly attractive for study.  Seismic networks 
in northern and southern California also give excellent quality data with particularly high 
resolution data available for the Parkfield section of the San Andreas fault in Central 
California.  Although these areas have the highest priority, it will also be important to 
compare patterns of seismicity there with those observed in other areas of the United 
States, including the Pacific Northwest, the Wasatch Fault, the New Madrid seismic zone, 
and seismic zones in the eastern United States.  
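One concrete element of such comparisons is the frequency-magnitude (Gutenberg-Richter) statistic; the sketch below, which uses the standard maximum-likelihood b-value estimate and is intended only as an illustration (function and variable names are placeholders), shows the kind of reduction that would be applied identically to simulated and recorded catalogs.

    import numpy as np

    def gutenberg_richter(mags, m_min, dm=0.1):
        # Cumulative frequency-magnitude counts N(>=M) and the maximum-likelihood
        # b-value (Aki's estimator with the usual binning correction), computed for
        # all magnitudes above the completeness cutoff m_min.
        m = np.asarray(mags, dtype=float)
        m = m[m >= m_min]
        bins = np.arange(m_min, m.max() + dm, dm)
        cum_counts = np.array([(m >= b).sum() for b in bins])
        b_value = np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))
        return bins, cum_counts, b_value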
	While seismic data are a primary data source, the great improvements in geodetic 
observations are providing complementary data.  GPS data have provided sub-centimeter 
resolution on occupied geodetic monuments for five to ten years.  However, the density of 
monuments is still too sparse to provide the desired resolution.  Recent studies using radar 
interferometry have yielded sub-centimeter accuracy with a 50 meter (pixel) resolution in many 
regions.  With monthly or even weekly repeat passes, this data can provide high resolution, 
extremely accurate spatial coverage of stress and strain release, coseismically, aseismically, or with 
continuous and discontinuous creep.  In particular, Turcotte, Rundle and Minster are on the NASA 
ESSP pathfinder SAR team, and they will plan for the integration of that data into GEM models in 
the future.

	Bryant York, Northeastern University:  In the first year  York will focus on 
the development of parallel/distributed algorithms for the extension to three dimensions of 
cellular automata (CA) based earthquake models, such as the well-known "slider block" 
model.  Initially he will consider the problem of describing 3D fault geometries for multiple 
faults in a manner which allows them to be easily distributed and computed with minimum 
communications delays.  The two key issues here are: (1) how to define and implement 
variable-resolution gridding, and (2) whether or not it is feasible to treat the 3D model as a 
network of 2D slider-block models in which distant blocks are connected through an elastic 
medium.  In order to support this work, York will begin a careful analysis of data from 
several SCEC databases in conjunction with Rundle and others.  York will be on sabbatical 
leave during academic year 1997 - 1998 and plans to spend that leave at the Boston 
University Center for Computational Science.  This sabbatical will afford York substantial 
time to work closely with Giles,  Reynders, Williams, Warren, Salmon, Klein, Fox and 
Rundle on several aspects of the GEM project which complement the work on the 
development of object-oriented modeling environments for earthquake physics put forth in 
the earlier proposal.
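For orientation, the two-dimensional starting point for this extension is a cellular-automaton slider-block update of the Olami-Feder-Christensen type cited in the bibliography; the toy sketch below (placeholder parameters, nearest-neighbor coupling only) shows the update whose three-dimensional and variable-resolution generalizations are the subject of this work.

    import numpy as np

    def ofc_step(stress, alpha=0.2, threshold=1.0):
        # One driving/relaxation cycle of a toy nearest-neighbor slider-block CA
        # (open boundaries): load uniformly until the first block reaches threshold,
        # then cascade, each failing block passing a fraction `alpha` of its stress
        # to each of its four neighbors.  Returns the avalanche size.
        stress += threshold - stress.max()
        size = 0
        while True:
            failing = stress >= threshold
            if not failing.any():
                return size
            size += int(failing.sum())
            drop = np.where(failing, stress, 0.0)
            stress[failing] = 0.0
            stress[1:, :]  += alpha * drop[:-1, :]
            stress[:-1, :] += alpha * drop[1:, :]
            stress[:, 1:]  += alpha * drop[:, :-1]
            stress[:, :-1] += alpha * drop[:, 1:]

    # example: s = np.random.rand(64, 64); sizes = [ofc_step(s) for _ in range(1000)]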
	In the second year York will work with Fox, Giles and Klein on the addition of 
several novel features to CA-based earthquake models.  In particular, we will look closely 
at the incorporation of surface roughness and fractal dimension parameters as well as the 
systematic incorporation of noise.  We will look into the "plate update" scheme and 
investigate model dependence on the order of block evaluation.  We will be particularly 
concerned with parallel block update, correlated block movement, and continuous 
deformation models for connected blocks.  A careful sensitivity analysis will be done of all 
the parameters and we will begin to analyze the scalability characteristics of the enhanced 
3D models.  
	In the third year York will work with Giles, Klein and Stein to incorporate time 
renormalization in a generic way into the GEM modeling system and to develop tools for 
evaluating model performance against recorded data.  
	Salary support for York is requested for 2 months of summer each year and 1/2 
salary for academic year 1997-98 while York is on sabbatical.  The sabbatical salary will be 
spread across two grant years.  No graduate student support is requested as York has a 
fellowship student working with him.  Request for undergraduate programming support 
was included in the earlier proposal.

D.	Letter of Support from David Jackson, Science Director for
	the Southern California Earthquake Center

E.	BIBLIOGRAPHY

 Aki, K. and P.G.  Richards, Quantitative Seismology, Theory and Methods, vols I and II, 
W.H. Freeman, San Francisco, 1980.
Aki, K., Magnitude-frequency relations for small earthquakes: A clue to the origin of fmax 
of large earthquakes, J. Geophys. Res., 92, 1349-1355, 1987.
Allegre, C., J.L. Le Mouel and A. Provost, Scaling rules in rock fracture and possible 
implications for earthquake prediction, Nature, 297, 47-49, 1982.
Allegre, C.J. and J.L. Le Mouel, Introduction of scaling techniques in brittle fracture of 
rocks, Phys. Earth Planet Int., 87, 85-93, 1994.
Allegre, C.J., J.L. Le Mouel, H.D. Chau and C. Narteau, Scaling organization of fracture 
tectonics and earthquake mechanism, Phys. Earth Planet Int., in press, 1996. 
Allen, M.P. and D.J. Tildsley, Computer Simulations of  Liquids, Clarendon Press, 
Oxford, 1987.
Amit, D.J., Field Theory, the Renormalization Group, and Critical Phenomena, World 
Scientific, Philadelphia, 1984.
Anderson, C.R., SIAM J. Sci. Stat. Comp., 13, 923, 1992.
Appel, A.W., An efficient program for many-body simulation, SIAM J. Sci. Stat. 
Comp., 6, 85, 1985.
Arrowsmith, D.K. and C.M. Place, An Introduction to Dynamical Systems, Cambridge 
University Press, Cambridge, U.K., 1991.
Atkinson, B.K., Fracture Mechanics of Rock, Academic Press, London, 1987.
Bak, P. and C. Tang, Earthquakes as a self-organized critical phenomenon, J. Geophys. 
Res., 94, 15635-15638, 1989.
Bakun, W. and T.V. McEvilly, Recurrence models and Parkfield, California, earthquakes, 
J. Geophys. Res., 89, 3051-3058, 1984
Bakun, W.H. and A.G. Lindh, The Parkfield, California prediction experiment, Earthquake Pred. 
Res., 3, 285-304, 1985.
Barnes, J.E. and P. Hut, A hierarchical O(N log N) force-calculation algorithm, 
Nature, 324, 446-449, 1986.
Beck, S.L. and D.H. Christensen, Rupture process of the February 4, 1965, Rat Islands 
earthquake, J. Geophys. Res., 96, 2205-2221, 1991.
Beck, S.L. and L.J. Ruff, The rupture process of the 1976 Mindanao earthquake, J. 
Geophys. Res., 90, 6773-6782, 1985.
Beck, S.L. and L.J. Ruff, The rupture process of the great 1979 Colombia earthquake: 
Evidence for the asperity model, J. Geophys. Res., 89, 9281-9291, 1984.
Beeler, N. and TE Tullis, Self healing slip pulses in dynamic rupture models due to 
velocity dependent strength, Bull. Seism. Soc, in press, 1996.
Beeler, N., TE  Tullis and JD Weeks, The roles of time and displacement in the evolution 
effect in rock friction, Geophys. Res. Lett., 21, 1987-1990, 1994.
Beeler, N.M., T.E.Tullis, M.L. Blanpied and J.D. Weeks, Frictional behavior of large 
displacement experimental faults, J. Geophys. Res., 101, 8697-8715, 1996.
Ben-Zion, Y. and J.R. Rice, Earthquake failure sequences along a cellular fault zone in a 
3D elastic solid containing asperity and non-asperity regions, J. Geophys. Res., 
98, 14109-14131, 1993.
Ben-Zion, Y. and J.R. Rice, Slip patterns and earthquake populations along different 
classes of faults in elastic solids, J. Geophys. Res., 100, 12959-12983, 1995.
Ben-Zion, Y., J.R. Rice, and R. Dmowska, Interaction of the San Andreas Fault creeping segment 
with adjacent great rupture zones and earthquake recurrence at Parkfield, J. Geophys. 
Res., 98, 2135-2144, 1993.
Ben-Zion, Y., Stress, slip and earthquakes in models of complex single-fault systems 
incorporating brittle and creep deformations, J. Geophys. Res., 101, 5677-5706, 
1996.
Beroza, G.C., Near Source modeling of the Loma Prieta earthquake; evidence for heterogeneous 
slip and implications for earthquake hazard,  Bull. Seismol. Soc. Amer., 81,  1603-1621, 
1991.
Beroza, G.C., Seismic source modeling, US National Rept. to IUGG 1991-1994, Amer 
Geophys. Un., 1995., pp. 299-308.
Berryman, J.G. and G.W. Milton, Normalization constraint for variational bounds on fluid 
permeability, J. Chem. Phys., 83, 754-760, 1985.
Bilham, R. and P. Bodin, Influence of fault zone connectivity on segment slip rate: Boundary 
element estimates for the San Francisco Bay area, Science, 258, 281-284, 1992.
Binder, K., Applications of the Monte Carlo Method in Statistical Mechanics, vol. 36 of 
Topics in Current Physics, Springer-Verlag, Berlin, 1987.
Binder, K., Monte Carlo Methods in Statistical Physics, vol. 7 of Topics in Current 
Physics, Springer-Verlag, Berlin, 1986.
Blanter, EM, MG Schnirman, JL Le Mouel and C.J. Allegre, Scaling laws in block 
dynamics and dynamic self-organized criticality, Phys. Earth. Planet. Int., in press, 
1996.
Board, J.A. , J W. Causey , J F. Leathrum,  A. Windemuth and K. Schulten, Accelerated 
molecular dynamics simulation with the parallel fast multipole algorithm, Chem. 
Phys. Let., 198, 89, 1992
Brown, S. and G. Gruner, Charge and spin density waves, Sci. Amer., 270, 50-57, 1994.
Brown, S.R, C.J. Scholz and J.B. Rundle, A simplified spring-block model of earthquakes, 
Geophys. Res. Lett., 18,  2, 215-218, 1991.
Brune, J.N., S. Brown, and P.A. Johnson, Rupture mechanism and interface separation in foam 
rubber models of earthquakes; a possible solution to the heat flow paradox and the paradox 
of large overthrusts,  Tectonophys., 218, 59-67, 1993.
Burridge, R. and C.A. Varga, The fundamental solution in dynamic poroelasticity, Geophys. J. 
Roy. astr. Soc., 58, 61-90, 1979.
Carlson, J.M and J.S. Langer, Properties of earthquakes generated by fault dynamics, 
Phys. Rev. Lett., 62, 2632-2635, 1989.
Carlson, J.M., J.S. Langer, B.E. Shaw and C. Tang, Intrinsic properties of a Burridge-Knopoff 
model of an earthquake fault, Phys. Rev. A,  44,  2, 884-897, 1991.
Carlson, J.M., Time intervals between characteristic earthquakes and correlations with smaller 
events; an analysis based on a mechanical model of a fault, J. Geophys. Res., 96,  4255-
4267, 1991.
Chen, G. and H. Spetzler, Complexities of rock fracture and rock friction from deformation of 
Westerly granite, PAGEOPH, 140, 95-119, 1993a.
Chen, G. and H. Spetzler, Topographic characteristics of laboratory induced shear fractures, 
PAGEOPH, 140, 123-135, 1993b.
Cleary, M.P., Fundamental solutions for a fluid-saturated porous solid, Int. J. Solids Str., 13, 
785-806, 1977.
Coniglio, A. and W. Klein, J. Phys. A, 12, 2775, 1980.
Constantin, P., "Integral and Inertial Manifolds for Dissipative Partial Differential 
Equations", Springer-Verlag, 1989.
Cornell, C.A., S.-C. Wu, S.R. Winterstein, J.H. Dieterich, and R.W. Simpson, Seismic hazard 
induced by mechanically interactive fault segments, Bull. Seism. Soc.Am., 83, 436-449, 
1993
Crutchfield, JP and BS McNamara, Equations of motion from a data series, Complex 
Systems, 1, 417-452, 1987.
Crutchfield, JP and JE Hanson, Turbulent Pattern Bases for Cellular Automata, Physica D, 
69, 279 - 301, 1993.
Crutchfield, JP and K Young, Inferring statistical complexity, Phys. Rev. Lett., 63, 105-
108, 1989.
Crutchfield, JP and K. Kaneko, Are attractors relevant to turbulence?, Phys. Rev. Lett., 
60, 2715, 1988.
Crutchfield, JP, Subbasins, portals and mazes: Transients in high dimensions, J. Nucl. 
Phys. B., 5A, 287, 1988.
Crutchfield, JP, The calculi of emergence: Computation, dynamics, and induction, Physica 
D, 75, 11-54, 1994.
de Sousa Vieira, M., G.L. Vasconcelos and S.R. Nagel, Dynamics of spring-block models: 
Tuning to criticality, Phys. Rev. E, 47, 4, R2221-2224, 1993.
Dieterich, J.H. and M.F. Linker, Fault stability under conditions of variable normal stress, 
Geophys. Res. Lett., 19, 16, 1691-1694, 1992
Dieterich, J.H., A constitutive law for rate of earthquake production and its application to 
earthquake clustering, J. Geophys. Res., 99, 2601-2618, 1994.
Dieterich, J.H., Constitutive properties of rock with simulated gouge, In Mechanical Behavior of 
Crustal Rocks.  AGU Geophys. Monograph 24, Washington, DC, American Geophysical 
Union, 103-120, 1981.
Dieterich, J.H., Time dependent friction and the mechanics of stick slip, Pure Appl. Geophys., 
116, 790-806, 1978.
Dieterich, J.H., Time dependent friction in rocks, J. Geophys. Res., 77, 3690-3697, 1972
Ding, E.J. and Y.N. Lu, Analytical treatment for a spring-block model, Phys. Rev. Lett., 70, 
3627-3630, 1993.
Ding, H.-Q., N. Karasawa and W.A. Goddard, Atomic Level Simulations on a 
Million Particles:  The Cell Multipole Method for Coulomb and London 
Interactions, J. of Chemical Physics, 97, 4309-4315, 1992
Dmowska, R. and L. Lovison, Influence of asperities along subduction interfaces on the 
stressing and seismicity of adjacent areas, Tectonophysics, 211, 23-41, 1992.
Dowla, F.U., T.F. Hauk, J.B. Rundle and J.J.Hopfield, Earthquake forecasting using neural 
networks (abstr.), Seism. Res. Lett., 63,  21, 1992.
Eberhart-Phillips, D. and A.J. Michael, Three-dimensional velocity structure, seismicity, and fault 
structure in the Parkfield region, central California, J. Geophys. Res., 98, B9, 15737-
15758, 1993.
Engdahl, E.R., S. Billington, and C. Kisslinger, Teleseismically recorded seismicity 
before and after the May 7, 1986 Andreanof Islands, Alaska, earthquake, J. 
Geophys. Res., 94, 15481-15498, 1989.
Feynman, R.P. and A.R. Hibbs, Quantum Mechanics and Path Integrals,  McGraw-Hill, 
New York, 1965.
Foley, J., van Dam, A., Feiner, S., and Hughes, J., Computer Graphics, Principles and 
Practice,  550, (Reading, MA Addison-Wesley), Reading, MA. 1990.
Fortuin, C.M. and P.W. Kasteleyn, Physica, (Utrecht), 57, 536, 1972.
Fox, G.C. and W. Furmanski, Snap, Crackle, WebWindows!, 
http://www.npac.syr.edu/techreports/html/0750/abs-0758.html, 1996.
Fox, GC, R.D. Williams, and P.C. Messina, Parallel Computing Works!, Morgan 
Kauffman, San Francisco, 1994.
Fox, G.C., W. Furmanski, M. Chen, C. Rebbi and J. Cowie, WebWork: Integrated 
Programming Environment Tools for National and Grand Challenges, 
http://www.npac.syr.edu/techreports/html/0700/abs-0715.html, 1995.
Gabrielov, A., W. Newman and L. Knopoff, Lattice models of failure: Sensitivity to the 
local dynamics, Phys. Rev. E, 50, 188-197, 1994.
Goil, S. and S. Ranka, Parallelization of hierarchically structured applications on 
distributed memory machines, NPAC Tech Rept. SCCE-688, 
http:/www.npac.syr.edu/techreports/index.html, 1995.
Goil, S., Primitives for problems using hierarchical algorithms on distributed memory 
machines, NPAC Tech Rept. SCCS-687, 
http:/www.npac.syr.edu/techreports/index.html, 1994.
Grama, A., V. Kumar and A. Sameh, Scalable Parallel Formulations of the Barnes-Hut 
Algorithm for N-Body Simulations, Supercomputing '94, IEEE Comp. Soc., Los 
Alamitos, 1994.
Grant, L. and A. Donnellan, 1855 and 1991 surveys of the San Andreas fault: Implications for 
fault mechanics, Bull. Seism. Soc. Am., 84,  241-246, 1994.
Grant, L. and K. Sieh, Paleoseismic evidence of clustered earthquakes on the San Andreas fault in 
the Carrizo Plain, J. Geophys. Res., 99, 6819-6842, 1994.
Grassberger, P., Phys. Rev. E., 49, 2436, 1994.
Greengard, L. and Rokhlin, V., J. Comp. Phys, 73, 325-348, 1987.
Gross, S., J.B. Rundle and W. Klein, Simulations of the Traveling Density Wave model for 
earthquakes and comparison to observations, preprint, 1996
Gross, S., On the scaling of average slip with rupture dimensions of earthquakes, pp.157-166 in 
JB Rundle, D. Turcotte and W. Klein, eds., Reduction and Predictability of Natural 
Disasters,  vol XXV in the Santa Fe Inst. series on the Sciences of Complexity, Addison 
Wesley, 1996.
Haberman, R.E., Consistency of teleseismic reporting since 1963, Bull. Seism. Soc. Am., 
72, 93-111, 1982.
Hanks, T.C., Small earthquakes, tectonic forces, Science, 256,  1430-1432, 1992.
Hanson, J.E. and J.P. Crutchfield, The attractor-basin portrait of a cellular automaton, J. Stat. 
Phys., 66, 1415-1462, 1992.
Harris, R.A., and R.W. Simpson, Changes in static stress on Southern California faults after the 
1992 Landers earthquake, Nature (London), 360,  251-254, 1992.
Heaton, T., Evidence for and implications of self-healing pulses of slip in earthquake 
rupture, Phys. Earth and Planet. Int., 64, 1-20, 1990.
Heimpel, M., Earthquake size-frequency relations from an analytical stochastic rupture model, J. 
Geophys. Res., in press, 1996.
Hernquist, L. and N. Katz, TREESPH - A unification of SPH with the hierarchical 
tree method, Astrophysical J. Supp., 70, 419, 1989.
Herz, A.V.M. and J.J. Hopfield, Rapid phase locking in coupled integrate-and-fire 
neurons, preprint, 1995.
Hill, D.P., and others, Seismicity remotely triggered by the magnitude 7.3 Landers, California, 
earthquake, Science , 260,  1617-1623, 1993.
Hill, R. A self consistent mechanics of composite materials, J. Mech.. Phys. Sol., 13, 
213-222, 1965.
Holt, C. and J.P. Singh, Hierarchical N-body methods on shared address space 
multiprocessors, SIAM, Seventh SIAM Conference on Parallel Processing for 
Scientific Computing, Philadelphia, 1995.
Hopfield, J.J., Neurons, dynamics and computation, Physics Today, 47, 40-46, 1994.
Hu, Y. and S.L. Johnsson, A Data Parallel Implementation of Hierarchical N-body 
Methods, Harvard University, TR-26-94, 1994.
Huang, J. and D.L. Turcotte, Chaotic seismic faulting with a mass-spring model and velocity 
weakening friction, PAGEOPH, 138, 570-589, 1992.
Huang, J., G. Narkounskaia and D.L. Turcotte, A cellular-automata, slider-block model for 
earthquakes, II. Demonstration of self-organized criticality for a 2-D system, Geophys. J. 
Int., 111, 2, 259-269, 1992.
Ihmle, P.F., P. Harabaglia and T.J. Jordan, Teleseismic detection of a slow precursor to the great 
1989 Macquarie Ridge earthquake, Science, 261, 5118, 177-183, 1993.
Ivins, E.R. and C.G. Sammis, On lateral viscosity contrast in the mantle and the rheology of low 
frequency geodynamics, Geophys. J. Int, in press, 1995b.
Ivins, E.R., Composite Constitutive Theory for a Solid Earth of Strong Lateral Viscosity Contrast, 
Ph.D. Dissertation, Univ. Southern Calif., 1995a.
Jackson, D.D. and others, Seismic hazards in southern California: Probable earthquakes, 1994 to 
2024, Bull. Seism. Soc. Am., 85, 379-439, 1995. 
Jaume, S.C., and L.R. Sykes, changes in state of stress on the southern San Andreas Fault 
resulting from the California earthquake sequence of April to June 1992,  Science, 258, 
1325-1328, 1992.
Johnson, P.A., M. Antolik, W.A. Peppin, W. Foxall, and T.V. McEvilly, Structure and rupture 
processes of the fault zone at Parkfield from the spectra of microearthquakes, Seism. Res. 
Lett., 63, 75, 1992.
Jordan, T.H., Far-field detection of slow precursors to fast seismic ruptures, Geophys. Res. Lett., 
18, 11,2019-2022, 1991.
Kagan, Y.Y.,  and D.D. Jackson, Seismic gap hypothesis; ten years after, J. Geophys. Res., 96,  
21419-21431, 1991.
Kalia, R.K., S. de Leeuw, A. Nakano and P. Vashishta, Molecular dynamics simulations 
of Coulombic systems on distributed-memory MIMD machines, Comput. 
Phys. Commun., 74, 316-326, 1993.
Kanamori, H, Global seismicity, pp. 596-608 in Earthquakes: Observation, Theory and 
Interpretation, Course LXXXV, Proc. Intl. School of Physics "Enrico Fermi", the 
Italian Physical Society, North Holland, Amsterdam, 1983.
Kanamori, H. and D.L. Anderson, Theoretical basis for some empirical relations in 
seismology, Bull. Seism. Soc. Am., 65, 1073-1095, 1975.
Kanamori, H. and M. Kikuchi, The 1992 Nicaragua earthquake: A slow tsunami earthquake 
associated with subducted sediments, Nature, 361, 6414, 714-716, 1993.
Kanamori, H., E. Hauksson, A slow earthquake in the Santa Maria Basin, California, Bull. 
Seismol. Soc. Amer., 85, 2087-2096, 1992.
Kanamori, H., Mechanics of Earthquakes, Ann. Rev. Earth Planet. Sci., 21, 1993.
Karageorgi, E., R. Clymer and T.V. McEvilly, Seismological studies at Parkfield, II. Search for 
temporal variations in wave propagation using Vibroseis, Bull. Seism. Soc. Amer., 82, 3, 
1388-1415, 1992.
Kikuchi, M., H. Kanamori and K. Satake, Source complexity of the 1988 Armenian earthquake: 
Evidence for a slow after-slip event, J. Geophys. Res., 98, B9, 15797-15808, 1993.
King, G. C. P., R. S. Stein, and J. Lin, Static stress changes and the triggering of 
earthquakes, Bull. Seismol. Soc. Amer., 84, 935-953, 1994.
King, G.C.P. and J. Nabelek, Role of fault bends in the initiation and termination of 
earthquake rupture, Science, 228, 984-987, 1985.
Kisslinger, C., The stretched exponential function as an alternative model for aftershock decay 
rate, J. Geophys. Res., 98, 1913-1921, 1993.
Klein, W. and F. Leyvraz, Crystalline nucleation in deeply quenched systems, Phys. Rev. Lett., 
57,  2845-2848, 1986.
Knopoff, L., J.A. Landoni, and M.S. Abinante, Dynamical model of an earthquake fault with 
localization, Phys. Rev. A, 46, 7445-7449, 1992.
Kostrov, BV, Self similar problems of propagation of shear cracks, J. Appl. Mech., 28, 
1077-1087, 1964.
Lake, G., N. Katz, T. Quinn and J. Stadel, Cosmological N-body Simulation, SIAM, 
Seventh SIAM Conference on Parallel Processing for  Scientific Computing, 
Philadelphia, 1995
Lay, T., H. Kanamori, and L. Ruff, The asperity model and the nature of large subduction 
zone earthquakes, Earthquake Prediction Research, 1, 3-71, 1982.
Lees, J.M., and C. Nicholson, Three-dimensional tomography of the 1992 southern California 
earthquake sequence: constraints on dynamic earthquake rupture?, Geology, 21, 387-390, 
1993.
Li, V.C., Mechanics of shear rupture applied to earthquake zones, in Fracture Mechanics 
of Rock, ed. B.K. Atkinson, Academic Press, London, 1987, pp. 351-428.
Linde, A.T., M.T. Gladwin, and J.J.S. Johnston, The Loma Prieta earthquake, 1989 and  earth 
strain tidal amplitudes; an unsuccessful search for associated changes, Geophys. Res. 
Letts., 19, 317-320, 1992.
Lomnitz-Adler, J., Automaton models of seismic fracture: constraints imposed by the magnitude-
frequency relation, J. Geophys. Res., 98, 17745-17756, 1993.
Ma, S.-K., Modern Theory of Critical Phenomena, Benjamin-Cummings, Reading MA, 
1976.
Ma, S.-K., Statistical Mechanics, World Scientific, Philadelphia, PA, 1985.
Marone, C.J., C.H. Scholz, and R. Bilham, On the mechanics of earthquake afterslip, J. 
Geophys. Res., 96, 8441-8452, 1991.
Maruyama, T., Fluid pressure responses of a water-saturated porous elastic multilayered half space 
to water pumping and atmospheric loading, J. Phys. Earth, 42, 331-375, 1994.
McNally, K.C., Plate subduction and prediction of earthquakes along the Middle America 
Trench, in Earthquake Prediction, An International Review, AGU monograph 4, 
American Geophysical Union, Washington, DC, 1981. 
Mendoza, C., Hartzell, S. and T. Monfret, Wide-band analysis of the 3 March, 1985 Central Chile 
earthquake: Overall source process and rupture history, Bull. Seism. Soc. Am., 84, 269-
283, 1994.
Monette, L. and W. Klein, Spinodal nucleation as a coalescence process, Phys. Rev. Lett., 
68, 2336-2339, 1992.
Mora, P. and D. Place, Simulation of the frictional stick-slip instability, Pure Appl. Geophys., 
143, 61-87, 1994.
Mouritsen, O.G., in Computer  Studies of Phase Transitions and Critical Phenomena, 
Springer-Verlag, Berlin, 1984.
Nadeau, R., M. Antolik, P.A. Johnson, W. Foxall and T.V. McEvilly, Seismological studies at 
Parkfield III: Microearthquake clusters of fault zone dynamics, Bull. Seism. Soc. Am., 84, 
247-263, 1994.
Nakanishi, H., Cellular automaton model of earthquakes with deterministic dynamics, 
Phys. Rev. A, 43, 6613-6621, 1990.
Narkounskaia, G., and D.L. Turcotte, A cellular-automata, slider-block model for earthquakes; I, 
Demonstration of chaotic behaviour for a low order system, Geophys. J. Internat., 111, 
250-258, 1992.
Narkounskaia, G., J. Huang and D.L. Turcotte, Chaotic and self-organized critical behavior of a 
generalized slider block model, J. Stat. Phys., 67, 1151-1183, 1992.
Newman, W.I. and A.M. Gabrielov, Failure of hierarchical distributions of fiber bundles, Int. J. 
Fracture, 50, 1-14, 1991.
Newman, W.I., AM Gabrielov, TA Durand, SL Phoenix and D.L. Turcotte, An exact 
renormalization model for earthquakes and material failure. Statics and dynamics, 
Physica, D77, 200-216, 1994.
Newman, W.I., D.L. Turcotte and A. Gabrielov, A hierarchical  model for precursory seismic 
activation, pp. 243-260, in J.B. Rundle, D.L. Turcotte and W. Klein, Reduction and 
Predictability of Natural Disasters, vol. XXV in the Santa Fe Institute series on complexity, 
Addison Wesley, 1996. 
Nishenko, S.P., L.R. Sykes, D.D. Jackson, and Y.Y. Kagan,  Seismic gap hypothesis; ten years 
after [discussion and reply], J. Geophys. Res., 98, 9909-9920, 1993.
Okada, Y., Internal deformation due to shear and tensile faults in a half-space, Bull. Seism. Soc. 
Am., 82, 1018-1040, 1992.
Okada, Y., Surface deformation due to shear and tensile faults in a half-space, Bull. Seism. Soc. 
Am., 75, 1135-1154, 1985.
Olami, Z., H.J.S. Feder, and K. Christensen, Self-organized criticality in a continuous, 
nonconservative cellular automaton modeling earthquakes, Phys. Rev. Lett., 68,  1244-
1247, 1992 (Comment by W. Klein and J. Rundle, Phys. Rev. Lett., 71, 1288, 1993, 
with reply by Christensen, Phys. Rev. Lett., 71, 1289, 1993)
Pacheco, J.F., C.H. Scholz, and L.R. Sykes, Changes in frequency-size relationship from small 
to large earthquakes, Nature (London), 355, 71-73, 1992.
Pepke, S.L., J.M. Carlson and B.E. Shaw, Prediction of large events on a dynamical model of a 
fault, J. Geophys. Res., 99, 6769-6788, 1994. 
Perrin, G.O., J.R. Rice and G. Zheng, Self-healing slip pulse on a frictional surface, J. Mech. 
Phys. Sol, 43, 1461-1495, 1995.
Place, D and P. Mora, Large scale simulations of the physics of earthquakes, Trans. Am. 
Geophys. Un. EOS, v. 76 Suppl., F424, 1995.
Rabinowicz, E., Friction and Wear of Materials, John Wiley, New York, 1965.
Reid, HF, in A.C. Lawson, The California Earthquake of April 18, 1906, Report of the  
State Earthquake Investigation Commission, Vol I (Carnegie Institution of 
Washington, Washington, DC, 1908).
Reynders, J. V. W. , D. W. Forslund, P. J. Hinker, M. Tholburn, D. G.  Kilman, and W. 
F. Humphrey, Object-Oriented Particle Simulation on Parallel Computers, OON-
SKI '94 Proceedings.
Reynders, J. V. W., D. W. Forslund, P. J. Hinker, M. Tholburn, D. G.  Kilman, and W. 
F. Humphrey, OOPS: An Object-Oriented Particle Simulation Class Library for 
Distributed Architectures, Computer Physics Communications, 87, 212, 1995.
Reynders, J. V. W.,  J. C. Cummings, P. J. Hinker, M. Tholburn, M. Srikant, S. 
Banerjee, S. Atlas, and K. Kaehy, POOMA: A Framework for Scientific Applications, 
to appear in Parallel Computing in C++, MIT Press, 1996.
Reynders, J. V. W., P.J. Hinker, J.C. Cummings, et al., POOMA: A Framework for 
Scientific Simulations on Parallel Architectures, Proceedings of the 1995 
Computational Sciences Workshop, University of New Mexico (in press).
Rice, J.R. and A.L. Ruina, Stability of steady frictional slipping, J. Appl. Mech., 105, 343-349, 
1983.
Rice, J.R. and J.C. Gu, Earthquake aftereffects and triggered seismic phenomena, Pure Appl. 
Geophys., 121, 443-475, 1983.
Rice, J.R. and M.Cleary, Some basic stress diffusion solutions for fluid saturated elastic porous 
media with compressible constituents, Rev. Geophys. Space Phys., 14, 227-241, 1976.
Rice, J.R. and S. T. Tse, Dynamic motion of a single degree of freedom system following a rate 
and state dependent friction law, J. Geophys. Res, 91, 521-530, 1986.
Rice, J.R. and W.D. Stuart,  Stressing in and near a strongly coupled subduction zone during the 
earthquake cycle,(abstract), EOS Trans. Am Geophys. Un., 70, (43), 1063, 1989.
Rice, J.R. and Y. Ben-Zion, Slip complexity in earthquake fault models, Proc. Nat. Acad. 
Sci.USA, in press, 1996.
Rice, J.R., Spatio-temporal complexity of slip on a fault, J. Geophys. Res., 98, 9885-9907, 1993.
Rice, J.R., The mechanics of earthquake rupture, in Physics of the Earth's Interior, ed. A.M. 
Dziewonski and E. Boschi, pp. 555-649, North-Holland, New York, 1980.
Richter, C.F., Elementary Seismology, W.H. Freeman, San Francisco, 1958.
Rokhlin, V., Rapid Solution of Integral Equations of Classical Potential Theory, J. Comp. 
Phys., 60, 187--207, 1985.
Romanowicz, B. and J. B. Rundle, On scaling relations for large earthquakes, Bull.Seism. Soc. 
Am., 83, 1294-1297, 1993.
Romanowicz, B., Strike-slip earthquakes on quasi-vertical transcurrent faults: Inferences for 
general scaling relations, Geophys. Res. Lett., 19, 481-484, 1992.
Rudnicki, J.W., J. Yin, and E.A. Roeloffs, Analysis of water level changes induced by fault creep 
at Parkfield, California, J. Geophys. Res., 98, 8143-8152, 1993.
Ruff, L.J., Asperity distributions and large earthquake occurrence in subduction zones, 
Tectonophys., 211, 61-83, 1992.
Ruff, L.J., Fault asperities inferred from seismic body waves, pp. 251-276, in 
Earthquakes: Observation, Theory and Interpretation, Course LXXXV, Proc. Intl. 
School of Physics "Enrico Fermi", the Italian Physical Society, North Holland, 
Amsterdam, 1983.
Rundle, J.B. and D.D. Jackson, Numerical simulation of earthquake sequences, Bull. 
Seism. Soc. Am., 67, 1363 - 1378, 1977.
Rundle, J.B. and D.L. Turcotte, New directions in theoretical studies of tectonic 
	deformation: A survey of recent progress, in Contributions of Space Geodesy to 
	Geodynamics: Crustal Dynamics, ed. D.E. Smith and D.L. Turcotte, Geodynamics 
	Ser. Vol. 23, AGU, Washington, DC, 1993.
Rundle, J.B. and S.R. Brown, Origin of rate dependence in frictional sliding, J. Stat. 
Phys., 65, 403-412, 1991.
Rundle, J.B. and W. Klein, Implications of a "toy" slider block model for the rupture mechanics 
of earthquakes, J. Geophys. Res., submitted, 1994 (also Trans. Am. Geophys. Un. EOS 
Fall meeting suppl. abstr.,  405, 1993)
Rundle, J.B. and W. Klein, New ideas about the physics of earthquakes, AGU 
Quadrennial report to the IUGG and Rev. Geophys. Space Phys., in press, 1994.
Rundle, J.B. and W. Klein, Nonclassical nucleation and growth of cohesive tensile cracks, 
Phys. Rev. Lett., 63, 171-174, 1989.
Rundle, J.B. and W. Klein, Nonlinear dynamical models for earthquakes and frictional 
sliding: An overview, Proc. 33rd Symposium Rock Mech., A.A. Balkema, 
Rotterdam, 1992.
Rundle, J.B. and W. Klein, Scaling and critical phenomena in a class of slider block 
cellular automaton models for earthquakes, J. Stat. Phys., 72,  405-412, 1993.
Rundle, J.B., A physical model for earthquakes, 2, Application to southern California, J. 
Geophys. Res., 93, 6255-6274, 1988.
Rundle, J.B., A physical model for earthquakes: 1, Fluctuations and interactions, J. 
Geophys. Res., 93, 6237-6254, 1988.
Rundle, J.B., A physical model for earthquakes: 3, Thermodynamical approach and its 
relation to nonclassical theories of nucleation, J. Geophys. Res., 94, 2839-2855, 
1989. 
Rundle,  J.B., Derivation of the complete Gutenberg-Richter magnitude-frequency relation using 
the principle of scale invariance, J. Geophys. Res., 94, 12337-12342, 1989.
Rundle, J.B., J. Ross, G. Narkounskaia, and W. Klein, Earthquakes, self-organization, 
and critical phenomena, J. Stat. Phys., submitted, 1995a.
Rundle, J.B., Magnitude frequency relations for earthquakes using a statistical mechanical 
approach, J. Geophys. Res., 98, 21943-21949, 1993.
Rundle, J.B., Some solutions for static and pseudo-static deformation in layered 
nonisothermal media, J. Phys. Earth, 30, 421-440, 1982b.
Rundle, J.B., Viscoelastic crustal deformation from a rectangular fault in a layered earth, J. 
Geophys. Res., 87, 7787-7796, 1982a.
Rundle, J.B., W. Klein and S. Gross, Dynamics of a Traveling Density Wave model for 
earthquakes, Phys. Rev. Lett., 76, 4285-4288, 1996.
Rundle, J.B., W. Klein, S. Gross and D.L. Turcotte, Boltzmann fluctuations in numerical 
simulations of nonequilibrium lattice threshold systems, Phys. Rev. Lett., 75, 
1658-1661,1995b.
Sahimi, M., M.C. Robertson and C.G. Sammis, Fractal distribution of earthquake hypocenters 
and its relation to fault patterns and percolation, Phys. Rev. Lett., 70, 2186-2189, 1993.
Salmon, J.K. and M.S. Warren, Skeletons from the treecode closet, J. Comp. Phys., 
	111, 136-155, 1994.
Samet, H., Design and Analysis of Spatial Data Structures, Addison-Wesley, Reading, 
	MA, 1990.
Sammis, C.G., D. Sornette and H. Saleur, Complexity and earthquake forecasting, pp. 
	143-156, in J.B. Rundle, D.L. Turcotte and W. Klein, eds., Reduction and Predictability 
	of Natural Disasters, vol. XXV in Santa Fe Institute series in Sciences of 
	Complexity, Addison Wesley, 1996.
Sanders, C.O., Interaction of the San Jacinto and San Andreas fault zones, southern California: 
Triggered earthquake migration and coupled recurrence intervals, Science, 260, 973-975, 
1993.
Savage, J.C., Criticism of some forecasts of the National Earthquake Prediction Evaluation 
Council, Bull. Seismol. Soc. Amer., 81,  862-881, 1991.
Savage, J.C., The Parkfield prediction fallacy, Bull. Seismol. Soc. Amer., 83, 1-6, 1993.
Savage, J.C., W.H. Prescott, and G. Gu, Strain accumulation in southern California, 1973-1984, 
J. Geophys. Res., 91, 7455-7473, 1986.
SCEC (Southern California Earthquake Center, Working group on California earthquake 
probabilities), Seismic hazards in southern California: Probable earthquakes, 1994 
to 2024, Bull. Seism. Soc. Am., 85, 379-439, 1995.
Scholz, C.H., A reappraisal of large earthquake scaling, Bull. Seism. Soc. Am., 84, 215-218, 
1994.
Scholz, C.H., The Mechanics of Earthquakes and Faulting, Cambridge University Press, 
Cambridge, U.K., 1990.
Schulman, L.S., Techniques and Applications of Path Integration, John Wiley, New York, 
1981.
Schwartz, D.P. and K.J. Coppersmith, Fault behavior and characteristic earthquakes: 
Examples from the Wasatch and San Andreas faults, J. Geophys. Res., 89, 5681-
5698, 1984.
Schwartz, S.Y. and L.J. Ruff, Asperity distribution and earthquake occurrence in the 
southern Kurile Islands arc, Phys. Earth Planet. Int., 49, 54-77, 1987.
Schwartz, S.Y. and L.J. Ruff, The 1968 Tokachi-Oki and the 1969 Kurile Islands 
earthquakes: Variability in the rupture process, J. Geophys. Res., 90, 8613-8626, 
1985.
Schwartz, S.Y., J.W. Dewey and T. Lay, Influence of fault plane heterogeneity on the 
seismic behavior in the southern Kurile Islands arc, J. Geophys. Res., 94, 5637-
5649, 1989.
Senatorski, P., Fault Zone Dynamics Evolution Patterns, Ph.D. Dissertation, Geophysical 
Institute of the Polish Academy of Sciences (Polska Akademia Nauk, Instytut 
Geofizyki), Warsaw, 1993.
Shaw, B.E., J.M. Carlson, and J.S. Langer, Patterns of seismic activity preceding large 
earthquakes, J. Geophys. Res., 97,  479-488, 1992.
Shkoller, S. and Minster, J.B., "Chaos in a Burridge-Knopoff Model with state and rate 
dependent friction law", submitted  to Nonlinear Processes in Geophysics, 1996.
Sibson, R.H., Implications of fault-valve behaviour for rupture nucleation and recurrence, 
Tectonophys. 211,  283-293, 1992.
Sieh, K.E. and others, Near field investigations of the Landers earthquake sequence, April to July, 
1992, Science, 260, 171-176, 1993.
Sieh, K.E. and R.H. Jahns, Holocene activity of the San Andreas at Wallace Creek, California, 
Geol. Soc. Bull., 95, 883-896, 1984.
Smale, S., Differentiable dynamical systems, Bull. Amer. Math. Soc., 73, 747-817, 
1967.
Smalley, R.F., D.L. Turcotte and S.A. Solla, A renormalization group approach to the 
stick slip behavior of faults, J. Geophys. Res., 90, 1884-1900, 1985.
Smalley, R.F., J.L. Chatelain, D.L. Turcotte, and R. Prevot, A fractal approach to the 
clustering of earthquakes: Applications to the seismicity of the New Hebrides, Bull. 
Seism. Soc. Am., 77, 1368-1381, 1987.
Sornette, D. and J. Virieux, Linking short-timescale deformation to long-timescale tectonics, 
Nature, 357,  401-404, 1992.
Sornette, D., A. Johansen, A. Arneodo, J.F. Muzy and H. Saleur, Complex fractal dimensions 
	describe the hierarchical structure of Diffusion-Limited-Aggregation clusters, Phys. Rev. 
	Lett., 76, 251-254, 1996.
Stauffer, D. and A. Aharony, Introduction to Percolation Theory, Taylor and Francis, 
London, 1992.
Stein, R. S., A. A. Barka, and J. H. Dieterich, Progressive earthquake failure on the North 
Anatolian fault since 1939 by stress triggering, Geophys. J. Int., submitted, 1996.
Stein, R. S., G. C. P. King, and J. Lin, Stress triggering of the 1994 M=6.7 Northridge, 
California, earthquake by its predecessors, Science, 265, 1432-1435, 1994. 
Stein, R.S., G.C.P. King and J. Lin, Changes in failure stress on the southern San Andreas fault 
	system caused by the 1992 magnitude = 7.4 Landers earthquake, Science, 258, 1328-
	1332, 1992.
Steketee, J.A., On Volterra's dislocations in a semi-infinite elastic medium, Can. J. Phys., 
36, 192-205, 1958.
Stuart, W., Strain softening prior to two dimensional strike slip earthquakes, J. Geophys. Res., 
84, 2153-2160, 1979.
Stuart, W.D. and T.E. Tullis, Fault model for preseismic deformation at Parkfield, 
California, J. Geophys. Res., 100, 24079-24101, 1995.
Stuart, W.D., Forecast model for recurring large and great earthquakes in  southern 
California, J. Geophys. Res., 91, 13771-13786, 1986.
Stuart, W.D., Instability model for recurring large and great earthquakes in southern California, 
Pure Appl. Geophys., 122, 793-811, 1985.
Taylor, M.J., G. Zheng, J.R. Rice, W.D. Stuart and R. Dmowska, Cyclic stressing and 
seismicity at strongly coupled subduction zones, J. Geophys. Res., 101, 8363-
8381, 1996.
Tse, S. and J.R. Rice, Crustal earthquake instability in relation to the depth variation of 
frictional slip properties, J. Geophys. Res., 91, 9452-9472, 1986.
Turcotte, D.L., Earthquake prediction, Ann. Rev. Earth Planet. Sci., 19, 263-281, 
	1991.
Turcotte, D.L., Fractals and Chaos in Geology and Geophysics, Cambridge University Press, 
	Cambridge, 1992.
Vasconcelos, G.L., M. de Sousa Vieira, and S.R. Nagel, Implications of a conservation law for 
the distribution of earthquake sizes, Phys. Rev. A., 44, R7869-R7872, 1991.
Wald, D.J., D.V. Helmberger, and T.H. Heaton, Rupture model of the 1989 Loma Prieta 
earthquake from the inversion of strong-motion and broadband teleseismic data,  Bull. 
Seismol. Soc. Amer., 81, 1540-1572, 1991.
Wallace, T.C., A. Velasco, J. Zhang, and T. Lay, A broadband seismological investigation of the 
1989 Loma Prieta, California, earthquake; evidence for deep slow slip?, Bull. Seismol. 
Soc. Amer., 81, 1622-1646, 1991.
Ward, S.N. and S.D.B. Goes, How regularly do earthquakes recur? A synthetic seismicity model 
	for the San Andreas fault, Geophys. Res. Lett., 20, 2131-2134, 1993.
Ward, S.N., An application of synthetic seismicity in earthquake statistics: the Middle America 
trench, J. Geophys. Res., 97, 6675-6682, 1992.
Ward, S.N., Synthetic earthquake models for long-term prediction, Geotimes, 37, 18-20, 1992.
Warren, M.S. and J.K. Salmon, A portable parallel particle program, Computer Physics 
	Communications, 87, 266-290, 1995.
Warren, M.S. and J.K. Salmon, A Parallel, Portable and Versatile Treecode, Proc. 
Seventh SIAM Conference on Parallel Processing for Scientific Computing, San 
Francisco, CA, 15--17 Feb., 1995: 319--324.
Warren, M.S., P.J. Quinn, J.K. Salmon and W.H. Zurek, Dark halos formed via 
	dissipationless collapse: 1. Shapes and alignment of angular momentum, 
	Astrophys. J., 399, 405-425, 1992.
Weatherwise, J. Am Met. Sci., June/July, 1995.
Wuethrich, B., California quake assessed, Trans. Am Geophys. Un. EOS., 75, 41, 1994.
Yeomans, J.M., Statistical Mechanics of Phase Transitions, Clarendon Press, Oxford, 
1992.
Zheng, G., Y. Ben-Zion, J.R. Rice and J. Morrissey, A new method for calculating 
elastodynamic response over long stressing histories containing rapid rupture 
episodes, Trans. Am. Geophys. Un. EOS (Suppl.), 76, F405, 1995.

F. 	COMPUTATIONAL FACILITIES

	Simulations will be carried out in a variety of computational environments.  These 
are described below.

Advanced Computing Laboratory:

	The computing hardware in the LANL Advanced Computing Laboratory is about to 
begin a multi-year, phased upgrade focused on clustered SMP technology.  In the broadest 
terms, an initial SMP cluster arriving in Fall 1996 will have about the same capacity as our 
1024-node TMC CM-5 but with greater efficiency; a roughly five-fold technology refresh 
will follow in Fall 1997, and the final configuration will be in place in Winter 1998.

              October 1996         October 1997         October 1998
Peak speed    100 Gflops           600 Gflops           1000 Gflops
Memory        30 Gbytes            150 Gbytes           250 Gbytes
Disk          600 Gbytes           3000 Gbytes          5000 Gbytes
Network       multiple HIPPIs      multiple HIPPIs      multiple HIPPI 6400s

	The visualization environment for the Grand Challenges will consist of state-of-the-
art, high-end SGI servers for hardware-assisted visualization. At the start of the program 
(Oct 1, 1996), our graphics server facilities will consist of a dual-headed Onyx with two RE-2 
graphics engines, 8 CPUs, and 24 GB of disk, and a 2-processor R10K system with an Infinite 
Reality graphics engine. We expect to phase out the Onyx and bring on-line a minimum of 3 
Infinite Reality engines over time. These will be connected via striped HIPPI to the 
SMP cluster that forms the main compute capability of the ACL. In addition to the high-
end graphics hardware, the ACL provides video laser-disk assisted animation capabilities 
for the generation of video tapes (SVHS/VHS/Hi-8/8). We encourage participants to leverage 
the visualization environment in the ACL with mid-range SGI desktops such as the 
Maximum Impact Indigo2 with a full complement of memory.

Northeast Parallel Architectures Center (NPAC):

	The Northeast Parallel Architectures Center (NPAC) is a research and  development 
center focusing on high performance computing and communications. Established at 
Syracuse University in 1987, the center is directed by Geoffrey C. Fox, a pioneer in the 
development and application of parallel computers, who assumed NPAC leadership in 
1990.  NPAC has a strong emphasis on computational science applications where one 
seeks to obtain "real solutions to real problems." NPAC's InfoMall technology transfer 
program puts high-performance computing and communications (HPCC) to work in 
industry. Other major projects include research and development in the areas of parallel 
languages and parallel compilers, including work on High Performance Fortran (HPF) -- a 
standardized parallel version of Fortran; distributed computing; parallel database 
technology; integration of relational databases with the NII; parallel algorithms; distributed 
and cluster computing; and networked digital multimedia applications providing 
Information, Video, Imagery, and Simulation on demand.

	NPAC's role in this collaboration will focus on appropriate use of the best HPCC 
technologies, such as MPI and HPF, in both the Southern California models and in GEM. 
This will naturally draw on the expertise of the NSF Science and Technology Center 
CRPC -- the Center for Research on Parallel Computation -- led by Kennedy at Rice, of 
which Fox is a founding member. The Southern California model and GEM are both natural 
projects in which to exploit World Wide Web technology, where NPAC is a pioneer in 
exploiting both databases (http://asknpac.npac.syr.edu) and distributed computing 
(http://www.npac.syr.edu/factoring.html). The Web naturally links the data and simulation 
aspects of GEM, and this integration is at the heart of a joint MARINER-NPAC project, 
WebWork, in which Fox is collaborating with Marina Chen, the head of the computer 
science department at Boston University.
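
	To make the intended use of message passing concrete, the sketch below shows, in 
C++, how a set of fault segments might be block-decomposed across MPI processes, with a 
global diagnostic accumulated by a reduction. This is a minimal illustration only; the names 
(nSegments, localSlip) and the decomposition are assumptions for this sketch and are not 
part of the GEM, POOMA, or NPAC software.

// Minimal MPI sketch (illustrative only): block decomposition of fault
// segments across processes, with a global sum as a diagnostic.
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nprocs = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int nSegments = 1024;            // total fault segments (assumed)
    const int nLocal = nSegments / nprocs; // simple block decomposition
    std::vector<double> localSlip(nLocal, 0.0);

    // ... each process would update slip on its own segments here ...

    double localSum = 0.0;
    for (int i = 0; i < nLocal; ++i) localSum += localSlip[i];

    double globalSum = 0.0;                // summed slip over all segments
    MPI_Allreduce(&localSum, &globalSum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0) std::printf("global slip sum = %g\n", globalSum);
    MPI_Finalize();
    return 0;
}

An HPF formulation of the same computation would express the distribution declaratively 
(for example, through a BLOCK distribution directive) rather than through explicit messages.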

MARINER:

	The MARINER (Metacenter-Affiliated Resource in the New England Region) 
project (established at Boston University in October 1995) provides a focal point for 
HPCC/NII related activities in New England.  As a Metacenter Regional Affiliate, its goal 
is to act as a catalyst for wide diffusion of technologies and as a channel to the Metacenter. 
MARINER will draw on the University's advanced computing and communications 
facilities and build on its broad experience in computational science, scalable parallel 
computing and wide area networking in order to provide education and training to a wider 
audience as well as to facilitate computational science research and education.

	Computational facilities that will be available for this project are the supercomputing 
facilities associated with the Center for Computational Science at Boston University. At 
present these facilities include a 38 processor SGI POWER CHALLENGEarray with a 
peak performance of 13 Gflops, high performance networks and associated visualization 
peripherals, and a laboratory of networked SGI graphics workstations.  The POWER 
CHALLENGE array represents the first phase equipment of Boston University's third 
generation supercomputing facility.  The equipment for the second phase will consist  of 
SGI's next-generation, highly-scalable supercomputing system with about 25 Gflops of 
peak performance. The first phase systems will continue in operation, so that our total 
computational power will increase to about 38 Gflops at that time. About 12 months later, 
the third and final phase will consist of an upgrade to the second phase system, bringing it 
to a peak power of about 65 Gflops, or a total of more than 75 Gflops including the first 
phase systems.  The computing environment also includes numerous high performance 
workstations, HiPPI and FDDI high performance networks in addition to universal 
Ethernet connectivity and a 10Mb per second connection to the NSF backbone. In the near 
future we plan to provide ATM connectivity by routing through the SGI Power Challenge 
machine to a Fore Systems ATM switch. ATM will also be brought to several Indigo-2 
workstations serving scientific computing at the University.


G.	PROJECT PERSONNEL

	G.1	Summary of Past and Present Collaborations

	The challenges posed by the GEM project are at least equal to those faced by 
atmospheric scientists working on general circulation models (GCMs), because the physical 
processes are less susceptible to direct observation.  We have therefore brought together an 
extremely high quality team of some of the best theoretical and observational (experimental) 
geophysicists, condensed matter physicists and computational scientists in the country.  All 
of these scientists have collaborated in the past in various subsets.

	Rundle and Klein have collaborated since 1987 on a series of papers whose subject 
is the physics of earthquakes, and the relation between slip events and nucleation 
phenomena.  The most recent of these papers was published in Phys. Rev. Lett., 75, 1658-
1661, 1995.  Rundle has also collaborated with Turcotte, Kanamori, Minster, and Stein at 
various times, and has published with all of them on the subject of stress redistribution in 
earthquakes; source processes of earthquakes; and geodetic tests of earthquake models.  
Crutchfield and Rundle are currently developing pattern recognition algorithms for 
seismicity distributions.  Jordan and Minster have a long history of collaboration on plate 
tectonic and other research dating back 25 years to the time they were graduate students 
together at Caltech.  Turcotte and Allegre are long-time collaborators in research dealing with 
mantle convection and the applicability of renormalization group techniques to scaling 
problems in the earth sciences.

	Fox has worked on many interdisciplinary projects during his years at Caltech, 
including neural network activities with Hopfield (an unfunded collaborator with Rundle) as 
well as several other data analysis and simulation projects with geophysicists in the 
Seismological Laboratory, among them parallel-computer simulations of seismic wave 
propagation (with Clayton) and mantle convection (with Hager, before he left).  These and 
some 50 other examples of successfully integrating HPCC technologies into applications are 
described in the recent book Parallel Computing Works!  Fox is also collaborating with 
Marina Chen, head of the computer science department at Boston University.

	Fox has also collaborated with the computer and computational scientists at the 
Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory.  In particular, 
two of his former students, Warren and Salmon, are currently funded by the ACL, either 
directly (Warren) or indirectly (Salmon); both played major roles in developing fast 
multipole and treecode algorithms.  Rundle has known Krogh for over a year.  Krogh will 
work closely with the ACL group and will act as an in-house interface between the earth 
science community and the ACL.

	York, Klein and Giles have all collaborated in the past in carrying out large-scale 
supercomputer simulations of nonlinear problems.  York and his students are currently 
collaborating with Rundle to port Rundle's 1988 model to the SGI PCA in Fortran 77.  In 
addition, they are implementing primitive surface patch models for use in developing 
sophisticated fault geometries.