Subject: eq_model
Resent-Date: Mon, 08 Nov 1999 17:07:54 -0500
Resent-From: Geoffrey Fox
Resent-To: p_gcf@npac.syr.edu
Date: Fri, 05 Nov 1999 17:21:43 -0800
From: "Kenneth J. Hurst"
To: Geoffrey Fox

Another example of how the GEMCI might be used can be drawn from an attempt to create a computer model of California seismicity. Such a model, to the extent that it is realistic, could be quite useful in guiding our intuition about the earthquake process and in suggesting new measurements or lines of inquiry.

Making such a model realistic requires many different types of data and expertise. A partial list of data types might include: paleoseismic data, GPS data, leveling data, strain data, strong-motion data, regional seismicity data, downhole seismic data, multi-channel seismic data, laboratory measurements of the mechanical properties of various rocks, 3-D geologic structure, gravity data, heat flow data, magnetotelluric data, hydrology data, and ocean tide data (to constrain coastal uplift). There are productive scientists who have built entire careers on each individual type of data mentioned above. No individual scientist at this time has access to complete collections of all these data types, nor the detailed expertise that would be required to use all these data in a computational model. Consequently, either the models we develop will be based on some subset of the data, and we will crow from the rooftops whenever we serendipitously succeed in modeling other data subsets, or we must develop an environment in which people can collaborate to construct and test models that span multiple data sets and that can then be embellished by other investigators.

To maximize the utility of any given data set, the people who collect and care about the data must be the ones who archive it and make it available. Attempts to construct monolithic databases of diverse earth-science data using dedicated people and facilities have met with limited success; it appears to work better when the people or community who "own" the data are the ones who archive it. Thus, we need an environment that will allow easy collaboration and access to diverse, distributed data sets.

To return to our example of modeling the seismicity of California, an investigator might start with a map of faults in the region, a constitutive law for determining the slip on those faults derived from laboratory measurements, and some sort of loading criterion, perhaps derived from GPS measurements. From these inputs, a model is constructed that reproduces the statistics of rupture as shown by paleoseismic data and earthquake catalogs. Subsequently, another investigator takes that model and adds pore-fluid-pressure modeling in order to investigate the effects of fluids on the nucleation of earthquakes and the migration of seismicity. This process is repeated, with other investigators collaborating and developing competing and/or complementary models. The models can be benchmarked against other data sets and against each other. This kind of environment will enable us to make the most rapid possible progress in understanding the earthquake process.

Ken Hurst
Jet Propulsion Lab / Caltech
Mail Stop 238-600
Pasadena, CA 91109
voice: 818-354-6637
FAX: 818-393-4965
hurst@cobra.jpl.nasa.gov
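
To make the composable modeling workflow Hurst describes more concrete, here is a minimal sketch in Python. It is only an illustration under stated assumptions: the names (FaultSegment, SeismicityModel, simple_constitutive_law, pore_pressure_modifier) are hypothetical and do not correspond to any actual GEMCI component or data set. One investigator assembles a base model from a fault map, a toy constitutive law, and GPS-style slip rates; a second investigator then embellishes the same model with a pore-fluid-pressure modifier and reruns the simulation for comparison.

    # Hypothetical sketch of the composable modeling workflow described above.
    # All names here are illustrative assumptions, not part of any existing code base.
    import random
    from dataclasses import dataclass, field
    from typing import Callable, List


    @dataclass
    class FaultSegment:
        name: str
        length_km: float
        slip_rate_mm_yr: float          # loading, e.g. inferred from GPS measurements


    @dataclass
    class SeismicityModel:
        faults: List[FaultSegment]
        # Constitutive law: maps accumulated slip deficit (m) to rupture probability per year.
        constitutive_law: Callable[[float], float]
        # Later investigators can append modifiers (e.g. a pore-fluid-pressure correction).
        modifiers: List[Callable[[float], float]] = field(default_factory=list)

        def annual_rupture_probability(self, fault: FaultSegment, years_since_rupture: float) -> float:
            slip_deficit_m = fault.slip_rate_mm_yr * years_since_rupture / 1000.0
            p = self.constitutive_law(slip_deficit_m)
            for modify in self.modifiers:
                p = modify(p)
            return min(p, 1.0)

        def simulate(self, years: int, seed: int = 0) -> List[int]:
            """Return toy Monte Carlo rupture counts per fault over the given number of years."""
            rng = random.Random(seed)
            counts = [0] * len(self.faults)
            since = [0.0] * len(self.faults)
            for _ in range(years):
                for i, fault in enumerate(self.faults):
                    since[i] += 1.0
                    if rng.random() < self.annual_rupture_probability(fault, since[i]):
                        counts[i] += 1
                        since[i] = 0.0
            return counts


    def simple_constitutive_law(slip_deficit_m: float) -> float:
        """Toy law: rupture probability grows with slip deficit (stand-in for lab-derived friction laws)."""
        return min(0.5 * slip_deficit_m, 1.0)


    def pore_pressure_modifier(factor: float = 1.2) -> Callable[[float], float]:
        """A second investigator's add-on: scale rupture probability to mimic fluid effects."""
        return lambda p: min(p * factor, 1.0)


    if __name__ == "__main__":
        faults = [FaultSegment("San Andreas (Mojave)", 100.0, 30.0),
                  FaultSegment("Garlock", 250.0, 7.0)]

        base = SeismicityModel(faults, simple_constitutive_law)
        print("base model:", base.simulate(years=1000))

        # Another investigator extends the same model with a pore-fluid term
        # and compares the resulting rupture statistics against the base run.
        extended = SeismicityModel(faults, simple_constitutive_law,
                                   modifiers=[pore_pressure_modifier()])
        print("with pore-pressure modifier:", extended.simulate(years=1000))

In the collaborative environment the email argues for, the rupture statistics produced by a run like this would be benchmarked against paleoseismic data and observed earthquake catalogs, rather than against another toy simulation.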