From: John Salmon <johns@fishnet.caltech.edu>
To: gcf@npac.syr.edu
Date: Mon, 19 Apr 1999 09:12:51 -0700
Subject: abstract

Here it is.  I thought about disappearing to the Far East, but there
just weren't any flights available to Cambodia this time :-)

John

===File ~/crpc/cosmo.html===================================

Computational Cosmology

A very brief review of the problem domain emphasizes the different length scales and relevant physics. Length scales encompass stars, galaxies, clusters, filaments, sheets and voids. Observations provide redshifts, ages, masses, and correlations in position and velocity. The "missing mass" implies a dominant dark matter component. Theory allows for a density parameter, a cosmological constant and various flavors of dark matter in addition to "normal" baryonic matter. At the largest scales, Newtonian gravity dominates and numerical methods that follow gravitating particles are good models of the real Universe. Analytic approaches are hampered by the nonlinearity, positive feedback and negative heat-capacity of gravitating systems.
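To make "following gravitating particles" concrete, here is a minimal sketch (not from the abstract itself) of the basic particle method: direct Newtonian pairwise forces with a leapfrog integrator. The Plummer softening length eps and the G = 1 units are illustrative assumptions, not anything specified above.

import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Pairwise Newtonian accelerations, O(N^2), with Plummer softening."""
    acc = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        d = pos - pos[i]                      # vectors from particle i to all others
        r2 = (d * d).sum(axis=1) + eps * eps  # softened squared distances
        r2[i] = 1.0                           # avoid divide-by-zero on self
        w = mass / r2**1.5
        w[i] = 0.0                            # exclude the self-force
        acc[i] = (w[:, None] * d).sum(axis=0)
    return acc

def leapfrog_step(pos, vel, mass, dt):
    """Kick-drift-kick leapfrog, the usual symplectic choice for N-body work."""
    vel += 0.5 * dt * accelerations(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, mass)
    return pos, vel

# Example: an equal-mass binary on a circular orbit (G = 1 units).
pos = np.array([[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]])
vel = np.array([[0.0, 0.5, 0.0], [0.0, -0.5, 0.0]])
mass = np.array([0.5, 0.5])
pos, vel = leapfrog_step(pos, vel, mass, dt=0.01)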

Supercomputers have played a significant role for at least the last 30 years in understanding the large-scale structure of the Universe. Computational methods include Poisson solvers, direct O(N^2) summation, and "Fast" (O(N log N) and O(N)) methods. Parallel implementations exist for each of these algorithmic approaches. Poisson solvers can use FFT or multigrid methods (both of which exist in parallel). O(N^2) methods have perhaps the best parallel efficiency of any non-trivial parallel application (T_comm/T_calc is proportional to P/N). At least two approaches to Fast methods have been shown to be workable. Orthogonal Recursive Bisection with explicit assembly of "locally essential" data works when the locally essential data is predictable in advance; it has difficulty, however, supporting improved sequential algorithms with dynamic "acceptability criteria". For these, a new approach based on space-filling curves is required. It has the additional benefit of being "cache friendly", and it naturally supports both out-of-core and parallel computation, which is important because one consequence of "Fast"ness is that one tends to run out of memory before one runs out of patience.
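As a sketch of the space-filling-curve idea, here is a Morton (Z-order) key, one common choice of curve; the particular curve, the 10-bit-per-axis key width, and the unit-cube coordinates are assumptions for illustration. Sorting particles by such a key gives a one-dimensional ordering in which spatially nearby particles tend to be nearby in memory (cache friendly), on disk (out-of-core), and in contiguous per-processor blocks (parallel).

def morton_key(x, y, z, bits=10):
    """Interleave the top `bits` bits of coordinates in [0,1) into one key."""
    def spread(v):
        # Quantize to `bits` bits, then insert two zero bits between each bit.
        v = int(v * (1 << bits))
        out = 0
        for b in range(bits):
            out |= ((v >> b) & 1) << (3 * b)
        return out
    return spread(x) | (spread(y) << 1) | (spread(z) << 2)

# Nearby points map to nearby keys, so a simple key sort clusters them.
pts = [(0.10, 0.20, 0.90), (0.11, 0.21, 0.91), (0.80, 0.10, 0.30)]
pts.sort(key=lambda p: morton_key(*p))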

Computation has allowed us to understand the implications of different scenarios and parameter sets, including the effects of the input power spectrum of fluctuations, the evolution of non-linear clustering and correlation functions, and the effects of the density parameter and the cosmological constant on observational data. Computations continue to be challenged by ever-improving observations, but they are important tools for understanding, and they effectively rule out many scenarios by demonstrating them to be inconsistent with observation.
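As one concrete example of the clustering statistics mentioned above, here is a sketch of estimating the two-point correlation function xi(r) from pair counts in a periodic simulation box. The minimum-image convention, the simple DD/RR - 1 estimator, and the binning are illustrative assumptions; the brute-force pair enumeration is fine for small N, while production codes count pairs with a tree or grid.

import numpy as np

def xi_of_r(pos, box, bins):
    """Estimate xi(r) by comparing data pair counts in a periodic box
    with the counts expected for an unclustered (Poisson) distribution."""
    n = len(pos)
    # All pairwise separation vectors, with the minimum-image convention.
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d * d).sum(axis=-1))
    iu = np.triu_indices(n, k=1)              # each pair once, no self-pairs
    dd, edges = np.histogram(r[iu], bins=bins)
    # Expected pair counts per radial shell for a uniform distribution.
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    rr = 0.5 * n * (n - 1) * shell / box**3
    return 0.5 * (edges[:-1] + edges[1:]), dd / rr - 1.0

# For uniform random points, xi(r) should scatter around zero.
rng = np.random.default_rng(0)
pos = rng.random((200, 3))
r, xi = xi_of_r(pos, box=1.0, bins=np.linspace(0.05, 0.5, 10))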

Similar numerical methods (indeed the first "Fast" method) apply to potential and scattering problems for the Poisson, Helmholtz and Maxwell equations. Other application areas include electrostatic interactions in chemical dynamics, stress-strain interactions in solid mechanics and geophysics, and vortex dynamics in incompressible CFD.
============================================================