
The Preprocessing Phase

In the previous section, we developed the theoretical foundations of parallel Gauss-Seidel methods for block-diagonal-bordered sparse matrices. We now discuss the procedures required to generate the permutation matrices that produce block-diagonal-bordered/multi-colored sparse matrices, so that our parallel Gauss-Seidel algorithm is efficient. We must reiterate that all parallelism in our Gauss-Seidel algorithm is identified from the interconnection structure of elements in the sparse matrix during this preprocessing phase, and that the sparse matrix must be ordered in such a manner that processor loads are balanced. The technique we have chosen for this preprocessing phase is to:

  1. order the matrix into block-diagonal-bordered form while minimizing the size of the last diagonal block,
  2. order the last diagonal block using multi-coloring techniques.
Inherent in both preprocessing steps is explicit load-balancing to determine processor/data mappings for efficient implementation of the Gauss-Seidel algorithm.
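The second preprocessing step can be illustrated with a sketch. The following is a minimal greedy multi-coloring routine, not the author's implementation: it assigns colors to the vertices of the dependency graph of the last diagonal block so that no two adjacent vertices share a color, and rows with the same color can then be updated in parallel. The function name and the dictionary-based graph representation are assumptions for illustration.

```python
def greedy_multicolor(adjacency):
    """Assign colors so that no two adjacent vertices share a color.

    adjacency: dict mapping vertex -> iterable of neighboring vertices,
    derived from the nonzero structure of the last diagonal block.
    Returns a dict mapping vertex -> color (0, 1, 2, ...).
    """
    colors = {}
    for v in adjacency:                      # a fixed visit order; ordering heuristics vary
        used = {colors[n] for n in adjacency[v] if n in colors}
        c = 0
        while c in used:                     # smallest color unused by any colored neighbor
            c += 1
        colors[v] = c
    return colors

# Example: a 4-vertex cycle (each vertex adjacent to two others) needs two colors,
# so its rows split into two sets that can each be updated in parallel.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
coloring = greedy_multicolor(ring)
```

A production ordering would also weigh the number of rows per color against processor counts, since the explicit load balancing mentioned above must map color classes evenly onto processors.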

This preprocessing phase incurs significantly more overhead than solving a single instance of the sparse matrix. Consequently, the technique is limited to problems with static matrix structures, where the ordered matrix and load-balanced processor assignments can be reused many times, amortizing the cost of the preprocessing phase over numerous matrix solutions.





David P. Koester
Sun Oct 22 15:29:26 EDT 1995