For parallel sparse block-diagonal-bordered
iterative methods to be efficient when solving irregular sparse
matrices, we must:
- order the matrix into block-diagonal-bordered form (shown schematically below) while minimizing the size of the last diagonal block,
- order the last diagonal block using multi-coloring techniques.
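For reference, the block-diagonal-bordered form sought in the first step has the following schematic structure; the block labels below are illustrative only:

\[
A =
\left[
\begin{array}{ccccc}
A_1 &     &        &     & B_1 \\
    & A_2 &        &     & B_2 \\
    &     & \ddots &     & \vdots \\
    &     &        & A_m & B_m \\
C_1 & C_2 & \cdots & C_m & A_{\mathrm{last}}
\end{array}
\right]
\]

The diagonal blocks $A_1,\dots,A_m$ are mutually independent, the borders $B_i$ and $C_i$ couple each block only to the last diagonal block $A_{\mathrm{last}}$, and it is the size of $A_{\mathrm{last}}$ that the first ordering step attempts to minimize.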
After performing the first ordering step, we must:
- pseudo-solve to identify the workload for each diagonal block and corresponding portion of the borders (sketched below),
- load balance to distribute the workload uniformly among processors.
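The pseudo-solve amounts to counting the multiply/addition operations a Gauss-Seidel iteration would perform, rather than performing them. A minimal sketch in C, assuming a compressed sparse row (CSR) representation and a row-to-block mapping; the routine and variable names are illustrative, not code from this work:

/* Pseudo-solve sketch: count the multiply/addition pairs one Gauss-Seidel
 * iteration performs for the rows of each diagonal block.  Assumes a CSR
 * matrix (row_ptr/col_idx) and a block[] array mapping each row to its
 * diagonal block; all names are illustrative, not the code used here. */
void pseudo_solve(int n, const int *row_ptr, const int *col_idx,
                  const int *block, int nblocks, long *ops)
{
    for (int b = 0; b < nblocks; b++)
        ops[b] = 0;
    for (int i = 0; i < n; i++) {
        long row_ops = 0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            if (col_idx[k] != i)        /* each off-diagonal non-zero costs */
                row_ops++;              /* one multiply and one addition    */
        ops[block[i]] += row_ops;       /* charge the row to its block      */
    }
}

Because the count follows the non-zero structure of each row, it captures the work in a diagonal block together with that block's portion of the borders.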
As with the preprocessing step for direct methods, the metric for load
balancing must be based on the actual workload required by the
individual processors. This number may differ substantially from the
number of equations assigned to processors because the number of
calculations in a matrix partition is a function of the number of
non-zero matrix elements in that block, not of the number of equations
in the block. For dense matrices, the computational complexity of a
Gauss-Seidel iteration is $O(n^2)$; however, the computational
complexity of an iteration for sparse power systems
matrices is only slightly greater than $O(n)$. Determining the
actual workload requires calculating the number of multiply/addition
operations in each matrix block, which we refer to as the
pseudo-solution.
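Given such per-block operation counts, one simple way to distribute the workload uniformly is a greedy heuristic that repeatedly assigns the largest remaining block to the least-loaded processor. This is a sketch of that heuristic only, not necessarily the assignment algorithm used here; names are illustrative:

#include <stdlib.h>

/* Load-balancing sketch: assign diagonal blocks to processors so that the
 * summed operation counts from the pseudo-solve are roughly uniform.
 * Largest block first, always to the currently least-loaded processor. */
void balance(int nblocks, const long *ops, int nprocs,
             int *owner, long *load)
{
    int *order = malloc(nblocks * sizeof *order);

    /* order the blocks by descending operation count (insertion sort) */
    for (int b = 0; b < nblocks; b++) {
        int j = b;
        while (j > 0 && ops[order[j - 1]] < ops[b]) {
            order[j] = order[j - 1];
            j--;
        }
        order[j] = b;
    }

    for (int p = 0; p < nprocs; p++)
        load[p] = 0;
    for (int k = 0; k < nblocks; k++) {
        int b = order[k], best = 0;
        for (int p = 1; p < nprocs; p++)    /* find the least-loaded processor */
            if (load[p] < load[best])
                best = p;
        owner[b] = best;                    /* assign block b to that processor */
        load[best] += ops[b];
    }
    free(order);
}

The resulting load[] totals give a direct measure of how uniform the distribution is.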
When performing the second ordering step, we must consider that there
are relatively few calculations in the last diagonal block and that the
communications requirements for this portion of the algorithm are
extensive. In the worst case, every value calculated on a processor
must be broadcast to all other processors, resulting in significant
communications overhead. Consequently, we load-balance this ordering
step as a function of the communications volume, and we integrate this
load balancing with the graph multi-coloring algorithm.
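For concreteness, a plain greedy multi-coloring of the adjacency graph of the last diagonal block is sketched below. Rows that receive the same color have no direct coupling and can be updated concurrently; the communication-based load balancing described above would be folded into the color assignment itself and is not shown. The CSR-style adjacency arrays and names are assumptions for illustration only:

/* Multi-coloring sketch for the last diagonal block: greedy first-fit
 * coloring over a symmetric CSR adjacency structure (adj_ptr/adj_idx).
 * Returns the number of colors used; color[i] is the color of row i. */
int multicolor(int n, const int *adj_ptr, const int *adj_idx, int *color)
{
    int ncolors = 0;
    for (int i = 0; i < n; i++)
        color[i] = -1;                     /* -1 means "not yet colored" */
    for (int i = 0; i < n; i++) {
        int c = 0, conflict = 1;
        while (conflict) {                 /* find the smallest color not  */
            conflict = 0;                  /* used by any colored neighbor */
            for (int k = adj_ptr[i]; k < adj_ptr[i + 1]; k++)
                if (color[adj_idx[k]] == c) {
                    conflict = 1;
                    c++;
                    break;
                }
        }
        color[i] = c;
        if (c + 1 > ncolors)
            ncolors = c + 1;
    }
    return ncolors;
}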