
Gauss-Seidel

Common iterative methods for general matrices include the Gauss-Jacobi and Gauss-Seidel methods, while conjugate gradient methods exist for positive definite matrices. Critical in the choice and use of iterative methods is the convergence of the technique. Gauss-Jacobi uses only the values calculated in the previous iteration, while Gauss-Seidel requires that the most recently calculated values be used in the present iteration. The Gauss-Seidel method generally has better convergence than the Gauss-Jacobi method, although for dense matrices the Gauss-Seidel method is inherently sequential. Better convergence means fewer iterations and a faster overall algorithm, as long as the strict precedence rules can be observed. The convergence of the iterative method must be examined for the application, along with algorithm performance, to ensure that a useful solution to $Ax = b$ can be found.
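For illustration only, a minimal dense sketch of the two update rules in C (the function and array names are ours, not from the text): Gauss-Jacobi builds every new component from the previous iterate, while Gauss-Seidel overwrites $x$ in place so later rows immediately see the newest values.

/* Gauss-Jacobi sweep: every component is computed from the previous iterate x_old. */
void jacobi_sweep(int n, const double *a, const double *b,
                  const double *x_old, double *x_new)
{
    for (int i = 0; i < n; i++) {
        double s = b[i];
        for (int j = 0; j < n; j++)
            if (j != i) s -= a[i*n + j] * x_old[j];   /* previous iteration only */
        x_new[i] = s / a[i*n + i];
    }
}

/* Gauss-Seidel sweep: x is updated in place, so row i uses the new x[j] for j < i. */
void gauss_seidel_sweep(int n, const double *a, const double *b, double *x)
{
    for (int i = 0; i < n; i++) {
        double s = b[i];
        for (int j = 0; j < n; j++)
            if (j != i) s -= a[i*n + j] * x[j];       /* newest values where available */
        x[i] = s / a[i*n + i];
    }
}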

The Gauss-Seidel method can be written as:

\[
x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j<i} a_{ij}\, x_j^{(k+1)} - \sum_{j>i} a_{ij}\, x_j^{(k)} \right)
\]

where:

$x_i^{(k)}$ is the $i$th unknown in $x$ during the $k$th iteration, $i = 1, \ldots, n$, and $k = 0, 1, \ldots$,

$x_i^{(0)}$ is the initial guess for the $i$th unknown in $x$,

$a_{ij}$ is the coefficient of $A$ in the $i$th row and $j$th column,

$b_i$ is the $i$th value in $b$.
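As a small illustration (this system is not taken from the text), consider a diagonally dominant $2 \times 2$ system with initial guess $x^{(0)} = (0, 0)^T$; the update of $x_2$ already uses the freshly computed $x_1^{(1)}$:

\[
A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}, \quad
b = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \qquad
x_1^{(1)} = \frac{1 - (1)(0)}{4} = 0.25, \quad
x_2^{(1)} = \frac{2 - (2)(0.25)}{3} = 0.5,
\]

and the iterates continue toward the exact solution $x = (0.1,\ 0.6)^T$.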

or

\[
x^{(k+1)} = (D + L)^{-1} \left( b - U x^{(k)} \right)
\]

where:

$x^{(k)}$ is the $k$th iterative solution to $x$, $k = 0, 1, \ldots$,

$x^{(0)}$ is the initial guess at $x$,

$D$ is the diagonal of $A$,

$L$ is the strictly lower triangular portion of $A$,

$U$ is the strictly upper triangular portion of $A$,

$b$ is the right-hand-side vector.
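The matrix form follows from splitting $A$ into these three parts and grouping the newest values on the left-hand side:

\[
A = D + L + U, \qquad (D + L)\, x^{(k+1)} = b - U x^{(k)}
\quad\Longrightarrow\quad
x^{(k+1)} = (D + L)^{-1}\left( b - U x^{(k)} \right).
\]

Applying $(D + L)^{-1}$ amounts to a forward substitution, which is precisely where the sequential dependence of the Gauss-Seidel method arises.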

The component-wise representation above is used in the development of the parallel algorithm, while the equivalent matrix-based representation is used below in discussions of available parallelism.

We present a general sequential sparse Gauss-Seidel algorithm in the figure below. Only the non-zero values in $A$ are used when calculating $x_i^{(k+1)}$. This algorithm calculates a constant number of iterations before checking for convergence. For very sparse matrices, such as power systems matrices, the computational complexity of the section of the algorithm that checks convergence is $O(n)$, nearly the same as that of a new iteration of $x^{(k+1)}$. Consequently, we perform multiple iterations between convergence checks.

 
Figure: Sparse Gauss-Seidel Algorithm 
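A minimal C sketch of such an algorithm, assuming compressed sparse row (CSR) storage with a nonzero diagonal in every row (the names and the CSR format are our assumptions, not necessarily those of the figure). It performs a fixed number of iterations between convergence checks, as described above.

#include <math.h>

/* Sparse Gauss-Seidel sketch: row_ptr/col_idx/val describe A in CSR form,
   b is the right-hand side, and x holds the current iterate (updated in place). */
void sparse_gauss_seidel(int n, const int *row_ptr, const int *col_idx,
                         const double *val, const double *b, double *x,
                         int iters_per_check, double tol, int max_checks)
{
    for (int check = 0; check < max_checks; check++) {
        for (int it = 0; it < iters_per_check; it++) {
            for (int i = 0; i < n; i++) {
                double sum = b[i], diag = 1.0;
                for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++) {
                    if (col_idx[k] == i) diag = val[k];
                    else sum -= val[k] * x[col_idx[k]];  /* uses newest x_j */
                }
                x[i] = sum / diag;
            }
        }
        /* convergence check: infinity norm of the residual r = b - A x */
        double rmax = 0.0;
        for (int i = 0; i < n; i++) {
            double r = b[i];
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
                r -= val[k] * x[col_idx[k]];
            if (fabs(r) > rmax) rmax = fabs(r);
        }
        if (rmax < tol) return;
    }
}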

It is very difficult to determine whether one-step iterative methods, like the Gauss-Seidel method, converge for general matrices. Nevertheless, for some classes of matrices, it is possible to prove that the Gauss-Seidel method converges and yields the unique solution of $Ax = b$ for any initial starting vector $x^{(0)}$. Reference [23] proves theorems showing that this holds for both diagonally dominant and symmetric positive definite matrices. These theorems establish that the Gauss-Seidel method will converge for these matrix types; however, they give no indication of the rate of convergence, which is data dependent.
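As a practical illustration of one of these sufficient conditions, a small C routine (ours, again assuming CSR storage) can test strict row diagonal dominance before the Gauss-Seidel method is selected:

#include <math.h>

/* Returns 1 if the CSR matrix is strictly row diagonally dominant,
   a sufficient (but not necessary) condition for Gauss-Seidel convergence. */
int is_diagonally_dominant(int n, const int *row_ptr, const int *col_idx,
                           const double *val)
{
    for (int i = 0; i < n; i++) {
        double diag = 0.0, off = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++) {
            if (col_idx[k] == i) diag = fabs(val[k]);
            else                 off += fabs(val[k]);
        }
        if (diag <= off) return 0;
    }
    return 1;
}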

It may be possible to improve the convergence rate of iterative methods such as Gauss-Seidel by using preconditioning techniques such as incomplete LU factorization. This preconditioning technique performs operations similar to an LU factorization, but no calculations are performed that would generate fill-in [23]. The use of preconditioners with parallel Gauss-Seidel algorithms raises many questions over and above the convergence improvement that may be possible for power systems network matrices. These questions concern compromises in available parallelism, effective load balancing, and matrix-ordering priorities between the two distinct algorithms. Implementation and testing of parallel preconditioners for parallel Gauss-Seidel linear solvers has not been examined in this work; however, given their potential impact on parallel iterative solver performance, such algorithms should be examined in future research.
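For concreteness, the following is a generic incomplete LU (ILU(0)) sketch in C for a dense-stored matrix, not the author's implementation: it follows the usual LU elimination loops but discards every update that would create a nonzero outside the original sparsity pattern, so no fill-in is generated.

#include <stdlib.h>

/* ILU(0) sketch: factor the n x n row-major matrix `a` in place,
   rejecting any update that falls outside the original nonzero pattern. */
void ilu0(double *a, int n)
{
    /* record the original nonzero pattern so fill-in can be rejected */
    char *nz = malloc((size_t)n * n);
    for (int i = 0; i < n * n; i++)
        nz[i] = (a[i] != 0.0);

    for (int k = 0; k < n - 1; k++) {
        for (int i = k + 1; i < n; i++) {
            if (!nz[i*n + k]) continue;          /* multiplier would be fill-in */
            a[i*n + k] /= a[k*n + k];            /* store multiplier l_ik */
            for (int j = k + 1; j < n; j++) {
                if (!nz[i*n + j]) continue;      /* drop updates that create fill-in */
                a[i*n + j] -= a[i*n + k] * a[k*n + j];
            }
        }
    }
    free(nz);
}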






David P. Koester
Sun Oct 22 17:27:14 EDT 1995