Next: Available Parallelism Up: A Parallel Gauss-Seidel Algorithm Previous: Power System Applications

The Gauss-Seidel Method

We are considering an iterative solution to the linear system

A x = b,    (1)

where A is an n \times n sparse matrix, x and b are vectors of length n, and we are solving for x. Iterative solvers are an alternative to direct methods, which attempt to calculate an exact solution to the system of equations. Iterative methods instead find a solution by repeatedly solving the linear system using successive approximations to the vector x. Iterations continue until the solution is within a predetermined acceptable bound on the error.

Common iterative methods for general matrices include the Gauss-Jacobi and Gauss-Seidel methods, while conjugate gradient methods exist for positive definite matrices. Critical to the choice and use of iterative methods is the convergence of the technique. Gauss-Jacobi uses only values from the previous iteration, while Gauss-Seidel requires that the most recently calculated values be used. The Gauss-Seidel method generally has better convergence than the Gauss-Jacobi method, although for dense matrices the Gauss-Seidel method is inherently sequential. Better convergence means fewer iterations and a faster overall algorithm, as long as the strict precedence rules can be observed. The convergence of the iterative method must be examined for the application, along with algorithm performance, to ensure that a useful solution to A x = b can be found.

The Gauss-Seidel method can be written as:

x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \right),    (2)

where:

x_i^{(k)} is the i-th unknown in x during the k-th iteration, i = 1, \ldots, n and k = 0, 1, \ldots,

x_i^{(0)} is the initial guess for the i-th unknown in x,

a_{ij} is the coefficient of A in the i-th row and j-th column,

b_i is the i-th value in b.
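The element-wise update in equation 2 can be sketched in plain Python for a small dense system (an illustration only; the paper's algorithm operates on sparse matrices, and the matrix and vectors below are assumed example data):

```python
# Element-wise Gauss-Seidel sweeps per equation 2: each unknown x_i is
# updated in place, so entries j < i already hold iteration k+1 values
# while entries j > i still hold iteration k values.

def gauss_seidel(A, b, x0, iterations):
    """Return the iterate x^(k) after the given number of sweeps."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        for i in range(n):
            # Because x is updated in place, this sum mixes new (j < i)
            # and old (j > i) values, exactly as equation 2 requires.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant example, so convergence is guaranteed;
# the exact solution of this 2x2 system is x = [1, 2].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = gauss_seidel(A, b, [0.0, 0.0], 25)
```

Note that each sweep reuses the freshly updated entries immediately, which is what distinguishes this loop from a Gauss-Jacobi sweep over the same data.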

or

x^{(k+1)} = (D - L)^{-1} \left( U x^{(k)} + b \right),    (3)

where:

x^{(k)} is the k-th iterative solution to x, k = 0, 1, \ldots,

x^{(0)} is the initial guess at x,

D is the diagonal of A,

L is the negative of the strictly lower triangular portion of A,

U is the negative of the strictly upper triangular portion of A,

b is the right-hand-side vector.
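A small numerical check (our own illustration, on an assumed 2x2 system) shows that one application of the matrix form in equation 3 amounts to a forward substitution with the lower triangular matrix (D - L):

```python
# One Gauss-Seidel sweep in matrix form: solve (D - L) y = U x + b,
# using the splitting A = D - L - U (L and U are the negatives of the
# strictly lower / strictly upper triangular parts of A).

def split(A):
    """Return D, L, U with A = D - L - U."""
    n = len(A)
    D = [[A[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
    L = [[-A[i][j] if i > j else 0.0 for j in range(n)] for i in range(n)]
    U = [[-A[i][j] if i < j else 0.0 for j in range(n)] for i in range(n)]
    return D, L, U

def matrix_sweep(A, b, x):
    """Compute x^(k+1) = (D - L)^-1 (U x^(k) + b) by forward substitution."""
    n = len(A)
    D, L, U = split(A)
    rhs = [sum(U[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]
    y = [0.0] * n
    for i in range(n):  # (D - L) is lower triangular, so substitute forward
        s = sum((D[i][j] - L[i][j]) * y[j] for j in range(i))
        y[i] = (rhs[i] - s) / (D[i][i] - L[i][i])
    return y

A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x1 = matrix_sweep(A, b, [0.0, 0.0])
```

Starting from the zero vector, this sweep produces the same iterate as the element-wise update of equation 2 (x_1 = 6/4 = 1.5, then x_2 = (7 - 1.5)/3), which is why the two representations are interchangeable in the discussion that follows.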

The representation in equation 2 is used in the development of the parallel algorithm, while the equivalent matrix-based representation in equation 3 is used below in discussions of available parallelism.

We present a general sequential sparse Gauss-Seidel algorithm in figure 1. This algorithm calculates a constant number of iterations before checking for convergence. For very sparse matrices, such as power systems matrices, the computational complexity of the section of the algorithm that checks convergence is O(n), nearly the same as that of a new iteration of x^{(k+1)}. Consequently, we perform multiple iterations between convergence checks. Only non-zero values in A are used when calculating x^{(k+1)}.
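The structure of such a loop can be sketched as follows (a plain-Python analogue of the algorithm described above, not the authors' code; the row-wise storage scheme, parameter names, and 3x3 example system are our own assumptions):

```python
# Sequential sparse Gauss-Seidel with periodic convergence checks.
# Each row i is stored as a list of (column, value) pairs for its
# off-diagonal non-zeros, so only non-zero entries of A are touched.

def sparse_gauss_seidel(rows, diag, b, x, check_every=5, tol=1e-10,
                        max_iters=1000):
    """rows[i]: off-diagonal non-zeros of row i as (j, a_ij) pairs;
    diag[i]: a_ii. Returns the approximate solution x."""
    n = len(b)
    for it in range(1, max_iters + 1):
        for i in range(n):
            s = sum(a_ij * x[j] for j, a_ij in rows[i])
            x[i] = (b[i] - s) / diag[i]
        # The residual check costs about as much as one sweep,
        # so it is only performed every `check_every` iterations.
        if it % check_every == 0:
            r = max(abs(b[i] - diag[i] * x[i]
                        - sum(a_ij * x[j] for j, a_ij in rows[i]))
                    for i in range(n))
            if r < tol:
                break
    return x

# Diagonally dominant tridiagonal example with exact solution [1, 1, 1].
rows = [[(1, 1.0)], [(0, 1.0), (2, 1.0)], [(1, 1.0)]]
diag = [4.0, 4.0, 4.0]
b = [5.0, 6.0, 5.0]
x = sparse_gauss_seidel(rows, diag, b, [0.0, 0.0, 0.0])
```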

 
Figure 1: Sparse Gauss-Seidel Algorithm 

It is very difficult to determine whether one-step iterative methods, like the Gauss-Seidel method, converge for general matrices. Nevertheless, for some classes of matrices, it is possible to prove that Gauss-Seidel methods do converge and yield the unique solution x for A x = b with any initial starting vector x^{(0)}. Reference [4] proves theorems to show that this holds for both diagonally dominant and symmetric positive definite matrices. The proofs of these theorems state that the Gauss-Seidel method will converge for these matrix types; however, they give no indication of the rate of convergence.

Symmetric sparse matrices can be represented by graphs, with non-zero off-diagonal elements a_{ij} corresponding to undirected edges in the graph [6]. Ordering a symmetric sparse matrix is actually little more than changing the labels associated with nodes in an undirected graph. Modifying the ordering of a sparse matrix is simple to perform using a permutation matrix P, consisting of zeros and ones, that simply generates elementary row and column exchanges. Applying the permutation matrix to the original linear system in equation 1 yields the linear system

(P A P^{T}) (P x) = P b,    (4)

that is solved using the parallel Gauss-Seidel algorithm. While ordering the matrix greatly simplifies accessing the parallelism inherent within the matrix structure, ordering can have an effect on convergence [4]. In section 9, we present empirical data to show that, in spite of the ordering to yield parallelism, convergence appears to be rapid for positive definite power systems load-flow matrices.
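The symmetric reordering can be illustrated concretely (our own sketch, with an assumed 3x3 tridiagonal example; the permutation is encoded as an index list rather than an explicit zero/one matrix):

```python
# Symmetric permutation of a linear system: applying P to A x = b
# yields (P A P^T)(P x) = P b, i.e. simultaneous row and column
# exchanges, which merely relabels the nodes of the matrix graph.

perm = [2, 0, 1]  # row i of P has its single 1 in column perm[i]

def permute_system(A, b, perm):
    """Return P A P^T and P b for the permutation encoded by `perm`."""
    n = len(b)
    PA = [[A[perm[i]][perm[j]] for j in range(n)] for i in range(n)]
    Pb = [b[perm[i]] for i in range(n)]
    return PA, Pb

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [6.0, 12.0, 14.0]  # chosen so that A x = b has solution x = [1, 2, 3]
PA, Pb = permute_system(A, b, perm)
# PA is symmetric again, so the undirected-graph structure is preserved;
# solving PA y = Pb gives y = P x, a relabelling of the original unknowns.
```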






David P. Koester
Sun Oct 22 15:29:26 EDT 1995