
Organization of this Thesis

This thesis is organized as follows. In chapter 2, we describe the electrical power system applications that are the basis for this work. We compare the size and sparsity of five sample power system matrices to other sample matrices frequently used to compare the performance of parallel sparse linear solvers. Pseudo-images of the original sparse power system matrices are provided for comparison with pseudo-images of the same matrices after ordering into block-diagonal-bordered form.
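
For reference, the block-diagonal-bordered form referred to above has the general structure sketched below. Three independent diagonal blocks are shown only for illustration, since the actual number of blocks depends on the network partitioning, and the block labels used here are generic notation rather than the notation of later chapters.

\[
A =
\left[
\begin{array}{cccc}
A_{1} &       &       & B_{1} \\
      & A_{2} &       & B_{2} \\
      &       & A_{3} & B_{3} \\
C_{1} & C_{2} & C_{3} & A_{L}
\end{array}
\right]
\]

Here each A_i is a mutually independent diagonal block, the B_i and C_i form the borders, and A_L is the last diagonal block that couples the partitions.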

In chapter 3, we briefly review techniques for both direct and iterative linear solvers. We discuss the direct techniques, factorization and forward reduction/backward substitution, and the Gauss-Seidel iterative method. We review the extensive literature on general parallel LU and Choleski factorization algorithms and on parallel iterative methods. This is followed in chapter 4 by a theoretical derivation of the available parallelism in both direct methods and the Gauss-Seidel iterative method when solving block-diagonal-bordered sparse matrices.
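
As a reminder of the standard formulations underlying these techniques, the direct methods solve the sparse linear system Ax = b through factorization followed by triangular solves,

\[
A = LU, \qquad Ly = b \quad \mbox{(forward reduction)}, \qquad Ux = y \quad \mbox{(backward substitution)},
\]

while the Gauss-Seidel method updates the unknowns in place at each iteration,

\[
x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j<i} a_{ij} x_j^{(k+1)} - \sum_{j>i} a_{ij} x_j^{(k)} \right) .
\]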

Paramount to exploiting the advantages of either the parallel sparse block-diagonal-bordered direct methods or the parallel sparse block-diagonal-bordered Gauss-Seidel method is ordering the irregular sparse power system matrices into this form in a manner that balances the workload among processors. In chapter 5, we describe the preprocessing phase used to order matrices into block-diagonal-bordered form with a uniformly distributed processing load. Both solver types require a three-step process: network partitioning, determining the workload in each partition, and an explicit load-balancing step. In this chapter, we introduce pseudo-factorization, and we review the minimum degree ordering and pigeon-hole load-balancing algorithms.
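
To make the load-balancing step concrete, the sketch below illustrates a greedy, pigeon-hole style assignment of partitions to processors in order of decreasing workload; the function name, data layout, and workload estimates are hypothetical and are not taken from the ordering code described in this thesis.

import heapq

def balance_partitions(workloads, num_processors):
    """Greedy (pigeon-hole style) load-balancing sketch.

    workloads      -- per-partition workload estimates
                      (e.g., operation counts from pseudo-factorization)
    num_processors -- number of processors to balance across
    Returns a list mapping each partition index to a processor.
    """
    # Min-heap of (current load, processor id).
    heap = [(0.0, p) for p in range(num_processors)]
    heapq.heapify(heap)

    assignment = [None] * len(workloads)
    # Place the heaviest partitions first, always on the least-loaded processor.
    for part in sorted(range(len(workloads)),
                       key=lambda i: workloads[i], reverse=True):
        load, proc = heapq.heappop(heap)
        assignment[part] = proc
        heapq.heappush(heap, (load + workloads[part], proc))
    return assignment

For instance, balance_partitions([40, 10, 30, 20, 25], 2) distributes the five hypothetical partitions so that the two processors receive roughly balanced workloads of 60 and 65.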

In chapter 6, we describe the Thinking Machines CM-5 implementations of our parallel block-diagonal-bordered sparse LU, Choleski, and Gauss-Seidel algorithms. Analysis of the performance of the ordering techniques and the parallel implementations is presented in chapter 7 for actual power system network matrices from the Harwell-Boeing collection, the Electric Power Research Institute (EPRI), and an electrical utility, the Niagara Mohawk Power Corporation. Comparisons of the direct and iterative methods are also presented in that chapter. Predictions of performance on future parallel architectures are presented in chapter 8. We present our conclusions concerning parallel block-diagonal-bordered linear solvers for electrical power system applications in chapter 9.

In appendix A, we provide a description of selected terminology used throughout this work. In appendices B through D, we present detailed discussions of the algorithms used in the preprocessing phase: the node-tearing algorithm developed to order matrices into block-diagonal-bordered form, the minimum degree ordering algorithm, and the graph multi-coloring algorithm developed specifically to order the last diagonal block for the parallel Gauss-Seidel algorithm. In appendix E, we present pseudo-code descriptions of the parallel algorithms described in chapter 6, and in appendix F, we present tables of statistics illustrating the performance of our diakoptics-based power system network matrix ordering technique.





