
Conclusions

In this thesis, we have presented research into parallel linear solvers for block-diagonal-bordered sparse matrices. The block-diagonal-bordered form identifies parallelism that can be exploited by both direct and iterative linear solvers. Direct methods obtain the exact solution to a linear system in a finite number of operations, whereas iterative methods calculate sequences of approximations that may or may not converge to the solution. To compare the performance of parallel sparse direct and iterative linear solvers on power systems network applications, we have developed parallel block-diagonal-bordered sparse direct methods based on LU factorization and Choleski factorization algorithms, and we have developed a parallel block-diagonal-bordered sparse iterative linear solver based on the Gauss-Seidel method. Because we are examining parallel sparse linear solvers for embedded power systems applications, our direct solver implementations also require parallel forward reduction and backward substitution algorithms. The parallel block-diagonal-bordered sparse linear solvers for power systems network applications have proven to be quite sensitive to granularity, that is, to the computation-to-communications ratio of the calculations.
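For reference, the block-diagonal-bordered structure underlying all of these solvers can be sketched as follows, where the diagonal blocks $A_1,\dots,A_p$ correspond to mutually independent partitions, the border blocks $B_i$ and $C_i$ couple each partition to the cut nodes, and the last diagonal block $A_c$ holds the coupling equations (the block symbols here are illustrative and need not match the notation used elsewhere in the thesis):

\[
A \;=\;
\begin{bmatrix}
A_1 &     &        &     & B_1    \\
    & A_2 &        &     & B_2    \\
    &     & \ddots &     & \vdots \\
    &     &        & A_p & B_p    \\
C_1 & C_2 & \cdots & C_p & A_c
\end{bmatrix},
\qquad
A x = b .
\]

In outline, each diagonal block $A_i$ and its border can be factored, or relaxed in the iterative case, independently on a separate processor; the last block, which accumulates update contributions from every partition, is where interprocessor communication concentrates, which is why granularity matters so much to performance.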

This research has focused on block-diagonal-bordered form matrices, which are generated by tearing networks into mutually independent partitions using diakoptic techniques. We have described how these power-systems-analysis-oriented diakoptic node-tearing techniques relate to the state of the art in parallel sparse matrix algorithms. Using node-tearing-based matrix ordering techniques, we have been able to partition power systems network matrices into highly parallel subgraphs that can be further ordered to balance workloads and to provide parallelism throughout all segments of the calculations.
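As a minimal illustration of the structural idea (not the thesis implementation, and omitting the heuristic that actually selects the coupling nodes and balances partition sizes), the sketch below shows how, once a coupling-node set has been chosen, the torn network decomposes into mutually independent partitions: the connected components of the graph with the coupling nodes removed. The function and variable names are hypothetical.

def independent_partitions(adjacency, coupling_nodes):
    """Return the mutually independent node partitions left after tearing.

    adjacency      -- dict mapping each node to an iterable of neighboring nodes
    coupling_nodes -- nodes assigned to the border (the last diagonal block)
    """
    coupling = set(coupling_nodes)
    unvisited = set(adjacency) - coupling
    partitions = []
    while unvisited:
        # Grow one connected component with a stack-based traversal.
        seed = unvisited.pop()
        component, stack = {seed}, [seed]
        while stack:
            node = stack.pop()
            for neighbor in adjacency[node]:
                if neighbor in unvisited:   # skips coupling nodes and visited nodes
                    unvisited.discard(neighbor)
                    component.add(neighbor)
                    stack.append(neighbor)
        partitions.append(component)
    return partitions

# Toy example: tearing at node 4 leaves two mutually independent partitions,
# {0, 1, 2} and {3, 5, 6, 7}, each of which becomes one diagonal block.
if __name__ == "__main__":
    adjacency = {
        0: [1, 2, 4], 1: [0, 2], 2: [0, 1, 4],
        3: [4, 5],    4: [0, 2, 3, 5, 6],
        5: [3, 4, 6], 6: [4, 5, 7], 7: [6],
    }
    print(independent_partitions(adjacency, coupling_nodes={4}))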
