Discussion of sparse matrix solvers in the IBM ESSL
David Koester

1.) Which existing routines can be parallelized for a distributed MIMD coarse-grain environment?

Parallel sparse matrix solvers must be included in any reasonable engineering or scientific library, because the solution of sparse linear systems of equations dominates a large share of scientific and engineering computation. Unfortunately, the existing routines that solve linear systems directly are very general and neglect some significant aspects of solving sparse systems, such as ordering the matrix to minimize fill-in and, consequently, the overall amount of calculation. The general sparse LU factorization routines use Markowitz counts with threshold pivoting to minimize fill-in and calculations (see the first sketch after item 5); the direct symmetric solvers, however, say nothing about techniques to maintain numerical stability (pivoting) or to minimize calculations (ordering).

Parallel library routines that supply the capability to perform sparse matrix calculations should definitely be included in a library for distributed MIMD coarse-grain multiprocessors. In my opinion, effort should not be expended to parallelize the existing sparse solvers; rather, the functionality, and possibly the function-call structure, should be maintained while existing research codes are utilized. The best general multiprocessor codes for direct solvers have been developed by Michael Heath et al. and Seshadri Venugopal et al. Other specialized parallel sparse matrix solvers, such as my parallel block-diagonal-bordered solver, could also be included.

One area where research may be warranted is an input-data-sensitive sparse matrix solver that intelligently selects the proper algorithm after examining the data. It would not be difficult to detect various matrix structures and select the appropriate algorithm to minimize calculations while maximizing multiprocessor performance; the second sketch after item 5 illustrates this kind of structure detection.

2.) Application class or classes that would rely on those routines

As stated above, the solution of sparse linear systems of equations dominates a large share of scientific and engineering computation; these routines would therefore be relied on across that entire range of applications.

3.) Machines for which those routines are currently available

Both general and application-dependent research codes exist for many distributed-memory and shared-memory multiprocessors, including the general direct-solver codes of Heath et al. and Venugopal et al. mentioned above.

4.) User interface and usability of the routines

The user interfaces for these routines are somewhat limited: only a few input data formats are accepted, and presently there are separate routines for each class of input format. It may be advantageous to expand the applicability of these routines by supporting more user input formats. Other sparse matrix solver libraries simplify the choice for the user by providing a single solver per class of problems, with input preprocessing that converts the data to a single canonical form (the third sketch after item 5 shows such a conversion).

5.) Which routines would be used by real people to do real problems

Again, because the solution of sparse linear systems dominates so much of scientific and engineering computation, the parallel sparse solvers would see heavy use on real problems.
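The following is a minimal sketch of Markowitz pivot selection with threshold pivoting, the technique the general sparse LU routines use. It works on a dense-stored matrix purely for clarity, whereas production sparse codes maintain row and column nonzero counts incrementally; the function name and code are illustrative and are not ESSL entry points.

    #include <math.h>

    /* Choose a pivot from the active submatrix (rows/columns k..n-1 of a
     * row-major n x n matrix).  The Markowitz count (r_i - 1)*(c_j - 1)
     * estimates the fill-in a pivot would create; the threshold test
     * |a_ij| >= u * max_m |a_mj| (0 < u <= 1) bounds element growth so
     * that numerical stability is not sacrificed for sparsity.
     * Returns 0 and the pivot position in (*pi, *pj), or -1 if the
     * active submatrix is structurally singular. */
    static int markowitz_pivot(int n, int k, const double *a,
                               double u, int *pi, int *pj)
    {
        long best = -1;
        for (int j = k; j < n; j++) {
            int cj = 0;                 /* nonzeros in active column j */
            double colmax = 0.0;        /* largest magnitude in column j */
            for (int i = k; i < n; i++) {
                double v = fabs(a[i * n + j]);
                if (v != 0.0) cj++;
                if (v > colmax) colmax = v;
            }
            if (colmax == 0.0) continue;
            for (int i = k; i < n; i++) {
                double v = fabs(a[i * n + j]);
                if (v == 0.0 || v < u * colmax) continue;  /* fails threshold */
                int ri = 0;             /* nonzeros in active row i */
                for (int m = k; m < n; m++)
                    if (a[i * n + m] != 0.0) ri++;
                long cost = (long)(ri - 1) * (cj - 1);     /* Markowitz count */
                if (best < 0 || cost < best) {
                    best = cost;
                    *pi = i;
                    *pj = j;
                }
            }
        }
        return best < 0 ? -1 : 0;
    }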
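Next, a sketch of the kind of structure inspection an input-data-sensitive driver could perform before choosing a factorization path. The matrix pattern is assumed to be in compressed sparse row (CSR) form, the dispatch thresholds are arbitrary illustrations, and the solver names printed are hypothetical placeholders rather than routines from any existing library.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int n;             /* matrix order */
        const int *rowptr; /* n+1 row offsets into colind */
        const int *colind; /* column index of each stored nonzero */
    } csr_pattern;

    /* Half-bandwidth: the largest |i - j| over all stored nonzeros. */
    static int half_bandwidth(const csr_pattern *A)
    {
        int bw = 0;
        for (int i = 0; i < A->n; i++)
            for (int p = A->rowptr[i]; p < A->rowptr[i + 1]; p++) {
                int d = abs(i - A->colind[p]);
                if (d > bw) bw = d;
            }
        return bw;
    }

    /* Structural symmetry: every stored (i,j) has a matching (j,i). */
    static int structurally_symmetric(const csr_pattern *A)
    {
        for (int i = 0; i < A->n; i++)
            for (int p = A->rowptr[i]; p < A->rowptr[i + 1]; p++) {
                int j = A->colind[p], found = 0;
                for (int q = A->rowptr[j]; q < A->rowptr[j + 1]; q++)
                    if (A->colind[q] == i) { found = 1; break; }
                if (!found) return 0;
            }
        return 1;
    }

    /* Hypothetical dispatch: pick the cheapest applicable algorithm. */
    static void choose_solver(const csr_pattern *A)
    {
        if (4 * half_bandwidth(A) < A->n)
            puts("dispatch: banded factorization");
        else if (structurally_symmetric(A))
            puts("dispatch: fill-reducing ordering + symmetric factorization");
        else
            puts("dispatch: general sparse LU with threshold pivoting");
    }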
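Finally, a sketch of the input preprocessing mentioned under item 4: coordinate-format (COO) triples are converted to a single canonical compressed sparse row (CSR) form, so that one solver entry point can sit behind several user-visible input formats. The routine name is illustrative; a library version would also sort column indices within each row and merge duplicate entries.

    #include <stdlib.h>

    /* Convert nnz coordinate entries (row[], col[], val[]) of an n x n
     * matrix into CSR arrays rowptr (length n+1), colind and csrval
     * (length nnz).  Assumes no duplicate entries; column indices within
     * a row keep their input order.  Returns 0 on success. */
    static int coo_to_csr(int n, int nnz,
                          const int *row, const int *col, const double *val,
                          int *rowptr, int *colind, double *csrval)
    {
        for (int i = 0; i <= n; i++)
            rowptr[i] = 0;
        for (int k = 0; k < nnz; k++)
            rowptr[row[k] + 1]++;               /* count entries per row */
        for (int i = 0; i < n; i++)
            rowptr[i + 1] += rowptr[i];         /* prefix sums -> offsets */

        int *next = malloc(n * sizeof *next);   /* next free slot per row */
        if (next == NULL) return -1;
        for (int i = 0; i < n; i++)
            next[i] = rowptr[i];
        for (int k = 0; k < nnz; k++) {
            int p = next[row[k]]++;
            colind[p] = col[k];
            csrval[p] = val[k];
        }
        free(next);
        return 0;
    }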
6.) Distinguish routines which would be used on a node only (e.g., calculate Hermite polynomials) and which would be used in parallel form (FFT)

Parallel sparse solvers would be used in parallel form, although the limited parallelism available in the data often limits the number of processors that can be utilized efficiently.

7.) Also indicate current state of art (who has done best work) in parallelization

The best general multiprocessor codes for direct solvers have been developed by Michael Heath et al. and Seshadri Venugopal et al. For specialized applications where the block-diagonal-bordered form is appropriate, the solvers developed in my research may be worth including in a general library; the sketch below shows where the parallelism in that form comes from.
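As a minimal illustration of the parallelism in the block-diagonal-bordered form, the sketch below factors the independent diagonal blocks; in a distributed MIMD setting each block would go to its own processor, and only the border update and the last diagonal block remain as the (smaller) sequential part. The code assumes equally sized dense blocks and unpivoted LU purely for brevity, and none of it is taken from any published solver.

    /* Block-diagonal-bordered structure:
     *
     *     [ A_1            B_1 ]
     *     [      A_2       B_2 ]
     *     [           A_3  B_3 ]
     *     [ C_1  C_2  C_3  D   ]
     *
     * The diagonal blocks A_p share no rows or columns, so they can be
     * factored independently.  Dense unpivoted LU keeps the sketch short. */

    /* In-place dense LU (no pivoting) on one nb x nb row-major block. */
    static void block_lu(int nb, double *A)
    {
        for (int k = 0; k < nb; k++)
            for (int i = k + 1; i < nb; i++) {
                A[i * nb + k] /= A[k * nb + k];            /* multiplier */
                for (int j = k + 1; j < nb; j++)
                    A[i * nb + j] -= A[i * nb + k] * A[k * nb + j];
            }
    }

    /* This loop is the natural distributed-MIMD dimension: one diagonal
     * block per processor, with no communication between iterations. */
    static void factor_diagonal_blocks(int nblocks, int nb, double **blocks)
    {
        for (int p = 0; p < nblocks; p++)
            block_lu(nb, blocks[p]);
    }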