Parallel Optimization and Automatic Differentiation

John Dennis (director), Christian Bischof, Robert Bixby, Alan Carle, George Corliss, John Elder, Robert Michael Lewis, Jorge More, Danny Sorensen, Richard Tapia, Virginia Torczon, Steve Wright, and Xiaodong Zhang

Optimization calculations are a staple of numerical computation. They are used by industry in planning, design, marketing, and distribution. Modeling rarely fails to involve at least one optimization stage, and since the point of modeling is either control or improvement of the process being modeled, a second optimization stage is often required. The CRPC Parallel Optimization group is developing parallel algorithms motivated by applications for optimization calculations on parallel machines.

The overall goal of the Parallel Optimization project is to learn to use parallel computer systems to solve optimization problems of interest to industry and academia in which lack of computing speed or power is the major bottleneck. Major emphasis is on linear and integer programming and on multidisciplinary design optimization.

One project is the development of an environment for large-scale optimization problems that requires the user to provide only code to evaluate a partially separable function. This novel approach eliminates the need to provide the gradient and sparsity pattern; all other approaches require the user to provide the gradient and (for a Newton method) the sparsity pattern. This will provide a unique capability that is not available elsewhere.
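To illustrate the idea (this is a toy Python sketch, not the group's environment), a partially separable function is a sum of element functions, each depending on only a few variables. Given just the element functions and their index sets, the sparse gradient can be assembled automatically with no hand-coded gradient or global sparsity pattern from the user; the sketch uses cheap per-element finite differences, where the environment described above would use automatic differentiation:

```python
# Illustrative sketch: f(x) = sum_i f_i(x[S_i]), where each element
# function f_i touches only a few variables. The user supplies only
# the element functions and their index sets; the gradient of the sum
# is assembled automatically, differentiating each element only with
# respect to the variables it actually depends on.

def gradient(elements, x, h=1e-6):
    """elements: list of (f_i, index_list); returns grad of sum_i f_i."""
    g = [0.0] * len(x)
    for f, idx in elements:
        xs = [x[j] for j in idx]
        f0 = f(xs)
        for k, j in enumerate(idx):   # only the variables f_i uses
            xs[k] += h
            g[j] += (f(xs) - f0) / h  # forward finite difference
            xs[k] -= h
    return g

# Example: f(x) = (x0 - x1)^2 + (x1 - x2)^2  (tridiagonal sparsity)
elements = [
    (lambda v: (v[0] - v[1]) ** 2, [0, 1]),
    (lambda v: (v[0] - v[1]) ** 2, [1, 2]),
]
g = gradient(elements, [1.0, 2.0, 4.0])
```

Because each element sees only its own variables, the cost of the gradient scales with the number of element evaluations rather than with the full dimension of x.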

John Dennis' research is on practical methods for optimization with particular interest in parallel methods for nonlinear optimization of systems described by coupled nonlinear simulations. He is editor-in-chief and founder of the SIAM Journal on Optimization, an advisory editor of Mathematics of Operations Research, and a past co-editor of Mathematical Programming. He has served as chair of the SIAM Activity Group for Optimization, and he served two terms on the SIAM Council. He has done extensive consulting for United States industry, has been a Fulbright Lecturer to Argentina, and has published his algorithm research in several widely disseminated software journals. Dennis has given many short courses and featured addresses at international conferences. He has directed 27 Ph.D. theses, and his students hold positions in industry, government, and academic departments of business, mathematics, computer science, and applied mathematics. His textbook, co-authored with R.B. Schnabel, was published by Prentice-Hall and in Russian translation by Mir.

Linear Programming

Both business and engineering rely heavily on linear programming to find the best solutions subject to linear constraints. The research group's major emphasis in linear programming over the next five years will be the development of parallel interior-point methods. Researchers will test ideas for parallelizing interior-point approaches to linear programming through the use of Arnoldi methods developed in the CRPC Linear Algebra project. Work on simplex approaches will concentrate on parallel procedures that exploit special structure, such as block-angular adaptive procedures. In addition, airline crew scheduling problems will be tested on the Intel Delta using a parallel steepest-edge simplex algorithm for problems with a large column/row ratio and column homogeneity. One problem that uses this algorithm will have 13,000,000 variables.
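For readers unfamiliar with the problem class, the sketch below (illustrative only; it is neither an interior-point nor a steepest-edge code) solves a tiny two-variable linear program by brute-force enumeration of constraint intersections, which is tractable only at toy scale:

```python
# A linear program: maximize c.x subject to A x <= b, x >= 0.
# In 2-D, an optimum (when one exists) lies at a vertex of the
# feasible polygon, so we can check every intersection of two
# constraint lines. Real LP codes use simplex or interior-point
# methods instead of this exhaustive search.
from itertools import combinations

def solve_lp_2d(c, A, b):
    # Append x >= 0 as the constraints -x0 <= 0, -x1 <= 0.
    A = A + [[-1.0, 0.0], [0.0, -1.0]]
    b = b + [0.0, 0.0]
    best, best_x = None, None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                      # parallel constraint lines
        x0 = (b1 * a2[1] - a1[1] * b2) / det   # Cramer's rule
        x1 = (a1[0] * b2 - b1 * a2[0]) / det
        if all(ai[0] * x0 + ai[1] * x1 <= bi + 1e-9
               for ai, bi in zip(A, b)):       # feasibility check
            val = c[0] * x0 + c[1] * x1
            if best is None or val > best:
                best, best_x = val, (x0, x1)
    return best, best_x

# maximize 3 x0 + 2 x1  subject to  x0 + x1 <= 4,  x0 <= 3
val, x = solve_lp_2d([3.0, 2.0], [[1.0, 1.0], [1.0, 0.0]], [4.0, 3.0])
```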

Christian Bischof devised the concept of incremental condition estimation, which enabled novel approaches to the computation of rank-revealing factorizations. He also was one of the developers of the public-domain LAPACK linear algebra library. As part of his CRPC-supported work, he is one of the initiators of the ADIFOR project for a portable Fortran automatic differentiation system. Bischof was the first recipient of the Wilkinson Fellowship in Computational Mathematics, awarded by Argonne National Laboratory in 1988. He is the author of more than 50 articles and technical reports.

Multidisciplinary Optimization

A fundamental challenge for computational engineering is multidisciplinary optimization (MDO): the identification, control, design, or analysis of systems and products described by coupled simulations. It is difficult to overemphasize the economic importance of developing this capability, which will facilitate rapid prototyping of new designs and decrease product time-to-market. Meeting this challenge is crucial to any realization of agile manufacturing, and to greater customer satisfaction through optimization of life-cycle costs, including maintenance and recyclability.

The CRPC effort in MDO involves collaborative research with Boeing and with MADIC, a consortium of major aerospace and automobile manufacturers. The major result to date has been a modeling framework for MDO that has led to a reformulation suitable for a parallel solution on a heterogeneous network of parallel supercomputers and workstations.

Automatic Differentiation

For accurate sensitivity information on solvers, researchers in the Parallel Optimization group are utilizing ADIFOR (Automatic Differentiation in Fortran), the CRPC-developed automatic differentiation tool that is a crucial technology for large-scale optimization problems. ADIFOR has convincingly demonstrated that large Fortran codes can be automatically processed to provide the user with valuable derivative information. As a result, iterative processes can be accelerated, model parameters optimized, and the sensitivity of results with respect to initial conditions and control actions quantified.
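The idea behind automatic differentiation can be sketched compactly. ADIFOR itself is a source transformation tool for Fortran; the Python dual-number class below is only a minimal illustration of forward-mode AD, which propagates exact derivative values alongside function values rather than approximating derivatives by finite differences:

```python
# Minimal forward-mode automatic differentiation via dual numbers:
# each Dual carries a value and its derivative, and every arithmetic
# operation applies the corresponding differentiation rule exactly.
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)   # sum rule
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val),
                    math.cos(self.val) * self.der)        # chain rule

def f(x):
    return x * x + x.sin()   # f(x) = x^2 + sin x, so f'(x) = 2x + cos x

x = Dual(1.0, 1.0)           # seed: d x / d x = 1
y = f(x)                     # y.val = f(1), y.der = f'(1), both exact
```

ADIFOR achieves the same effect by transforming the Fortran source itself, so that large existing codes yield derivative code automatically rather than requiring operator overloading at run time.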

These beneficial effects have been obtained on trajectory optimization programs (Boeing), optimal design of membrane filtration processes (Rice University), three-dimensional Navier-Stokes codes for transonic flow (NASA), car engine lubrication simulations (General Motors), and biomechanical models of complex human organs (National Institute of Standards and Technology).

Robert Bixby is interested in linear, integer, and combinatorial optimization problems, with a particular emphasis on the solution of large-scale, real-world models. Current projects include the study of cutting-plane techniques (both sequential and parallel) for the Traveling Salesman problem, parallel mixed integer programming, and methods for combining simplex and interior-point methods in the solution of large-scale linear programs. He is Editor-in-Chief of Mathematical Programming.

Nonlinear Programming

An important use of nonlinear optimization is to determine parameters in systems described by differential equations. Nonlinear programming has widespread applications in environmental and energy research as well as in engineering design, where it is the core of modern concurrent engineering.

In system modeling applications, the system's future behavior may be predicted by using past observational data of the system's states to determine its characterizing parameters. This first optimization stage is called the inverse problem solution or parameter identification stage. Once this is done, emphasis shifts to the determination of the design or control parameters that optimize some system performance index.
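As a toy illustration of the first stage (all names and data here are hypothetical), the sketch below identifies the decay rate k of the model y' = -k y from synthetic observations by least squares; production codes use Newton-type nonlinear least-squares solvers rather than a parameter scan:

```python
# Stage 1 of the two-stage process above: the inverse problem.
# The toy system y' = -k y has exact solution y(t) = y0 * exp(-k t);
# we recover the characterizing parameter k from "observed" states
# by minimizing the sum of squared residuals.
import math

def residual(k, data, y0=1.0):
    return sum((y0 * math.exp(-k * t) - y) ** 2 for t, y in data)

# Synthetic observations generated with the true parameter k = 0.7.
data = [(t / 4.0, math.exp(-0.7 * t / 4.0)) for t in range(9)]

# Identify k by a simple scan over [0.001, 2.0]; a real solver would
# use derivatives of the residual (e.g., from automatic
# differentiation) in a Newton-type iteration.
k_hat = min((k / 1000.0 for k in range(1, 2001)),
            key=lambda k: residual(k, data))
```

Once k is identified, the second stage would optimize a design or control objective using the fitted model in place of the unknown system.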

The group has applied newly developed codes to determine 4,100 spline coefficients for hydraulic conductivity from pressure data. This effort involved using the Intel Delta to solve a 96,000-variable nonlinear programming problem with 92,000 nonlinear constraints.

Integer Programming

Integer programming work focuses on using the structure of particular classes of hard combinatorial problems to define specific types of exact cutting-plane methods that require little or no branching. Given this sequential procedure and the availability of a powerful parallel machine, researchers will "weaken" or step back from the full sequential cutting-plane procedure enough so that the problem is fragmented and the available processors can be used. This approach combines the advantages of a mathematically powerful algorithm with the capability of machines that will have thousands of processors.

Cutting-plane methods embedded in branch-and-bound techniques have been applied with particular success to the Traveling Salesman Problem (TSP). TSP instances have been solved on five to seven processors of an Intel iPSC/860 hypercube at speeds equal to that of a single-processor CRAY Y-MP. The group is testing this approach on the Intel Delta as well. Working with researchers at the Center for Discrete Mathematics and Theoretical Computer Science, another NSF Science and Technology Center, CRPC researchers are restructuring the parallel TSP code to incorporate the lessons learned in solving a world-record 3,038-city problem on a network of heterogeneous workstations. These same collaborators are also building a mixed integer-linear programming code that exploits the work on TSP and other major advances in pure integer programming during the past ten years.
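To make the branch-and-bound structure concrete, here is a minimal sketch on a toy 0/1 knapsack problem (not the group's TSP code): at each node a relaxation supplies a bound, and branches whose bound cannot beat the incumbent are pruned. In the TSP work, cutting planes serve to tighten exactly this relaxation, so that little or no branching remains:

```python
# Branch-and-bound skeleton for a 0/1 knapsack. The fractional (LP)
# relaxation of each subproblem gives an upper bound; a subtree is
# pruned whenever its bound cannot exceed the best solution found.

def knapsack_bnb(values, weights, capacity):
    # Sort by value density so the greedy fractional fill is optimal
    # for the relaxation.
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    best = [0.0]

    def bound(i, cap, val):
        # LP relaxation of the remaining subproblem: fill greedily,
        # allowing one fractional item.
        for v, w in items[i:]:
            if w <= cap:
                cap -= w
                val += v
            else:
                return val + v * cap / w
        return val

    def branch(i, cap, val):
        if val > best[0]:
            best[0] = val                       # new incumbent
        if i == len(items) or bound(i, cap, val) <= best[0]:
            return                              # prune this subtree
        v, w = items[i]
        if w <= cap:
            branch(i + 1, cap - w, val + v)     # take item i
        branch(i + 1, cap, val)                 # skip item i

    branch(0, capacity, 0.0)
    return best[0]

best = knapsack_bnb([60, 100, 120], [10, 20, 30], 50)
```

The "weakening" strategy described above trades some of the relaxation's tightness for fragmentation: a shallower cutting-plane phase leaves more branching, and the resulting independent subtrees map naturally onto thousands of processors.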

Jorge More is interested in the development of algorithms and software for optimization problems and their performance on vector and distributed-memory architectures. He played the lead role in the development of MINPACK-1, a collection of high-quality optimization subroutines distributed worldwide to more than 500 sites. He is currently working on an expanded version of the MINPACK package, with a focus on large-scale optimization. More is on the editorial boards of Numerische Mathematik and the SIAM Journal on Optimization, and has been on the editorial boards of the SIAM Journal on Numerical Analysis and the SIAM Journal on Scientific and Statistical Computing. He recently finished a book with Stephen Wright, Optimization Software Guide, published by SIAM.