Optimization tasks are common and often very difficult. In most areas of science and engineering, from free-energy minimization in physics to profit maximization in economics, the need to optimize is ubiquitous. From these optimization tasks comes a class of problems termed ``combinatorial optimization,'' which is among the hardest of common computational problems: solution time can grow exponentially with the size of the problem. Examples arise in scheduling and circuit layout design, to name a few areas. In course scheduling, for instance, we attempt to find an assignment of rooms, time slots, students, and faculty to each class that satisfies restrictions (or constraints) such as requiring that no two classes meet in the same room at the same time. Similar scheduling problems arise in assigning tasks to machines in a manufacturing environment or experiments to spacecraft with constraints on power use. Fundamentally, these kinds of problems consist of finding those combinations or subsets of a discrete set of items that satisfy specified constraints; the satisfactory subsets are the possible solutions to the overall problem. In the course scheduling problem, for instance, the discrete set could consist of all possible ways to assign classes to rooms, time slots, and so on, and the task can then be viewed as selecting a subset of these possibilities that satisfies the constraints of the schedule. These problems are thus conceptually simple to state, but deriving an optimal solution can be extremely time consuming in practice. This is because the number of possible combinations to consider grows very rapidly with the number of items, leading to potentially lengthy solution times and severely limiting the feasible size of such problems.
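To make the combinatorial explosion concrete, the following sketch enumerates a toy course-scheduling instance by brute force. The classes, rooms, and slots are hypothetical illustrations, not data from this roadmap; the point is that the search space already has $|{\rm options}|^{|{\rm classes}|}$ candidates, a count that grows exponentially as items are added.

```python
from itertools import product

# Hypothetical toy instance: assign each class a (room, time slot) pair.
classes = ["math", "physics", "chemistry"]
rooms = ["R1", "R2"]
slots = ["9am", "10am", "11am"]

options = list(product(rooms, slots))  # 6 possible (room, slot) pairs per class

def feasible(assignment):
    """Constraint: no two classes may share the same room at the same time."""
    return len(set(assignment)) == len(assignment)

# Brute force: the search space has len(options) ** len(classes) points.
space = list(product(options, repeat=len(classes)))
solutions = [a for a in space if feasible(a)]

print(len(space))      # 6**3 = 216 candidate assignments
print(len(solutions))  # 6*5*4 = 120 feasible schedules
```

With only three classes the full space is already 216 candidates; adding a fourth class multiplies it by another factor of six, which is why exhaustive search quickly becomes infeasible.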
In light of the importance and difficulty of combinatorial search, much effort has gone into developing effective algorithms for finding good optima. Simulated annealing (SA), genetic algorithms (GA), and tabu search (TS) are three of the most popular methods, inspired by ideas from statistical mechanics, evolutionary biology, and artificial intelligence, respectively. All of these techniques rely in part on constructing improved solutions by applying a local operator to a population of candidate solutions; good solutions result from the accumulation of many beneficial local modifications applied one after another. Less widely used approaches to these problems include neural networks (NN) and Lagrangean relaxation (LR), inspired by ideas from neurobiology and integer (or mixed-integer) programming, respectively. Among the best-known conventional methods are Lin-Kernighan (LK) and linear programming (LP), both of which are used very effectively in this domain. As a side note, since most of the published work we are aware of on these methods involves the traveling salesman problem (TSP), we will refer to this problem whenever needed throughout this roadmap.
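The pattern of repeatedly applying a local operator can be sketched with simulated annealing on a small TSP instance. This is a minimal illustration under assumed choices (random city coordinates, a 2-opt segment-reversal operator, and a geometric cooling schedule), not the specific algorithms surveyed in this roadmap.

```python
import math
import random

# Hypothetical instance: 10 cities at random coordinates in the unit square.
random.seed(0)
cities = [(random.random(), random.random()) for _ in range(10)]

def tour_length(tour):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def neighbor(tour):
    """Local operator: reverse a random segment (a 2-opt move)."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

tour = list(range(len(cities)))
best = tour_length(tour)
T = 1.0                       # initial "temperature"
while T > 1e-3:
    cand = neighbor(tour)
    delta = tour_length(cand) - tour_length(tour)
    # Always accept improvements; accept uphill moves with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / T):
        tour = cand
    best = min(best, tour_length(tour))
    T *= 0.995                # geometric cooling schedule

print(round(best, 3))
```

Each iteration makes one small local modification; the temperature controls how often worsening moves are accepted, so the search can escape local optima early on and settles into greedy improvement as it cools.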
Our focus in this roadmap is on discrete optimization and its associated optimization methods. Figure 1 shows an overview of the entire map.
Figure 1: The discrete optimization roadmap.