TS, originally due to Glover [22,23], is a metaheuristic [12] designed to find a global optimum of a combinatorial optimization problem, and, like SA, it is motivated by the observation that not all locally optimal solutions need be good solutions. It is therefore desirable to modify a pure local optimization algorithm by providing some mechanism that helps the search escape local optima and continue further. One such mechanism would simply be to perform repeated runs of a local optimization algorithm, using a randomized starting heuristic to provide different starting solutions. A sketch of the basic ideas of the method follows.
An objective function $f$ has to be minimized on a set $X$ of feasible solutions. A neighborhood $N(s)$ is defined for each solution $s$ in $X$. The set $X$ and the definition of the neighborhoods induce a state-space graph $G$ (possibly infinite). TS is basically an iterative procedure which starts from an initial feasible solution and tries to reach an optimal solution by moving step by step in the state-space graph $G$. Each step consists of first generating a collection $V^*$ of solutions in the neighborhood $N(s)$ of the current solution $s$, and then moving to the best solution $s'$ in $V^*$, even if $f(s') > f(s)$. Let $s' = s \oplus m$ denote that $s'$ is obtained by applying modification $m$ to solution $s$. The solutions consecutively visited in the iterative process induce an oriented path in $G$. Finding the best solution in $V^*$ may sometimes be a nontrivial matter: it may be necessary to solve the optimization problem $\min\{f(s') : s' \in V^*\}$ by some heuristic procedure.
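As a concrete illustration, the following is a minimal Python sketch of a single TS step, under the simplifying assumption that the neighborhood can be enumerated as a list of candidate solutions; the names `f`, `neighborhood`, and `sample_size` are hypothetical stand-ins, and sampling $V^*$ at random is just one possible way of restricting the neighborhood.

```python
import random

def tabu_step(s, f, neighborhood, sample_size=20):
    """One TS step: draw a collection V* of neighbors of the current
    solution s and move to the best of them, even if it is worse than s.
    (Tabu restrictions are ignored here and added in the sketches below.)"""
    candidates = neighborhood(s)                          # N(s), assumed to be a list of solutions
    V_star = random.sample(candidates, min(sample_size, len(candidates)))
    return min(V_star, key=f)                             # best solution in V*, even if f increases
```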
A risk of cycling exists as soon as a solution worse than $s$ is accepted. In order to prevent cycling, modifications which would bring us back to a previously visited solution should be forbidden. However, it may sometimes be useful to come back to an already visited solution and continue the search in another direction from there. TS therefore keeps a list $T$ containing only the last $k$ modifications ($k$ can be fixed or variable). Whenever a modification $m$ is made for moving from $s$ to $s' = s \oplus m$, $m$ is introduced in $T$ and its reverse is considered tabu.
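As a rough sketch (not a prescribed implementation), the list $T$ can be kept as a fixed-length queue of the last $k$ modifications; the helper `reverse`, which returns the modification undoing its argument, is an assumed, problem-specific function, and the value of $k$ is illustrative.

```python
from collections import deque

k = 7                              # assumed tabu tenure: remember the last k modifications
T = deque(maxlen=k)                # oldest entries are dropped automatically

def record_move(m):
    """After moving from s to s' = s (+) m, remember m in T."""
    T.append(m)

def is_tabu(m, reverse):
    """A candidate modification m is tabu if it would undo one of the
    last k recorded modifications, i.e. if m equals reverse(t) for some t in T."""
    return any(reverse(t) == m for t in T)
```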
In regard to tabu moves, it is shown in [21] that moves to solutions which have not been visited may be tabu. For this reason it should be possible to cancel the tabu status of a move if it seems desirable to do so. Here is how it is done. Let $s$ be the current solution and $m$ a modification which we want to apply to $s$. A penalty value or ``penalization'' $p(s, m)$ and a threshold value $A(s, m)$ are computed; if $p(s, m) < A(s, m)$, then the tabu status of $m$ at $s$ is canceled. We can for example define $p(s, m) = f(s \oplus m)$ and $A(s, m) = f(s^*)$, where $s^*$ is the best solution encountered so far: that is, the tabu status of $m$ is canceled if the solution $s \oplus m$ is better than the previous best solution $s^*$. Reference [21] referred to the function $A$ as an aspiration function.
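The aspiration test from the example above can be written, again as a hedged sketch, with `apply_move(s, m)` standing in for the problem-specific operation $s \oplus m$ and `f_best` holding the value $f(s^*)$ of the best solution found so far.

```python
def tabu_status_cancelled(s, m, f, apply_move, f_best):
    """Aspiration criterion: cancel the tabu status of m at s when the
    penalty p(s, m) is below the threshold A(s, m).  With the choice
    p(s, m) = f(s (+) m) and A(s, m) = f(s*), a tabu move is allowed
    whenever it would produce a new best solution."""
    p = f(apply_move(s, m))        # p(s, m) = f(s (+) m)
    A = f_best                     # A(s, m) = f(s*)
    return p < A
```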
In addition, stopping rules have to be defined. If a lower bound $f_{\min}$ of the minimum value of $f$ is known, then the process may be interrupted when the value of the current solution is close enough to $f_{\min}$. Moreover, the procedure is terminated if no improvement of the best solution $s^*$ found so far has been made during a given number $n_{\max}$ of iterations. Figure 9 shows a general description of this technique.
Figure 9: A general schema for the tabu search technique.
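Since the figure itself is not reproduced here, the following Python sketch gives one possible reading of that general schema, combining the neighborhood move, the tabu list, the aspiration test, and the stopping rules described above; `moves`, `apply_move`, and `reverse` are assumed, problem-specific functions, and all parameter values are illustrative.

```python
from collections import deque

def tabu_search(s0, f, moves, apply_move, reverse,
                k=7, n_max=100, f_min=None, eps=1e-6):
    """Hedged sketch of the overall TS loop: keep the reverses of the last
    k moves tabu, allow a tabu move only when it beats the best solution
    found so far (aspiration), and stop after n_max iterations without
    improvement or when the best value is close enough to a lower bound."""
    s, best = s0, s0
    T = deque(maxlen=k)                          # tabu list of the last k modifications
    stall = 0
    while stall < n_max:
        candidates = []
        for m in moves(s):                       # candidate modifications defining N(s)
            s_new = apply_move(s, m)             # s' = s (+) m
            tabu = any(reverse(t) == m for t in T)
            if tabu and not f(s_new) < f(best):  # aspiration: only a new best cancels tabu
                continue
            candidates.append((f(s_new), s_new, m))
        if not candidates:
            break                                # every admissible move is tabu
        _, s, m = min(candidates, key=lambda c: c[0])   # best admissible neighbor
        T.append(m)                              # its reverse is now tabu
        if f(s) < f(best):
            best, stall = s, 0
        else:
            stall += 1
        if f_min is not None and f(best) - f_min <= eps:
            break                                # close enough to the known lower bound
    return best
```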
In summary, the general idea of TS is always to make the best move found, even if that move makes the current solution worse, i.e., is an uphill move. Thus, assuming that all neighbors of the current solution are examined at each step, TS alternates between looking for a local optimum and, once one has been found, identifying the best neighboring solution, which is then used as the starting point for a new local optimization phase. If one did just this, however, there would be a risk that the best move from this ``best neighboring'' solution would take us right back to the local optimum we just left, or to some other recently visited solution. This is where the tabu in TS comes in. Information about the most recently made moves is kept in one or more tabu lists, and this information is used to disqualify new moves that would undo the work of those recent moves. There are a number of other factors involved in the full-blown TS algorithm, such as aspiration-level conditions, diversification rules, and intensification rules. For more details on the algorithm see [21,22,23,43]. On the application side, TS has been applied to a number of rather difficult optimization problems, such as operational timetabling [44], yielding very satisfactory results.