One possible way to rectify the shortcoming of the aforementioned HT
model is to reduce the computational burden placed on the
network. This can be done by constraining each city (in the TSP case)
to be ``on'' only once, that is, by enforcing the constraint
$\sum_{i=1}^{N} v_{ai} = 1$ for every city $a$ explicitly, rather than
relying on an energy penalty term to encourage it, as is done in the
original HT algorithm. This idea was first used and analyzed for the
TSP case, among others, by Peterson and Söderberg
[7]. Essentially, their approach is to replace the $N$ binary neurons
representing the possible tour positions of a given city
by a single $N$-dimensional
Potts neuron, using what is referred to as the mean field
annealing (MFA) approach. This generally yields the same final
network equations as the ``neuronal circuit'' approach of [33],
but the exposition is somewhat clearer, as it is laid out in a
statistical mechanics framework.
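To make the replacement concrete (the component notation $v_{ai}$ below is an assumption on our part, not necessarily that of [7]): the $N$ binary neurons that the HT model devotes to a given city $a$, one per tour position, are collapsed into a single Potts neuron whose state is the $N$-component vector
\[
\mathbf{v}_a = (v_{a1}, \ldots, v_{aN}), \qquad v_{ai} \ge 0, \qquad \sum_{i=1}^{N} v_{ai} = 1 .
\]
In the zero-temperature limit such a neuron can only settle on one of the $N$ unit vectors, so configurations in which a city is ``on'' at two tour positions, or at none, are excluded from the outset rather than merely penalized.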
As stated in the mapping subsection above, one needs to
enumerate all admissible configurations of the problem at hand. For
example, when mapping the TSP onto the HT model, all of the
$2^{N^2}$ configurations are admissible, whereas if each city is
restricted to being visited only once, then only $N^N$ vertices are
admissible. After taking a mean field approximation, saddle-point
equations are derived, the solutions of which pick
out the dominant states of the network at the current temperature
$T$.
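In the usual Potts mean field formulation (written here with local fields $u_{ai} = -\partial E / \partial v_{ai}$, again a notational assumption on our part), these saddle-point equations take the softmax form
\[
v_{ai} = \frac{e^{\,u_{ai}/T}}{\sum_{j=1}^{N} e^{\,u_{aj}/T}} ,
\]
which satisfies $\sum_{i} v_{ai} = 1$ automatically; the equations are iterated to a fixed point at each temperature while $T$ is gradually lowered.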
This approach was not only an improvement over the Hopfield and Tank model, but it also yielded more feasible (and valid) solutions. For a brief overview of the generic ``black box'' MFA algorithm using Potts neurons, see Figure 12.
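As a complement to the figure, the sketch below shows one plausible realization of the generic MFA loop with Potts neurons in Python; the particular TSP local field and the parameter values are illustrative choices of our own and are not taken from [7].
\begin{verbatim}
import numpy as np

def mfa_tsp(dist, t=10.0, t_factor=0.95, t_min=0.01, sweeps=30, gamma=2.0, seed=0):
    """Mean field annealing with Potts neurons for the TSP (illustrative sketch).

    dist    : (N, N) symmetric distance matrix.
    v[a, i] : mean field value for "city a occupies tour position i";
              each row is one Potts neuron, normalized to sum to 1.
    gamma   : weight of the soft penalty discouraging two cities from
              sharing a tour position (the per-city constraint is exact).
    """
    rng = np.random.default_rng(seed)
    n = len(dist)
    v = np.full((n, n), 1.0 / n) + 0.01 * rng.random((n, n))  # near-uniform start
    v /= v.sum(axis=1, keepdims=True)
    while t > t_min:
        for _ in range(sweeps):
            for a in range(n):                         # serial (asynchronous) updates
                nbr = np.roll(v, 1, axis=1) + np.roll(v, -1, axis=1)
                field = -dist[a] @ nbr                     # tour-length contribution
                field -= gamma * (v.sum(axis=0) - v[a])    # position-overlap penalty
                e = np.exp((field - field.max()) / t)      # numerically stable softmax
                v[a] = e / e.sum()                         # Potts normalization
        t *= t_factor                                  # lower the temperature
    return v, np.argmax(v, axis=1)   # soft assignment and a hardened tour
\end{verbatim}
The final argmax may occasionally assign two cities to the same position on hard instances, in which case some simple repair of the resulting tour would still be needed.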
In addition to the TSP implementation, Peterson and colleagues extended the technique and applied it to high school scheduling [8,9]. We have also used the same model for class scheduling [10], unfortunately with not very impressive results.