Our overall results are shown in Tables 3 and 4. These tables show the percentage of classes that could be scheduled in accordance with the hard constraints. In each case (apart from the expert system, which is purely deterministic), we performed 10 runs with the same parameters but different random seeds, and the tables show the average of the 10 runs, as well as the best and worst results.
As expected, each of the methods did much better for the third (summer) semester data, which is much sparser. Our results also confirm what we expected for the different cooling schedules for simulated annealing, in that adaptive cooling performs better than geometric cooling, and reheating improves the result even further.
When a random initial configuration is used, simulated annealing performs very poorly, even worse than the expert system. However, there is a dramatic improvement in performance when a preprocessor is used to provide a good starting point for the annealing. In that case, using the best cooling schedule of adaptive cooling with reheating as a function of cost, we are able to find a feasible class schedule every time. None of the other cooling schedules were able to consistently produce feasible schedules.
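The cooling schedules compared above can be sketched in a simple annealer. The sketch below is illustrative only: the specific adaptive rule (cool faster once recent accepted costs stop fluctuating) and the reheating rule (raise the temperature in proportion to the current best cost after a long run without improvement) are plausible stand-ins, not the implementation used in our experiments.

```python
import math
import random
import statistics

def anneal(cost, neighbour, x0, t0=10.0, steps=5000,
           schedule="geometric", alpha=0.95, reheat=False):
    """Minimise `cost` by simulated annealing (illustrative sketch).

    schedule: "geometric" multiplies T by alpha every step;
              "adaptive" cools quickly only while recent accepted
              costs are stable (assumed rule, for illustration).
    reheat:   if True, raise T back towards a value proportional to
              the current best cost when progress stalls
              ("reheating as a function of cost", assumed form).
    """
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    recent = []          # recent accepted costs, for the adaptive rule
    since_improve = 0
    for _ in range(steps):
        y = neighbour(x)
        cy = cost(y)
        # Metropolis acceptance: always take improvements, sometimes worse moves
        if cy < c or random.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            recent.append(c)
        if c < best_c:
            best_x, best_c = x, c
            since_improve = 0
        else:
            since_improve += 1
        # cooling step
        if schedule == "geometric":
            t *= alpha
        else:  # adaptive: cool slowly while accepted costs still fluctuate
            sigma = statistics.pstdev(recent[-20:]) if len(recent) >= 2 else 1.0
            t *= alpha if sigma < 1e-3 else 0.999
        # reheating: hotter restart when stuck, scaled by remaining cost
        if reheat and since_improve > 200:
            t = max(t, 0.5 * (1.0 + best_c))
            since_improve = 0
        t = max(t, 1e-9)
    return best_x, best_c
```

For example, minimising a one-dimensional quadratic with a Gaussian neighbourhood move converges to near the optimum under any of the schedules; the practical differences between them only show up on hard, constrained landscapes like timetabling.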
Student preferences are included only as medium constraints in our implementation, meaning that they do not have to be satisfied for a valid solution, but they are given high priority. For the valid schedules we produced, approximately of the student preferences were satisfied for the first two semesters, and for the third semester. This is a good result, particularly since other automated approaches do not deal with student preferences at all, but we aim to improve upon it.