Table~\ref{table01} lists all major components of the data, and Tables~\ref{table02} and~\ref{table03} show the rates of sparseness of the three-semester data, which depend strongly on the number of time slots and rooms used. The level of sparseness increases with the overall size of the data, in particular the number of rooms, time slots, and classes. Moreover, after scheduling all the classes a number of space-time slots remain spare; hence, the sparseness of the problem is defined as the ratio of spare space-time slots to the total number of space-time slots. As can be seen, the degree of sparseness in Table~\ref{table03} is lower than that in Table~\ref{table02}, and the reason is the presence of the students' preferences.
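A plausible formalisation of this ratio (the symbols $R$, $T$, and $C$ below are ours and may not match the original notation) is
\[
  \mathrm{sparseness} \;=\; \frac{R\,T - C}{R\,T},
\]
where $R$ is the number of rooms, $T$ the number of time slots, and $C$ the number of classes, so that $R\,T - C$ counts the space-time slots left spare once all classes have been placed.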
Our overall results are shown in Tables~\ref{table1}, \ref{table2}, \ref{table3}, \ref{table31}, \ref{table4}, and~\ref{table41}. Table~\ref{table1} gives the percentage of classes scheduled by the preprocessor (the expert system) for the three-semester data set, together with the highest and lowest percentages of scheduled classes. We can also see that the proportion of scheduled classes increases as the size of the input (the number of classes) decreases, which is not surprising since the overall number of rooms and time slots was kept fixed for the three semesters. The system therefore does quite well on small sets of classes, even when both class and student constraints are taken into consideration. Compared with Table~\ref{table1}, Table~\ref{table2} shows improved results, mainly because student constraints were not taken into consideration and the system was restricted to class constraints only.
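As a rough illustration of what such a preprocessing pass does (the greedy rule and all names below are assumptions for exposition only; the actual expert system applies a richer set of rules), the percentage output can be thought of as:
\begin{verbatim}
def preprocess(classes, rooms, time_slots, feasible):
    # Greedily place each class in the first feasible (room, time slot)
    # pair and report the percentage of classes scheduled.
    assignment = {}                    # class -> (room, time slot)
    occupied = set()                   # (room, time slot) pairs in use
    for c in classes:
        for r in rooms:
            for t in time_slots:
                if (r, t) not in occupied and feasible(c, r, t, assignment):
                    assignment[c] = (r, t)
                    occupied.add((r, t))
                    break
            if c in assignment:
                break
    pct_scheduled = 100.0 * len(assignment) / len(classes)
    return assignment, pct_scheduled
\end{verbatim}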
In the case of simulated annealing, Table~\ref{table3} shows the output, as percentages averaged over 10 runs for the three semesters, without a preprocessor. Clearly, SA with the geometric annealing schedule gives rather poor results, lower in quality even than those of the expert system in Table~\ref{table1}. The adaptive and cost-based schedules did not improve the results by much and also remained below those of Table~\ref{table1}. Nevertheless, the cost-based reheating schedule gave a better overall average result than the geometric and adaptive schedules for all three semesters.
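For readers unfamiliar with the three schedules, the sketch below illustrates one common form of each temperature-update rule; the parameter values and the stagnation test are illustrative assumptions, not the settings used in our experiments.
\begin{verbatim}
import math

def geometric_step(T, alpha=0.95):
    # Fixed multiplicative cooling: T_{k+1} = alpha * T_k.
    return alpha * T

def adaptive_step(T, cost_std, lam=0.1):
    # Adaptive cooling: the temperature drop is limited by the spread of
    # costs observed at the current temperature, so cooling slows down
    # where the cost landscape is rough.
    if cost_std <= 0.0:
        return geometric_step(T)
    return T * math.exp(-lam * T / cost_std)

def reheating_step(T, stagnant, best_cost, k=0.3, alpha=0.95):
    # Cost-based reheating: cool geometrically, but when the search has
    # stagnated (no improvement for some number of moves), raise the
    # temperature to a value proportional to the best cost seen so far.
    if stagnant:
        return k * best_cost
    return alpha * T
\end{verbatim}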
In contrast to the numbers in Table~\ref{table3}, Table~\ref{table31} shows higher percentages for the three annealing schedules, mainly because students' preferences/constraints were excluded and only class constraints were dealt with.
Table~\ref{table4} shows markedly improved, high-quality averaged results for the three schedules. These excellent results can only be attributed to the use of preprocessed input (the output of the rule-based preprocessor). The figures are again averaged over 10 runs of annealing, and for the cost-based schedule all runs on the third-semester set yielded a perfect schedule, that is, all classes of the input set were scheduled while satisfying the given class and student constraints. However, as Table~\ref{table01} shows, the number of classes in the third semester is quite small relative to the first two semesters, so achieving a perfect schedule for it was not too difficult.
Furthermore, we obtained even better results for the three semesters when the class constraints were separated from the students' preferences and the class constraints were dealt with first. As shown in Table~\ref{table41}, we achieved perfect schedules for all three semesters using the cost-based reheating schedule, and also a perfect schedule for the third semester using the adaptive schedule. Student data and preferences are handled in a second stage, with respect to the class schedule obtained in the first stage.
Clearly, this tells us that preprocessing was not only helpful but essential when applying methods such as simulated annealing to complex timetabling problems. Furthermore, these high-quality averaged results state in unambiguous terms that a multi-phase approach, such as the one explained above, gives much better results on timetabling problems than a single-phase approach.