parametric over the neighbourhoods they should apply. For example, since a
neighbourhood should only be applied once a previous solution is available, we
can define a strategy \mzninline{basic_lns} that applies a neighbourhood only if
the current status is not \mzninline{START}:

\begin{mzn}
predicate basic_lns(var bool: nbh) = (status()!=START -> nbh);
\end{mzn}

In order to use this predicate with the \mzninline{on_restart} annotation, we
cannot simply pass \mzninline{basic_lns(uniform_neighbourhood(x, 0.2))}. Calling
\mzninline{uniform_neighbourhood} like that would result in a single evaluation
of the neighbourhood during flattening, rather than a fresh evaluation upon
every restart.
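
One way around this, sketched here rather than taken verbatim from the original
library, is to wrap the combined expression in a new nullary predicate and hand
that predicate's name to \mzninline{on_restart}; the wrapper name
\mzninline{my_lns}, the array \mzninline{x}, and the objective \mzninline{obj}
are placeholders.

\begin{mzn}
% Hypothetical wrapper predicate: re-evaluated upon every restart.
predicate my_lns() = basic_lns(uniform_neighbourhood(x, 0.2));

solve :: on_restart("my_lns") minimize obj;
\end{mzn}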

With \mzninline{restart_without_objective}, the restart predicate is now
responsible for constraining the objective function. Note that a simple
hill-climbing (for minimisation) can still be defined easily in this context as:

\begin{mzn}
predicate hill_climbing() = status() != START -> _objective < sol(_objective);
\end{mzn}
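
For illustration only (this solve item is a sketch and not part of the original
model), such a predicate would be combined with a restart annotation on the
solve item; \mzninline{restart_constant(100)} is an arbitrary choice and
\mzninline{obj} stands for the declared objective.

\begin{mzn}
% Sketch: restart regularly and let hill_climbing bound the objective,
% instead of the solver's built-in branch-and-bound.
solve :: restart_constant(100)
      :: restart_without_objective
      :: on_restart("hill_climbing")
  minimize obj;
\end{mzn}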

The \mzninline{hill_climbing} predicate takes advantage of the fact that the
declared objective function is available through the built-in variable
\mzninline{_objective}. A more interesting example is a simulated annealing
strategy, where a temperature parameter controls by how much each new solution
needs to improve until we are just looking for any improvements, thereby
reaching the optimal solution quicker. This strategy is also easy to express
using our restart-based modelling:

\begin{mzn}
predicate simulated_annealing(float: init_temp, float: cooling_rate) =
  let {
    var float: temp;
  } in if status() = START then
    temp = init_temp
  else
    temp = last_val(temp) * (1 - cooling_rate) % cool down
    /\ _objective < sol(_objective) - ceil(log(uniform(0.0, 1.0)) * temp)
  endif;
\end{mzn}

Using the same methods it is also possible to describe optimisation strategies
with multiple objectives. An example of such a strategy is lexicographic search.
In lexicographic search the objectives are ranked by importance: a new solution
must either improve the first objective, have the same value for the first
objective and improve the second objective, or have the same value for the first
two objectives and improve the third objective, and so on. We can model this
strategy using restarts as follows:

\begin{mzn}
predicate lex_minimize(array[int] of var int: o) =
  let {
    var index_set(o): stage;
    array[index_set(o)] of var int: best;
  } in if status() = START then
    stage = min(index_set(o))
  else
    if status() = UNSAT then
      if last_val(stage) < max(index_set(o)) then
        stage = last_val(stage) + 1
      else
        complete() % we are finished
      endif
    else
      stage = last_val(stage)
      /\ best[stage] = sol(_objective)
    endif
    /\ forall(i in min(index_set(o))..stage-1) (
      o[i] = last_val(best[i])
    )
    /\ if status() = SAT then
      o[stage] < sol(_objective)
    endif
    /\ _objective = o[stage]
  endif;
\end{mzn}

The lexicographic objective changes the objective at each stage in the
evaluation. Initially the stage is 1. Otherwise, if we have an unsatisfiable
result, we move on to the next stage, or complete the search if we were already
in the final stage. If we instead found a solution, we stay in the same stage
and record the best value found for its objective. The objectives of all earlier
stages are fixed to their recorded best values, and any new solution has to
improve the objective of the current stage.

For some problems, however, there may be no clear ranking of the objectives of
the problem. In these cases we might instead look for a number of diverse
solutions and allow the user to pick the most acceptable options. The following
fragment shows a \gls{meta-search} for the Pareto optimality of a pair of
objectives:

\begin{mzn}
predicate pareto_optimal(var int: obj1, var int: obj2) =
  let {
    int: ms = 1000; % max solutions
    var 0..ms: nsol; % number of solutions
    set of int: SOL = 1..ms;
    array[SOL] of var lb(obj1)..ub(obj1): s1;
    array[SOL] of var lb(obj2)..ub(obj2): s2;
  } in if status() = START then
    nsol = 0
  elseif status() = UNSAT then
    complete() % we are finished!
  else
    nsol = sol(nsol) + 1 /\
    s1[nsol] = sol(obj1) /\
    s2[nsol] = sol(obj2)
  endif
  /\ forall(i in 1..nsol) (
    obj1 < last_val(s1[i]) \/ obj2 < last_val(s2[i])
  );
\end{mzn}

In this implementation we keep track of the number of solutions found so far
using \mzninline{nsol}. There is a maximum number we can handle, given by
\mzninline{ms}. Whenever a solution has been found, we record its objective
values, and we require any new solution not to be dominated by a previously
recorded one.

For example, consider the model from \cref{lst:6-basic-complete} again. The
second block of code (\lrefrange{line:6:x1:start}{line:6:x1:end}) represents the
decomposition of the expression

\begin{mzn}
(status() != START /\ uniform(0.0,1.0) > 0.2) -> x[1] = sol(x[1])
\end{mzn}

which is the result of merging the implication from the \mzninline{basic_lns}
predicate with the \mzninline{if} expression from the
\mzninline{uniform_neighbourhood} predicate. In the decomposition, a Boolean
control variable is constrained to be true if-and-only-if the random number is
greater than \(0.2\), and \mzninline{x[1]} is only fixed to its value in the
previous solution when both this variable and the status check hold. Finally,
the half-reified constraint in \lref{line:6:x1:end} implements

\begin{mzn}
b3 -> x[1] = sol(x[1])
\end{mzn}

We have omitted the similar code generated for \mzninline{x[2]} to
\mzninline{x[n]}. Note that the \flatzinc\ shown here has been simplified for
readability; during evaluation the interpreter additionally applies propagation,
\gls{cse} and other simplifications, whose effects have to be undone when a
constraint is later removed.

\begin{example}\label{ex:6-incremental}
Consider the following \minizinc\ fragment:

\begin{mzn}
constraint x < 10;
constraint y < x;
\end{mzn}

After evaluating the first constraint, the domain of \mzninline{x} is changed to
be less than 10. Evaluating the second constraint causes the domain of
\mzninline{y} to become less than 9. If the first constraint is later removed,
simply deleting it is not enough: the domain changes it caused have to be undone
as well. A well-known technique for undoing such changes is trailing.
\end{example}

\begin{example}\label{ex:6-trail}
Let us look again at the resulting \nanozinc\ code from \cref{ex:4-absreif}:

% \mznfile{assets/mzn/6_abs_reif_result.mzn}
\begin{nzn}
c @$\mapsto$@ true @$\sep$@ []
x @$\mapsto$@ mkvar(-10..10) @$\sep$@ []
y @$\mapsto$@ mkvar(-10..10) @$\sep$@ []
true @$\mapsto$@ true @$\sep$@ []
\end{nzn}

Assume that we added a choice point before posting the constraint
\mzninline{c}. Then the trail stores the \emph{inverse} of all modifications
made to the \nanozinc\ program since that choice point (where \(\mapsfrom\)
denotes restoring an identifier, and \(\lhd\) \texttt{+}/\texttt{-}
respectively denote attaching and detaching constraints):

% \mznfile{assets/mzn/6_abs_reif_trail.mzn}
\begin{nzn}
% Posted c
true @$\lhd$@ -[c]
% Propagated c = true
c @$\mapsfrom$@ mkvar(0,1) @$\sep$@ []
true @$\lhd$@ +[c]
% Simplified bool_or(b1, true) = true
b2 @$\mapsfrom$@ bool_or(b1, c) @$\sep$@ []
true @$\lhd$@ +[b2]
% b1 became unused...
b1 @$\mapsfrom$@ int_gt(t, y) @$\sep$@ []
% causing t, then b0 and finally z to become unused
t @$\mapsfrom$@ z @$\sep$@ [b0]
b0 @$\mapsfrom$@ int_abs(x, z) @$\sep$@ []
z @$\mapsfrom$@ mkvar(-infinity,infinity) @$\sep$@ []
\end{nzn}

To reconstruct the \nanozinc\ program at the choice point, we simply apply
the changes recorded in the trail, in reverse order.
\end{example}

Our architecture can therefore support solvers with different levels of an
incremental interface.

\section{Experiments}\label{sec:6-experiments}

We have created a prototype implementation of the architecture presented in the
preceding sections. It consists of a compiler from \minizinc\ to \microzinc{}, and
an incremental \microzinc\ interpreter producing \nanozinc{}. The system supports
a significant subset of the full \minizinc\ language; notable features that are
missing are support for set and float variables, option types, and compilation
of model output expressions and annotations. We will release our implementation.

To demonstrate the advantages that incremental processing offers, we present a
runtime evaluation of two meta-heuristics implemented using our prototype
interpreter. For both meta-heuristics, we evaluate the performance of fully
re-evaluating and solving the instance from scratch, compared to the fully
incremental evaluation and solving. The solving in both tests is performed by
the \gls{gecode} \gls{solver}, version 6.1.2, connected using the fully
incremental API\@.

\paragraph{\glsentrytext{gbac}} %
The \glsaccesslong{gbac} problem \autocite{chiarandini-2012-gbac} consists of
scheduling the courses in a curriculum subject to load limits on the number of
courses for each period, prerequisites for courses, and preferences of teaching
periods by teaching staff. It has been shown~\autocite{dekker-2018-mzn-lns} that
Large Neighbourhood Search (\gls{lns}) is a useful meta-heuristic for quickly
finding high quality solutions to this problem. In \gls{lns}, once an initial
(sub-optimal) solution is found, constraints are added to the problem that
restrict the search space to a \textit{neighbourhood} of the previous solution.
After this neighbourhood has been explored, the constraints are removed, and
constraints for a different neighbourhood are added. This is repeated until a
sufficiently high solution quality has been reached.

We can model a neighbourhood in \minizinc\ as a predicate that, given the values
of the variables in the previous solution, posts constraints to restrict the
search. The following predicate defines a suitable neighbourhood for the
\gls{gbac} problem:

\begin{mzn}
predicate random_allocation(array[int] of int: sol) =
  forall(i in courses) (
    (uniform(0,99) < 80) -> (period_of[i] == sol[i])
  );

predicate free_period() =
  let {
    int: period = uniform(periods)
  } in forall(i in courses where sol(period_of[i]) != period) (
    period_of[i] = sol(period_of[i])
  );
\end{mzn}

When this predicate is called with a previous solution \mzninline{sol}, then
every \mzninline{period_of} variable has an \(80\%\) chance to be fixed to its
value in the previous solution. With the remaining \(20\%\), the variable is
unconstrained and will be part of the search for a better solution.

In a non-incremental architecture, we would re-flatten the original model plus
the neighbourhood constraint for each iteration of the \gls{lns}. In the
incremental \nanozinc\ architecture, we can easily express \gls{lns} as a
repeated addition and retraction of the neighbourhood constraints. We
implemented both approaches using the \nanozinc\ prototype, with the results
shown in \Cref{fig:6-gbac}. The incremental \nanozinc\ translation shows a 12x
speedup compared to re-compiling the model from scratch in each iteration. For
this particular problem, incrementally instructing the target solver
(\gls{gecode}) does not lead to a significant reduction in runtime.

\begin{figure}
  \centering
  % Figure body and caption (fig:6-gbac) omitted here.
\end{figure}

In our second experiment, the first of two objectives is more important than the
second. The problem therefore has a lexicographical objective: a solution is
better if it requires a strictly shorter exposure time, or the same exposure
time but a lower number of ``shots''.

\minizinc\ \glspl{solver} do not support lexicographical objectives directly,
but we can instead repeatedly solve a model instance and add a constraint to
ensure that the lexicographical objective improves. When the solver proves that
no better solution can be found, the last solution is known to be optimal. Given
two variables \mzninline{exposure} and \mzninline{shots}, once we have found a
solution with \mzninline{exposure=e} and \mzninline{shots=s}, we can add the
constraint

\begin{mzn}
constraint exposure < e \/ (exposure = e /\ shots < s)
\end{mzn}

expressing the lexicographic ordering, and continue the search. Since each
added lexicographic constraint is strictly stronger than the previous one, we
never have to retract previous constraints.

\begin{figure}
  \centering
  % Figure body and caption omitted here.
\end{figure}

The aim of these experiments is to show that \gls{lns} strategies expressed as
\minizinc\ specifications can (a) be effective and (b) incur only a small
overhead compared to a dedicated implementation of the neighbourhoods.

To measure the overhead, we implemented our new approach in
\gls{gecode}~\autocite{gecode-2021-gecode}. The resulting solver (\gecodeMzn{} in
the tables below) has been instrumented to also output the domains of all model
variables after propagating the new special constraints. We implemented another
extension to \gls{gecode} (\gecodeReplay) that simply reads the stream of variable
domains for each restart, essentially replaying the \gls{lns} of \gecodeMzn\
without incurring any overhead for evaluating the neighbourhoods or handling the
additional variables and constraints. Note that this is a conservative estimate
of the overhead: \gecodeReplay\ has to perform \emph{less} work than any real
\gls{lns} implementation.

In addition, we also present benchmark results for the standard release of
\gls{gecode} 6.0 without \gls{lns} (\gecodeStd); as well as \chuffedStd{}, the
development version of Chuffed; and \chuffedMzn{}, Chuffed performing \gls{lns}
with \flatzinc\ neighbourhoods. These experiments illustrate that the \gls{lns}
implementations indeed perform well compared to the standard
solvers.\footnote{Our implementations are available at
\texttt{\justify{}https://github.com/Dekker1/\{libminizinc,gecode,chuffed\}} on
branches containing the keyword \texttt{on\_restart}.} All experiments were run
on a single core of an Intel Core i5 CPU @ 3.4 GHz with 4 cores and 16 GB RAM
running macOS High Sierra. \gls{lns} benchmarks are repeated with 10 different
random seeds and the average is shown. The overall timeout for each run is 120
seconds.

We ran experiments for three models from the MiniZinc
challenge~\autocite{stuckey-2010-challenge, stuckey-2014-challenge}
(\texttt{gbac}, \texttt{steelmillslab}, and \texttt{rcpsp-wet}). Among the
measures we report is a percentage (\%), which is shown as the superscript on
\(\minobj\) when running \gls{lns}.
%and the average number of nodes per one second (\nodesec).
The underlying search strategy used is the fixed search strategy defined in the
model. For each model we use a round-robin evaluation (\cref{lst:6-round-robin})
of two neighbourhoods: a neighbourhood that destroys \(20\%\) of the main decision
variables (\cref{lst:6-lns-minisearch-pred}) and a structured neighbourhood for
the model (described below), together with a restart strategy.

% Listing lst:6-free-period (a neighbourhood that frees all
% courses in a period) omitted here.

The \gls{gbac} problem comprises courses having a specified number of credits
and lasting a certain number of periods, load limits of courses for each period,
prerequisites for courses, and preferences of teaching periods for professors. A
detailed description of the problem is given
in~\autocite{chiarandini-2012-gbac}. The main decisions are to assign courses to
periods, which is done via the variables \mzninline{period_of} in the model.
\cref{lst:6-free-period} shows the neighbourhood chosen, which randomly picks
one period and frees all courses that are assigned to it.

\begin{table}
  \centering
  % Table body and caption (tab:6-gbac) omitted here.
\end{table}

The results for \texttt{gbac} in \cref{tab:6-gbac} show that the overhead
introduced by \gecodeMzn\ w.r.t.\ \gecodeReplay{} is quite low, and both their
results are much better than the baseline \gecodeStd{}. Since learning is not
very effective for \gls{gbac}, the performance of \chuffedStd\ is inferior to
\gls{gecode}. However, \gls{lns} again significantly improves over standard
Chuffed.

\subsubsection{\texttt{steelmillslab}}

In the steel mill slab design problem, the aim is to assign orders to slabs
so that all orders are fulfilled while minimising the wastage. The steel mill
only produces slabs of certain sizes, and orders have both a size and a colour.
We have to assign orders to slabs, with at most two different colours on each
slab. The model uses the variables \mzninline{assign} for deciding which order
is assigned to which slab. \cref{lst:6-free-bin} shows a structured
neighbourhood that randomly selects a slab and frees the orders assigned to it
in the incumbent solution. These orders can then be freely reassigned to any
other slab.

\begin{table}
  \centering
  % Table body and caption (tab:6-steelmillslab) omitted here.
\end{table}

As \cref{tab:6-steelmillslab} shows, \gecodeMzn\ is again slightly slower than
\gecodeReplay\ (the integral is slightly larger). While \chuffedStd\
significantly outperforms \gecodeStd\ on this problem, once we use \gls{lns},
the learning in \chuffedMzn\ is not advantageous compared to \gecodeMzn\ or
\gecodeReplay{}. Still, \chuffedMzn\ outperforms \chuffedStd\ by always finding
an optimal solution.

% RCPSP/wet
\subsubsection{\texttt{rcpsp-wet}}

The structured neighbourhood for this model randomly selects a time interval
and frees all tasks starting in that time interval, which allows a reshuffling
of these tasks.

\begin{table}[b]
  \centering
  \input{assets/table/6_rcpsp-wet}
  \caption{\label{tab:6-rcpsp-wet}\texttt{rcpsp-wet} benchmarks.}
\end{table}

\cref{tab:6-rcpsp-wet} shows that \gecodeReplay\ and \gecodeMzn\ perform almost
identically, and substantially better than baseline \gecodeStd\ for these
instances. The baseline learning solver, \chuffedStd{}, is the best overall on
the easy examples, but \gls{lns} makes it much more robust. The poor performance
of \chuffedMzn\ on the last instance is due to the fixed search, which limits
the usefulness of no-good learning.

\subsubsection{Summary}
The results show that \gls{lns} outperforms the baseline solvers, except for
\chuffedStd\ on the easiest \texttt{rcpsp-wet} instances. However, the main
result from these experiments is that the overhead introduced by our \flatzinc\
interface, when compared to an optimal \gls{lns} implementation, is relatively
small. We have additionally calculated the rate of search nodes explored per
second and, across all experiments, \gecodeMzn\ achieves around 3\% fewer nodes
per second than \gecodeReplay{}. This overhead is caused by propagating the
additional constraints in \gecodeMzn{}. Overall, the experiments demonstrate
that the compilation approach is an effective and efficient way of adding
\gls{lns} to a modelling language with minimal changes to the solver.