|
|
|
|
|
|
|
|
other in order to give human decision makers an overview of the solution
|
|
|
|
|
space. Diversity can be achieved by repeatedly solving a problem
|
|
|
|
|
instance with different objectives.
|
|
|
|
|
|
|
|
|
|
\item Interactive Optimisation \autocite{belin-2014-interactive}. In some
|
|
|
|
|
scenarios it might be useful to allow a user to directly provide
|
|
|
|
|
feedback on solutions found by the solver. The feedback, in the form of
constraints, is added back into the problem, and a new solution is
|
|
|
|
|
generated. Users may also take back some earlier feedback and explore
|
|
|
|
|
different aspects of the problem to arrive at the best solution that
|
|
|
|
|
suits their needs.
|
|
|
|
|
\end{itemize}
|
|
|
|
|
|
|
|
|
|
All of these examples have in common that a problem instance is solved, new
|
|
|
|
|
|
|
|
|
methods to provide this support:
|
|
|
|
|
|
|
|
|
|
\begin{itemize}
|
|
|
|
|
\item We can add an interface for adding and removing constraints in the
\gls{constraint-modelling} infrastructure and avoid recompilation where
possible.
\item With a slight extension of existing solvers, we can compile
\gls{meta-search} algorithms into efficient solver-level specifications
based on solver restarts, avoiding recompilation altogether.
|
|
|
|
|
\end{itemize}
|
|
|
|
|
|
|
|
|
|
Although compiling \gls{meta-search} algorithms into solver-level
specifications might sound like the preferable option, it should be noted that
this option cannot always be used. It might not be possible to extend the
target \gls{solver} (or it might not be allowed in the case of some proprietary
\glspl{solver}). Furthermore, the modelling of \gls{meta-search} algorithms
using solver restarts is limited to strategies that can be expressed in terms
of \glspl{restart}.
|
|
|
|
|
|
|
|
|
|
The rest of the chapter is organised as follows. \Cref{sec:6-modelling}
|
|
|
|
|
discusses the declarative modelling of \gls{meta-search} algorithms using \cmls.
|
|
|
|
|
\Cref{sec:6-solver-extension} introduces the method to compile these
|
|
|
|
|
\gls{meta-search} specifications into efficient solver-level specifications that
|
|
|
|
|
only require a small extension of existing \glspl{solver}.
|
|
|
|
|
\Cref{sec:6-incremental-compilation} introduces the alternative method that
|
|
|
|
|
extends the \gls{constraint-modelling} infrastructure with an interface to add
|
|
|
|
|
and remove constraints from an existing model while avoiding recompilation.
|
|
|
|
|
\Cref{sec:6-experiments} reports on the experimental results of both approaches.
|
|
|
|
|
Finally, \Cref{sec:6-conclusion} presents the conclusions.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\section{Modelling of Meta-Search}
|
|
|
|
|
\label{sec:6-modelling}
|
|
|
|
|
|
|
|
|
|
This section introduces a \minizinc\ extension that enables modellers to define
|
|
|
|
|
\gls{meta-search} algorithms in \cmls. This extension is based on the construct
|
|
|
|
|
introduced in \minisearch\ \autocite{rendl-2015-minisearch}, as summarised
|
|
|
|
|
below.
|
|
|
|
|
|
|
|
|
|
\subsection{Meta-Search in \glsentrytext{minisearch}}
|
|
|
|
|
\label{sec:6-minisearch}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
% Most \gls{lns} literature discusses neighbourhoods in terms of ``destroying'' part of
|
|
|
|
|
% a solution that is later repaired. However, from a declarative modelling point
|
|
|
|
|
% of view, it is more natural to see neighbourhoods as adding new constraints and
|
|
|
|
|
% variables that need to be applied to the base model, \eg\ forcing variables to
|
|
|
|
|
% take the same value as in the previous solution.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\minisearch\ introduced a \minizinc\ extension that enables modellers to express
|
|
|
|
|
meta-searches inside a \minizinc\ model. A meta-search in \minisearch\ typically
|
|
|
|
|
solves a given \minizinc\ model, performs some calculations on the solution,
|
|
|
|
|
adds new constraints and then solves again.
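
For example, a basic \gls{lns} could be written in \minisearch\ along the
following lines (a sketch only: the combinator names \mzninline{repeat},
\mzninline{scope}, \mzninline{post}, \mzninline{commit} and
\mzninline{minimize_bab} are recalled from \autocite{rendl-2015-minisearch} and
may differ slightly from the actual library, and
\mzninline{uniformNeighbourhood} is the neighbourhood predicate discussed
below):

\mzninline{solve search repeat (i in 1..100) (scope(post(uniformNeighbourhood(x, 0.2)) /\ minimize_bab(cost) /\ commit()));}

Each iteration opens a new search scope, posts the neighbourhood constraints,
searches for an improved solution with branch-and-bound, and commits to it
before the next iteration.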
|
|
|
|
|
|
|
|
|
|
Most \gls{meta-search} definitions in \minisearch\ consist of two parts. The
|
|
|
|
|
first part is a declarative definition of any restriction to the search space
|
|
|
|
In \minisearch\ these definitions can make use of the function:
|
|
|
|
|
\mzninline{function int: sol(var int: x)}, which returns the value that variable
|
|
|
|
|
\mzninline{x} was assigned to in the previous solution (similar functions are
|
|
|
|
|
defined for Boolean, float and set variables). This allows the
|
|
|
|
|
|
|
|
|
|
\gls{neighbourhood} to be defined in terms of the previous solution. In
|
|
|
|
|
addition, a neighbourhood predicate will typically make use of the random number
|
|
|
|
|
generators available in the \minizinc\ standard library.
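
For illustration, a neighbourhood predicate in this style could be sketched as
follows (the signature of the actual predicate in
\cref{lst:6-lns-minisearch-pred} may differ slightly):

\mzninline{predicate uniformNeighbourhood(array[int] of var int: x, float: destr_rate) = forall (i in index_set(x)) (if uniform(0.0, 1.0) > destr_rate then x[i] = sol(x[i]) else true endif);}

Each variable is fixed to its value in the previous solution unless the random
draw is below \mzninline{destr_rate}, so roughly a \mzninline{destr_rate}
fraction of the variables is left free to be reassigned.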
|
|
|
|
|
\Cref{lst:6-lns-minisearch-pred} shows a simple random neighbourhood. For each
|
|
|
|
|
decision variable \mzninline{x[i]}, it draws a random number from a uniform
|
|
|
|
|
|
|
|
|
Although \minisearch\ enables the modeller to express \glspl{neighbourhood} in a
|
|
|
|
|
declarative way, the definition of the \gls{meta-search} algorithms is rather
|
|
|
|
|
unintuitive and difficult to debug, leading to unwieldy code for defining even
|
|
|
|
|
|
|
|
|
|
simple restarting strategies. Furthermore, the \minisearch\ implementation
|
|
|
|
|
requires either a close integration of the backend solver into the \minisearch\
|
|
|
|
|
system, or it drives the solver through the regular text-file based \flatzinc\
|
|
|
|
|
interface, leading to a significant communication overhead.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
To address these two issues, we propose to keep modelling neighbourhoods as
predicates, but to define \gls{meta-search} algorithms from an imperative
perspective: we define a small number of additional \minizinc\ built-in
annotations and functions that (a) allow us to express important aspects of the
meta-search in a more convenient way, and (b) enable a simple compilation
scheme that requires no additional communication with and only small, simple
extensions of the backend solver.
|
|
|
|
|
|
|
|
|
|
% The approach we follow here is therefore to \textbf{extend \flatzinc}, such that
|
|
|
|
|
% the definition of neighbourhoods can be communicated to the solver together with
|
|
|
|
|
|
|
|
|
% solver, while avoiding the costly communication and cold-starting of the
|
|
|
|
|
% black-box approach.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\subsection{Restart Annotation}
|
|
|
|
|
|
|
|
|
|
Instead of the complex \minisearch\ definitions, we propose to add support for
|
|
|
|
|
|
|
|
|
|
\glspl{meta-search} that are purely based on the notion of \glspl{restart}. A
|
|
|
|
|
\gls{restart} happens when a solver abandons its current search efforts, returns
|
|
|
|
|
to the root node of the search tree, and begins a new exploration. Many \gls{cp}
|
|
|
|
|
solvers already provide support for controlling their restarting behaviour, \eg\
|
|
|
|
|
they can periodically restart after a certain number of nodes, or restart for
|
|
|
|
|
every solution. Typically, solvers also support posting additional constraints
|
|
|
|
|
upon restarting (\eg\ Comet \autocite{michel-2005-comet}) that are only valid
|
|
|
|
|
for the particular \gls{restart} (\ie\ they are ``retracted'' for the next
|
|
|
|
|
\gls{restart}).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
In its simplest form, we can therefore implement \gls{lns} by specifying a
|
|
|
|
|
neighbourhood predicate, and annotating the \mzninline{solve} item to indicate
|
|
|
|
|
the predicate should be invoked upon each restart:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\mzninline{solve ::on_restart(my_neighbourhood) minimize cost;}
|
|
|
|
|
|
|
|
|
|
Note that \minizinc\ currently does not support passing functions or predicates
|
|
|
|
|
as arguments. Calling the predicate, as in
|
|
|
|
|
|
|
|
|
|
\mzninline{::on_restart(my_neighbourhood())}, would not have the correct
|
|
|
|
|
semantics, since the predicate needs to be called for \emph{each} restart. As a
|
|
|
|
|
workaround, we currently pass the name of the predicate to be called for each
|
|
|
|
|
restart as a string (see the definition of the new \mzninline{on_restart}
|
|
|
|
|
annotation in \cref{lst:6-restart-ann}).
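
A declaration along the following lines matches this workaround (the argument
name is ours; the definitive version is the one given in
\cref{lst:6-restart-ann}):

\mzninline{annotation on_restart(string: name);}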
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The second component of our \gls{lns} definition is the \emph{restarting strategy},
|
|
|
|
|
defining how much effort the solver should put into each neighbourhood (\ie\
|
|
|
|
|
restart), and when to stop the overall search.
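
For example, combining the neighbourhood annotation with one of the standard
\minizinc\ restart annotations, a strategy that restarts the solver after every
500 search nodes could be sketched as follows (assuming the target solver
interprets \mzninline{restart_constant}):

\mzninline{solve ::on_restart(my_neighbourhood) ::restart_constant(500) minimize cost;}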
|
|
|
|
|
|
|
|
|
|
|
|
|
|
behaviour}
|
|
|
|
|
\end{listing}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\subsection{Advanced Meta-Search}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Although the restart annotation by itself allows us to run the basic \gls{lns}
algorithm, more advanced \gls{meta-search} algorithms require more than just
reapplying the same \gls{neighbourhood} time after time.
|
|
|
|
|
It is, for example, often beneficial to use several \gls{neighbourhood}
|
|
|
|
|
definitions for a problem. Different \glspl{neighbourhood} may be able to
|
|
|
|
|
improve different aspects of a solution, at different phases of the search.
|
|
|
|
|
Adaptive \gls{lns} \autocite{ropke-2006-adaptive, pisinger-2007-heuristic},
|
|
|
|
|
which keeps track of the \glspl{neighbourhood} that led to improvements and
|
|
|
|
|
favours them for future iterations, is the prime example for this approach. A
|
|
|
|
|
simpler scheme may apply several \glspl{neighbourhood} in a round-robin fashion.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
In \minisearch{}, these adaptive or round-robin approaches can be implemented
|
|
|
|
|
using \emph{state variables}, which support destructive update (overwriting the
|
|
|
|
|
value they store). In this way, the \minisearch\ strategy can store values to be
|
|
|
|
|
used in later iterations. We use the \emph{solver state} instead, \ie\ normal
|
|
|
|
|
decision variables, and define two simple built-in functions to access the
|
|
|
|
|
solver state \emph{of the previous restart}. This approach is sufficient for
|
|
|
|
|
|
|
|
|
|
expressing many \gls{meta-search} algorithms, and its implementation is much
|
|
|
|
|
simpler.
|
|
|
|
|
|
|
|
|
|
\paragraph{State access and initialisation}
|
|
|
|
|
|
|
|
|
\mzninline{START} (there has been no restart yet); \mzninline{UNSAT} (the
|
|
|
|
|
restart failed); \mzninline{SAT} (the restart found a solution); \mzninline{OPT}
|
|
|
|
|
(the restart found and proved an optimal solution); and \mzninline{UNKNOWN} (the
|
|
|
|
|
|
|
|
|
|
restart did not fail or find a solution). Function \mzninline{last_val} (which,
|
|
|
|
|
like \mzninline{sol}, has versions for all basic variable types) allows
|
|
|
|
|
modellers to access the last value assigned to a variable (the value is
|
|
|
|
|
undefined if \mzninline{status()=START}).
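
As a small illustration of how these functions allow ordinary decision
variables to carry state from one restart to the next, the following
(hypothetical) predicate counts the number of restarts performed so far:

\mzninline{predicate count_restarts(var int: counter) = if status() = START then counter = 0 else counter = last_val(counter) + 1 endif;}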
|
|
|
|
all, calling \mzninline{uniformNeighbourhood} like that would result in a
|
|
|
|
|
call-by-value evaluation strategy. Furthermore, the \mzninline{on_restart}
|
|
|
|
|
annotation only accepts the name of a nullary predicate. Therefore, users have
|
|
|
|
|
to define their overall strategy in a new predicate. \Cref{lst:6-basic-complete}
|
|
|
|
|
|
|
|
|
|
shows a complete example of a basic \gls{lns} model.
|
|
|
|
|
|
|
|
|
|
\begin{listing}[t]
|
|
|
|
|
\highlightfile{assets/mzn/6_basic_complete.mzn}
|
|
|
|
|
|
|
|
|
|
\caption{\label{lst:6-basic-complete} Complete \gls{lns} example}
|
|
|
|
|
\end{listing}
|
|
|
|
|
|
|
|
|
|
We can also define round-robin and adaptive strategies using these primitives.
|
|
|
|
|
|
|
|
|
|
%\paragraph{Round-robin \gls{lns}}
|
|
|
|
|
\Cref{lst:6-round-robin} defines a round-robin \gls{lns} meta-heuristic, which cycles
|
|
|
|
|
through a list of \mzninline{N} neighbourhoods \mzninline{nbhs}. To do this, it
|
|
|
|
|
uses the decision variable \mzninline{select}. In the initialisation phase
|
|
|
|
|
(\mzninline{status()=START}), \mzninline{select} is set to \mzninline{-1}, which
|
|
|
|
|
means none of the neighbourhoods is activated. In any following restart,
|
|
|
|
|
\mzninline{select} is incremented modulo \mzninline{N}, by accessing the last
|
|
|
|
|
|
|
|
|
|
value assigned in a previous restart (\mzninline{last_val(select)}). This will
|
|
|
|
|
activate a different neighbourhood for each restart
|
|
|
|
|
(\lref{line:6:roundrobin:post}).
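
The core of this selection logic can be sketched as follows (assuming
\mzninline{select} is declared with domain \mzninline{-1..N-1}; the actual
listing may differ in its details):

\mzninline{constraint select = if status() = START then -1 else (last_val(select) + 1) mod N endif;}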
|
|
|
|
|
|
|
|
|
|
|
|
|
|
meta-heuristic}
|
|
|
|
|
\end{listing}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
%\paragraph{Adaptive \gls{lns}}
|
|
|
|
|
For adaptive \gls{lns}, a simple strategy is to change the size of the neighbourhood
|
|
|
|
|
depending on whether the previous size was successful or not.
|
|
|
|
|
\Cref{lst:6-adaptive} shows an adaptive version of the
|
|
|
|
|
\mzninline{uniformNeighbourhood} that increases the number of free variables
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\subsection{Meta-heuristics}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The \gls{lns} strategies we have seen so far rely on the default behaviour of
|
|
|
|
|
\minizinc\ solvers to use branch-and-bound for optimisation: when a new solution
|
|
|
|
|
is found, the solver adds a constraint to the remainder of the search to only
|
|
|
|
|
accept better solutions, as defined by the objective function in the
|
|
|
|
|
\mzninline{minimize} or \mzninline{maximize} clause of the \mzninline{solve}
|
|
|
|
|
|
|
|
|
|
item. When combined with restarts and \gls{lns}, this is equivalent to a simple
|
|
|
|
|
hill-climbing meta-heuristic.
|
|
|
|
|
|
|
|
|
|
We can use the constructs introduced above to implement alternative
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\highlightfile{assets/mzn/6_simulated_annealing.mzn}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\section{Compilation of Meta-Search}
|
|
|
|
|
\label{sec:6-solver-extension}
|
|
|
|
|
|
|
|
|
|
The neighbourhoods defined in the previous section can be executed with
|
|
|
|
|
\minisearch\ by adding support for the \mzninline{status} and
|
|
|
|
|
|
|
|
|
|
\mzninline{last_val} built-in functions, and by defining the main restart loop.
|
|
|
|
|
The \minisearch{} evaluator will then call a solver to produce a solution, and
|
|
|
|
|
evaluate the neighbourhood predicate, incrementally producing new \flatzinc\ to
|
|
|
|
|
be added to the next round of solving.
|
|
|
|
|
|
|
|
|
|
While this is a viable approach, our goal is to keep the compiler and solver
|
|
|
|
|
|
|
|
|
|
separate, by embedding the entire \gls{lns} specification into the \flatzinc\ that is
|
|
|
|
|
passed to the solver.
|
|
|
|
|
|
|
|
|
|
This section introduces such a compilation approach. It only requires simple
|
|
|
|
evaluation is performed by hijacking the solver's own capabilities: It will
|
|
|
|
|
automatically perform the evaluation of the new functions by propagating the new
|
|
|
|
|
constraints.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
To compile an \gls{lns} specification to standard \flatzinc, the \minizinc\ compiler
|
|
|
|
|
performs four simple steps:
|
|
|
|
|
|
|
|
|
|
\begin{enumerate}
|
|
|
|
|
|
|
|
|
predicate \mzninline{X}.
|
|
|
|
|
\item Inside predicate \mzninline{X} and any other predicate called
|
|
|
|
|
recursively from \mzninline{X}: treat any call to built-in functions
|
|
|
|
|
|
|
|
|
|
\mzninline{sol}, \mzninline{status}, and \mzninline{last_val} as
|
|
|
|
|
returning a \mzninline{var} instead of a \mzninline{par} value; and
|
|
|
|
|
rename calls to random functions, e.g., \mzninline{uniform} to
|
|
|
|
|
\mzninline{uniform_nbh}, in order to distinguish them from their
|
|
|
|
|
|
|
|
|
\end{listing}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\paragraph{\mzninline{sol} and \mzninline{last_val}}
|
|
|
|
|
|
|
|
|
|
Since \mzninline{sol} is overloaded for different variable types and \flatzinc\
|
|
|
|
|
does not support overloading, we produce type-specific built-ins for every type
|
|
|
|
To improve the compilation of the model further, we use the declared bounds of
|
|
|
|
|
the argument (\mzninline{lb(x)..ub(x)}) to constrain the variable returned by
|
|
|
|
|
\mzninline{sol}. This bounds information is important for the compiler to be
|
|
|
|
|
able to generate the most efficient \flatzinc\ code for expressions involving
|
|
|
|
|
|
|
|
|
|
\mzninline{sol}. The compilation of \mzninline{last_val} is similar to that for
|
|
|
|
|
\mzninline{sol}.
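
For the integer case, the library definition of \mzninline{sol} might therefore
look roughly as follows (the name \mzninline{int_sol} for the solver-level
built-in is our assumption):

\mzninline{function var int: sol(var int: x) = let { var lb(x)..ub(x): s; constraint int_sol(x, s); } in s;}

The declared bounds of \mzninline{x} become the domain of the returned
variable, while the actual binding to the incumbent solution is left to the
solver-level constraint.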
|
|
|
|
|
|
|
|
|
|
\paragraph{Random number functions}
|
|
|
|
Calls to the random number functions have been renamed by appending
|
|
|
|
|
Calls to the random number functions have been renamed by appending
|
|
|
|
|
\texttt{\_nbh}, so that the compiler does not simply evaluate them statically.
|
|
|
|
|
The definition of these new functions follows the same pattern as for
|
|
|
|
|
|
|
|
|
|
\mzninline{sol}, \mzninline{status}, and \mzninline{last_val}. The MiniZinc
|
|
|
|
|
definition of the \mzninline{uniform_nbh} function is shown in
|
|
|
|
|
\Cref{lst:6-int-rnd}.%
|
|
|
|
|
\footnote{Random number functions need to be marked as \mzninline{::impure} for
|
|
|
|
|
|
|
|
|
|
the compiler not to apply \gls{cse} \autocite{stuckey-2013-functions} if they
|
|
|
|
|
are called multiple times with the same arguments.}%
|
|
|
|
|
Note that the function accepts variable arguments \mzninline{l} and
|
|
|
|
|
\mzninline{u}, so that it can be used in combination with other functions, such
|
|
|
|
|
as \mzninline{sol}.
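
This makes it possible, for example, to sketch a neighbourhood that reassigns a
variable to a new value close to its value in the previous solution (assuming
\mzninline{y} is a \mzninline{var float}):

\mzninline{constraint y = uniform_nbh(sol(y) - 0.5, sol(y) + 0.5);}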
|
|
|
|
|
|
|
|
|
\mzninline{uniform_nbh} function for floats}
|
|
|
|
|
\end{listing}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\subsection{Solver support for \gls{lns} \glsentrytext{flatzinc}}
|
|
|
|
|
|
|
|
|
|
We will now show the minimal extensions required from a solver to interpret the
|
|
|
|
|
|
|
|
|
|
new \flatzinc\ constraints and, consequently, to execute \gls{lns} definitions
|
|
|
|
|
expressed in \minizinc.
|
|
|
|
|
|
|
|
|
|
First, the solver needs to parse and support the restart annotations
|
|
|
|
|
of~\cref{lst:6-restart-ann}. Many solvers already support all this
|
|
|
|
|
functionality. Second, the solver needs to be able to parse the new constraints
|
|
|
|
|
|
|
|
|
|
\mzninline{status}, and all versions of \mzninline{sol}, \mzninline{last_val},
|
|
|
|
|
and random number functions like \mzninline{float_uniform}. In addition, for the
|
|
|
|
|
new constraints the solver needs to:
|
|
|
|
|
\begin{itemize}
|
|
|
|
|
|
|
|
|
\item \mzninline{sol(x,sx)} (variants): constrain \mzninline{sx} to be equal
|
|
|
|
|
to the value of \mzninline{x} in the incumbent solution. If there is no
|
|
|
|
|
incumbent solution, it has no effect.
|
|
|
|
|
|
|
|
|
|
\item \mzninline{last_val(x,lx)} (variants): constrain \mzninline{lx} to take
|
|
|
|
|
the last value assigned to \mzninline{x} during search. If no value was
|
|
|
|
|
ever assigned, it has no effect. Note that many solvers (in particular
|
|
|
|
|
|
|
|
|
|
SAT and LCG solvers) already track \mzninline{last_val} for their
|
|
|
|
|
variables for use in search. To support \gls{lns} a solver must at least
|
|
|
|
|
track the \emph{last value} of each of the variables involved in such a
|
|
|
|
|
constraint. This is straightforward by using the \mzninline{last_val}
|
|
|
|
|
propagator itself. It wakes up whenever the first argument is fixed, and
|
|
|
|
|
updates the last value (a non-backtrackable value).
|
|
|
|
|
\item random number functions: fix their variable argument to a random number
|
|
|
|
against being invoked before \mzninline{status()!=START}, since no solution
has been recorded yet, but we use this simple example to illustrate
|
|
|
|
|
how these Boolean conditions are compiled and evaluated.
|
|
|
|
|
|
|
|
|
|
\section{An Incremental Interface for Constraint Modelling Languages}
|
|
|
|
|
\label{sec:6-incremental-compilation}
|
|
|
|
|
|
|
|
|
|
In order to support incremental flattening, the \nanozinc\ interpreter must be
|
|
|
|
|
able to process \nanozinc\ calls \emph{added} to an existing \nanozinc\ program,
|
|
|
|
|
as well as to \emph{remove} calls from an existing \nanozinc\ program. Adding new
|
|
|
|
|
calls is straightforward, since \nanozinc\ is already processed call-by-call.
|
|
|
|
|
|
|
|
|
|
Removing a call, however, is not so simple. When we remove a call, all effects
|
|
|
|
|
the call had on the \nanozinc\ program have to be undone, including results of
|
|
|
|
|
propagation, \gls{cse} and other simplifications.
|
|
|
|
|
|
|
|
|
|
\begin{example}\label{ex:6-incremental}
|
|
|
|
|
Consider the following \minizinc\ fragment:
|
|
|
|
|
|
|
|
|
|
\highlightfile{assets/mzn/6_incremental.mzn}
|
|
|
|
|
|
|
|
|
|
After evaluating the first constraint, the domain of \mzninline{x} is changed to
|
|
|
|
|
be less than 10. Evaluating the second constraint causes the domain of
|
|
|
|
|
\mzninline{y} to be less than 9. If we now, however, try to remove the first
|
|
|
|
|
constraint, it is not just the direct inference on the domain of \mzninline{x}
|
|
|
|
|
that has to be undone, but also any further effects of those changes -- in this
|
|
|
|
|
case, the changes to the domain of \mzninline{y}.
|
|
|
|
|
\end{example}
|
|
|
|
|
|
|
|
|
|
Due to this complex interaction between calls, we only support the removal of
|
|
|
|
|
calls in reverse chronological order, also known as \textit{backtracking}. The
|
|
|
|
|
common way of implementing backtracking is using a \textit{trail} data
|
|
|
|
|
structure~\autocite{warren-1983-wam}. The trail records all changes to the
|
|
|
|
|
\nanozinc\ program:
|
|
|
|
|
|
|
|
|
|
\begin{itemize}
|
|
|
|
|
\item the addition or removal of new variables or constraints,
|
|
|
|
|
\item changes made to the domains of variables,
|
|
|
|
|
\item additions to the \gls{cse} table, and
|
|
|
|
|
\item substitutions made due to equality propagation.
|
|
|
|
|
\end{itemize}
|
|
|
|
|
|
|
|
|
|
These changes can be caused by the evaluation of a call, propagation, or \gls{cse}.
|
|
|
|
|
When a call is removed, the corresponding changes can now be undone by
|
|
|
|
|
reversing any action recorded on the trail up to the point where the call was
|
|
|
|
|
added.
|
|
|
|
|
|
|
|
|
|
In order to limit the amount of trailing required, the programmer must create
|
|
|
|
|
explicit \textit{choice points} to which the system state can be reset. In
|
|
|
|
|
particular, this means that if no choice point was created before the initial
|
|
|
|
|
model was flattened, then this flattening can be performed without any
|
|
|
|
|
trailing.
|
|
|
|
|
|
|
|
|
|
\begin{example}\label{ex:6-trail}
|
|
|
|
|
Let us look again at the resulting \nanozinc\ code from \Cref{ex:absreif}:
|
|
|
|
|
|
|
|
|
|
% \highlightfile{assets/mzn/6_abs_reif_result.mzn}
|
|
|
|
|
|
|
|
|
|
Assume that we added a choice point before posting the constraint
|
|
|
|
|
\mzninline{c}. Then the trail stores the \emph{inverse} of all modifications
|
|
|
|
|
that were made to the \nanozinc\ as a result of \mzninline{c} (where
|
|
|
|
|
$\mapsfrom$ denotes restoring an identifier, and $\lhd$ \texttt{+}/\texttt{-}
|
|
|
|
|
respectively denote attaching and detaching constraints):
|
|
|
|
|
|
|
|
|
|
% \highlightfile{assets/mzn/6_abs_reif_trail.mzn}
|
|
|
|
|
|
|
|
|
|
To reconstruct the \nanozinc\ program at the choice point, we simply apply
|
|
|
|
|
the changes recorded in the trail, in reverse order.
|
|
|
|
|
\end{example}
|
|
|
|
|
|
|
|
|
|
\subsection{Incremental Solving}
|
|
|
|
|
|
|
|
|
|
Ideally, the incremental changes made by the interpreter would also be applied
|
|
|
|
|
incrementally to the solver. This requires the solver to support both the
|
|
|
|
|
dynamic addition and removal of variables and constraints. While some solvers
|
|
|
|
|
can support this functionality, most solvers have limitations. The system can
|
|
|
|
|
therefore support solvers with different levels of an incremental interface:
|
|
|
|
|
|
|
|
|
|
\begin{itemize}
|
|
|
|
|
\item Using a non-incremental interface, the solver is reinitialised with the
|
|
|
|
|
updated \nanozinc\ program every time. In this case, we still get a
|
|
|
|
|
performance benefit from the improved flattening time, but not from
|
|
|
|
|
incremental solving.
|
|
|
|
|
\item Using a \textit{warm-starting} interface, the solver is reinitialised
|
|
|
|
|
with the updated program as above, but it is also given a previous solution
|
|
|
|
|
to initialise some internal data structures. In particular for mathematical
|
|
|
|
|
programming solvers, this can result in dramatic performance gains compared
|
|
|
|
|
to ``cold-starting'' the solver every time.
|
|
|
|
|
\item Using a fully incremental interface, the solver is instructed to apply
|
|
|
|
|
the changes made by the interpreter. In this case, the trail data structure
|
|
|
|
|
is used to compute the set of \nanozinc\ changes since the last choice
|
|
|
|
|
point.
|
|
|
|
|
\end{itemize}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\section{Experiments}
|
|
|
|
|
\label{sec:6-experiments}
|
|
|
|
|
|
|
|
|
curriculum subject to load limits on the number of courses for each period,
|
|
|
|
|
prerequisites for courses, and preferences of teaching periods by teaching
|
|
|
|
|
staff. It has been shown~\autocite{dekker-2018-mzn-lns} that Large Neighbourhood
|
|
|
|
|
|
|
|
|
|
Search (\gls{lns}) is a useful meta-heuristic for quickly finding high quality
|
|
|
|
|
solutions to this problem. In \gls{lns}, once an initial (sub-optimal) solution is
|
|
|
|
|
found, constraints are added to the problem that restrict the search space to a
|
|
|
|
|
\textit{neighbourhood} of the previous solution. After this neighbourhood has
|
|
|
|
|
been explored, the constraints are removed, and constraints for a different
|
|
|
|
value in the previous solution. With the remaining $20\%$, the variable is
|
|
|
|
|
unconstrained and will be part of the search for a better solution.
|
|
|
|
|
|
|
|
|
|
In a non-incremental architecture, we would re-flatten the original model plus
|
|
|
|
|
|
|
|
|
|
the neighbourhood constraint for each iteration of the \gls{lns}. In the incremental
|
|
|
|
|
\nanozinc\ architecture, we can easily express \gls{lns} as a repeated addition and
|
|
|
|
|
retraction of the neighbourhood constraints. We implemented both approaches
|
|
|
|
|
using the \nanozinc\ prototype, with the results shown in \Cref{fig:6-gbac}. The
|
|
|
|
|
incremental \nanozinc\ translation shows a 12x speedup compared to re-compiling
|
|
|
|
|
|
|
|
|
\includegraphics[width=0.5\columnwidth]{assets/img/6_gbac}
|
|
|
|
|
\caption{\label{fig:6-gbac}A run-time performance comparison between incremental
|
|
|
|
|
processing (Incr.) and re-evaluation (Redo) of 5 GBAC \minizinc\ instances
|
|
|
|
|
|
|
|
|
|
in the application of \gls{lns} on a 3.4 GHz Quad-Core Intel Core i5 using the
|
|
|
|
|
Gecode 6.1.2 solver. Each run consisted of 2500 iterations of applying
|
|
|
|
|
neighbourhood predicates. Reported times are averages of 10 runs.}
|
|
|
|
|
\end{figure}
|
|
|
|
|
|
|
|
|
\newcommand{\chuffedStd}{\textsf{chuffed}}
|
|
|
|
|
\newcommand{\chuffedMzn}{\textsf{chuffed-fzn}}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
We will now show that a solver that evaluates the compiled \flatzinc\ \gls{lns}
|
|
|
|
|
specifications can (a) be effective and (b) incur only a small overhead compared
|
|
|
|
|
to a dedicated implementation of the neighbourhoods.
|
|
|
|
|
|
|
|
|
Gecode~\autocite{gecode-2021-gecode}. The resulting solver (\gecodeMzn in the tables
|
|
|
|
|
below) has been instrumented to also output the domains of all model variables
|
|
|
|
|
after propagating the new special constraints. We implemented another extension
|
|
|
|
|
to Gecode (\gecodeReplay) that simply reads the stream of variable domains for
|
|
|
|
|
|
|
|
|
|
each restart, essentially replaying the \gls{lns} of \gecodeMzn without incurring any
|
|
|
|
|
overhead for evaluating the neighbourhoods or handling the additional variables
|
|
|
|
|
and constraints. Note that this is a conservative estimate of the overhead:
|
|
|
|
|
|
|
|
|
|
\gecodeReplay has to perform \emph{less} work than any real \gls{lns} implementation.
|
|
|
|
|
|
|
|
|
|
In addition, we also present benchmark results for the standard release of
|
|
|
|
|
|
|
|
|
|
Gecode 6.0 without \gls{lns} (\gecodeStd); as well as \chuffedStd, the development
|
|
|
|
|
version of Chuffed; and \chuffedMzn, Chuffed performing \gls{lns} with FlatZinc
|
|
|
|
|
neighbourhoods. These experiments illustrate that the \gls{lns} implementations indeed
|
|
|
|
|
perform well compared to the standard solvers.\footnote{Our implementations are
|
|
|
|
|
available at
|
|
|
|
|
\texttt{\justify{}https://github.com/Dekker1/\{libminizinc,gecode,chuffed\}} on branches
|
|
|
|
|
containing the keyword \texttt{on\_restart}.} All experiments were run on a
|
|
|
|
|
single core of an Intel Core i5 CPU @ 3.4 GHz with 4 cores and 16 GB RAM running
|
|
|
|
|
|
|
|
|
|
MacOS High Sierra. \gls{lns} benchmarks are repeated with 10 different random seeds
|
|
|
|
|
and the average is shown. The overall timeout for each run is 120 seconds.
|
|
|
|
|
|
|
|
|
|
We ran experiments for three models from the MiniZinc
|
|
|
|
For each solving method we measured the average integral of the model objective
|
|
|
|
|
after finding the initial solution ($\intobj$), the average best objective found
|
|
|
|
|
($\minobj$), and the standard deviation of the best objective found in
|
|
|
|
|
percentage (\%), which is shown as the superscript on $\minobj$ when running
|
|
|
|
|
|
|
|
|
|
\gls{lns}.
|
|
|
|
|
%and the average number of nodes per one second (\nodesec).
|
|
|
|
|
The underlying search strategy used is the fixed search strategy defined in the
|
|
|
|
|
model. For each model we use a round robin evaluation (\cref{lst:6-round-robin})
|
|
|
|
The results for \texttt{gbac} in \cref{tab:6-gbac} show that the overhead
|
|
|
|
|
introduced by \gecodeMzn w.r.t.~\gecodeReplay is quite low, and both their
|
|
|
|
|
results are much better than the baseline \gecodeStd. Since learning is not very
|
|
|
|
|
effective for \texttt{gbac}, the performance of \chuffedStd is inferior to
|
|
|
|
|
|
|
|
|
|
Gecode. However, \gls{lns} again significantly improves over standard Chuffed.
|
|
|
|
|
|
|
|
|
|
\subsubsection{\texttt{steelmillslab}}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\caption{\label{tab:6-steelmillslab}\texttt{steelmillslab} benchmarks}
|
|
|
|
|
\end{table}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
For this problem a solution with zero wastage is always optimal. The use of \gls{lns}
|
|
|
|
|
makes these instances easy, as all the \gls{lns} approaches find optimal solutions. As
|
|
|
|
|
\cref{tab:6-steelmillslab} shows, \gecodeMzn is again slightly slower than
|
|
|
|
|
\gecodeReplay (the integral is slightly larger). While \chuffedStd significantly
|
|
|
|
|
|
|
|
|
|
outperforms \gecodeStd on this problem, once we use \gls{lns}, the learning in
|
|
|
|
|
\chuffedMzn is not advantageous compared to \gecodeMzn or \gecodeReplay. Still,
|
|
|
|
|
\chuffedMzn outperforms \chuffedStd by always finding an optimal solution.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\cref{tab:6-rcpsp-wet} shows that \gecodeReplay and \gecodeMzn perform almost
|
|
|
|
|
identically, and substantially better than baseline \gecodeStd for these
|
|
|
|
|
instances. The baseline learning solver \chuffedStd is best overall on the easy
|
|
|
|
|
|
|
|
|
|
examples, but \gls{lns} makes it much more robust. The poor performance of \chuffedMzn
|
|
|
|
|
on the last instance is due to the fixed search, which limits the usefulness of
|
|
|
|
|
nogood learning.
|
|
|
|
|
|
|
|
|
|
\subsubsection{Summary}
|
|
|
|
|
|
|
|
|
|
The results show that \gls{lns} outperforms the baseline solvers, except for
|
|
|
|
|
benchmarks where we can quickly find and prove optimality.
|
|
|
|
|
|
|
|
|
|
However, the main result from these experiments is that the overhead introduced
|
|
|
|
|
|
|
|
|
|
by our \flatzinc\ interface, when compared to an optimal \gls{lns} implementation, is
|
|
|
|
|
relatively small. We have additionally calculated the rate of search nodes
|
|
|
|
|
explored per second and, across all experiments, \gecodeMzn achieves around 3\%
|
|
|
|
|
fewer nodes per second than \gecodeReplay. This overhead is caused by
|
|
|
|
|
propagating the additional constraints in \gecodeMzn. Overall, the experiments
|
|
|
|
|
demonstrate that the compilation approach is an effective and efficient way of
|
|
|
|
|
|
|
|
|
|
adding \gls{lns} to a modelling language with minimal changes to the solver.
|
|
|
|
|
|
|
|
|
|
\section{Conclusions}
|
|
|
|
|
\label{sec:6-conclusion}
|
|
|
|
|