%************************************************
|
|
\chapter{Incremental Constraint Modelling}\label{ch:incremental}
|
|
%************************************************
|
|
|
|
\glsreset{rbmo}
|
|
\glsreset{incremental-rewriting}
|
|
|
|
\input{chapters/5_incremental_preamble}
|
|
|
|
\section{Modelling of Restart Based Meta-Optimisation}\label{sec:inc-modelling}
|
|
|
|
This section introduces a \minizinc{} extension that enables modellers to define \gls{meta-optimization} algorithms in \minizinc{}.
|
|
This extension is based on the construct introduced in \minisearch{} \autocite{rendl-2015-minisearch}, as summarised below.
|
|
|
|
\subsection{Meta-Optimisation in MiniSearch}\label{sec:inc-minisearch}
|
|
|
|
\minisearch{} introduced a \minizinc{} extension that enables modellers to express \gls{meta-optimization} inside a \minizinc{} model.
|
|
A \gls{meta-optimization} algorithm in \minisearch{} typically solves a given \minizinc{} model, performs some calculations on the solution, adds new \constraints{} and then solves again.
|
|
|
|
Most \gls{meta-optimization} definitions in \minisearch{} consist of two parts.
|
|
The first part is a declarative definition of any restriction to the \gls{search-space} that the \gls{meta-optimization} algorithm can apply, called a \gls{neighbourhood}.
|
|
In \minisearch{} these definitions can make use of the following function.
|
|
|
|
\begin{mzn}
|
|
function int: sol(var int: x)
|
|
\end{mzn}
|
|
|
|
It returns the value that variable \mzninline{x} was assigned to in the previous \gls{sol} (similar functions are defined for Boolean, float and set variables).
|
|
This allows the \gls{neighbourhood} to be defined in terms of the previous \gls{sol}.
|
|
In addition, a \gls{neighbourhood} definition will typically make use of the random number generators available in \minizinc{}.
|
|
|
|
\Cref{lst:inc-lns-minisearch-pred} shows a simple random \gls{neighbourhood}.
|
|
For each decision variable \mzninline{x[i]}, it draws a random number from a uniform distribution.
|
|
If it exceeds the threshold \mzninline{destr_rate}, then it posts a \constraint{} forcing \mzninline{x[i]} to take the same value as in the previous \gls{sol}.
|
|
For example, \mzninline{uniform_neighbourhood(x, 0.2)} would result in each variable in the \gls{array} \mzninline{x} having a 20\% chance of being unconstrained, and an 80\% chance of being assigned to the value it had in the previous \gls{sol}.
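
For reference, such a \gls{neighbourhood} might be written along the following lines; this is only a minimal sketch of the predicate included as \cref{lst:inc-lns-minisearch-pred}, and the formulation in the listing may differ in detail.

\begin{mzn}
predicate uniform_neighbourhood(array[int] of var int: x, float: destr_rate) =
  forall(i in index_set(x)) (
    if uniform(0.0, 1.0) > destr_rate
    then x[i] = sol(x[i]) % keep the value from the previous solution
    else true             % leave x[i] unconstrained
    endif
  );
\end{mzn}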
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_lns_minisearch_pred.mzn}
|
|
\caption{\label{lst:inc-lns-minisearch-pred} A simple random \gls{lns} predicate implemented in \minisearch{}}
|
|
\end{listing}
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_lns_minisearch.mzn}
|
|
\caption{\label{lst:inc-lns-minisearch} A simple \gls{lns} \gls{meta-optimization} implemented in \minisearch{}}
|
|
\end{listing}
|
|
|
|
The second part of a \minisearch{} \gls{meta-optimization} is the \gls{meta-optimization} algorithm itself.
|
|
\Cref{lst:inc-lns-minisearch} shows a \minisearch{} implementation of a basic \gls{lns}, called \mzninline{lns}.
|
|
It performs a fixed number of iterations, each invoking the \gls{neighbourhood} predicate \mzninline{uniform_neighbourhood} in a fresh scope.
|
|
This means that the \constraints{} only affect the current loop iteration.
|
|
It then searches for a solution (\mzninline{minimize_bab}) with a given timeout.
|
|
If the search does return a new solution, then it commits to that solution and it becomes available to the \mzninline{sol} function in subsequent iterations.
|
|
The \mzninline{lns} function also posts the \constraint{} \mzninline{obj < sol(obj)}, ensuring the objective value in the next iteration is strictly better than that of the current solution.
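
To give an impression of the style of these definitions, the body of such a \minisearch{} strategy is roughly of the following form. Note that this is only a sketch: the \minisearch{} combinator names (\mzninline{repeat}, \mzninline{scope}, \mzninline{post}, \mzninline{commit}, \mzninline{time_limit}, \mzninline{print}) are recalled from the \minisearch{} literature, \mzninline{x} and \mzninline{obj} refer to model-level \variables{}, and the exact code is the one shown in \cref{lst:inc-lns-minisearch}.

\begin{mzn}
function ann: lns(int: iterations, int: ms_time) =
  repeat (i in 1..iterations) (
    scope(
      post(uniform_neighbourhood(x, 0.2)) /\  % restrict the search space
      time_limit(ms_time, minimize_bab(obj)) /\
      commit() /\ print()                     % keep the solution, if any
    ) /\
    post(obj < sol(obj))                      % require strict improvement
  );
\end{mzn}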
|
|
|
|
Although \minisearch{} enables the modeller to express \glspl{neighbourhood} in a declarative way, the definition of the \gls{meta-optimization} algorithm is rather unintuitive and difficult to debug, leading to unwieldy code even for defining simple algorithms.
|
|
Furthermore, the \minisearch{} implementation either requires a close integration of the backend \solver{} into the \minisearch{} system, or it must drive the \solver{} through the regular text-file based \flatzinc{} interface.
|
|
This leads to a significant communication overhead.
|
|
|
|
To address these two issues, we propose to keep modelling \glspl{neighbourhood} as predicates, but define \gls{meta-optimization} algorithms from an imperative perspective.
|
|
|
|
We define a few additional \minizinc{} \glspl{annotation} and functions that
|
|
|
|
\begin{itemize}
|
|
\item allow us to express important aspects of the meta-optimization algorithm in a more convenient way,
|
|
\item and enable a simple \gls{rewriting} scheme that requires no additional communication with and only small, simple extensions of the target \solver{}.
|
|
\end{itemize}
|
|
|
|
\subsection{Restart Annotation}
|
|
|
|
Instead of the complex \minisearch{} definitions, we propose to add support for \glspl{meta-optimization} that are purely based on the notion of \glspl{restart}.
|
|
A \gls{restart} happens when a solver abandons its current search efforts, returns to its initial \gls{search-space}, and begins a new exploration.
|
|
Many \gls{cp} \solvers{} already provide support for controlling their restarting behaviour.
|
|
They can restart periodically after a certain number of nodes, or restart after every \gls{sol}.
|
|
Typically, \solvers{} also support posting additional \constraints{} upon restarting that are only valid for the particular \gls{restart} (\eg{} Comet \autocite{michel-2005-comet}).
|
|
These \constraints{} are ``retracted'' for the next \gls{restart}.
|
|
|
|
In its simplest form, we can therefore implement \gls{lns} by merely specifying a \gls{neighbourhood} predicate.
|
|
We then annotate the \mzninline{solve} item with the following \gls{annotation} to indicate the predicate should be invoked upon each restart.
|
|
|
|
\begin{mzn}
|
|
solve ::on_restart(my_neighbourhood) minimize cost;
|
|
\end{mzn}
|
|
|
|
Note that \minizinc{} currently does not support passing functions or predicates as arguments.
|
|
Calling the predicate, as in \mzninline{::on_restart(my_neighbourhood())}, would not have the correct semantics.
|
|
The predicate needs to be called for each restart.
|
|
As a workaround, we currently pass the name of the predicate to be called on each restart as a string (see the definition of the new \mzninline{on_restart} annotation in \cref{lst:inc-restart-ann}).
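
For example, using this workaround, a \gls{neighbourhood} predicate named \mzninline{my_neighbourhood} would be attached to the \mzninline{solve} item as follows.

\begin{mzn}
solve ::on_restart("my_neighbourhood") minimize cost;
\end{mzn}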
|
|
|
|
The second component of our \gls{lns} definition is the ``restarting strategy'', defining how much effort the \solver{} should put into each \gls{neighbourhood} (\ie{} \gls{restart}), and when to stop the overall search. We propose adding new search \glspl{annotation} to \minizinc{} to control this behaviour.
|
|
The proposed \glspl{annotation} are shown in \cref{lst:inc-restart-ann}.
|
|
|
|
The \mzninline{restart_on_solution} annotation tells the solver to restart immediately for each \gls{sol}, rather than looking for the best one in each restart.
|
|
The \mzninline{restart_without_objective} tells it not to add \gls{bnb} \constraints{} on the \gls{objective}.
|
|
The other \mzninline{restart_X} annotations define different strategies for restarting the search when no \gls{sol} is found.
|
|
The \mzninline{timeout} annotation gives an overall time limit for the search.
|
|
Finally, the \mzninline{restart_limit} stops the search after a fixed number of \glspl{restart}.
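
As an illustration, a typical combination of these \glspl{annotation} on a \mzninline{solve} item might look as follows; the specific argument values are arbitrary, and the exact signatures are those given in \cref{lst:inc-restart-ann}.

\begin{mzn}
solve ::on_restart("my_neighbourhood")
      ::restart_constant(250)  % restart after every 250 nodes
      ::restart_on_solution    % ... and immediately after every solution
      ::timeout(120)           % overall limit on the search
      minimize cost;
\end{mzn}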
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_restart_ann.mzn}
|
|
\caption{\label{lst:inc-restart-ann} New annotations to control the restarting
|
|
behaviour}
|
|
\end{listing}
|
|
|
|
\subsection{Advanced Meta-Optimisation Algorithms}
|
|
|
|
Although using the \gls{restart} \glspl{annotation} by themselves allows us to run a basic \gls{lns}, more advanced \gls{meta-optimization} algorithms require more than reapplying the same \gls{neighbourhood} time after time.
|
|
It is, for example, often beneficial to use several \gls{neighbourhood} definitions within the same \instance{}.
|
|
Different \glspl{neighbourhood} may be able to improve different aspects of a \gls{sol} at different phases of the search.
|
|
Adaptive \gls{lns} \autocite{ropke-2006-adaptive, pisinger-2007-heuristic} is a variant of \gls{lns} that keeps track of the \glspl{neighbourhood} that led to improvements and favours them for future iterations.
|
|
This has proven to be a highly successful approach to \gls{lns}.
|
|
However, even a simpler scheme that applies several \glspl{neighbourhood} in a round-robin fashion can be effective.
|
|
|
|
In \minisearch{}, these adaptive or round-robin approaches can be implemented using ``state variables'', which support destructive update (overwriting the value they store).
|
|
In this way, the \minisearch{} strategy can store values to be used in later iterations.
|
|
We use the ``\solver{} state'' instead.
|
|
All values are stored in normal \variables{}.
|
|
We then access them using two simple functions that reveal the solver state of the previous \gls{restart}.
|
|
This approach is sufficient for expressing many \gls{meta-optimization} algorithms, and its implementation is much simpler.
|
|
|
|
\paragraph{State access and initialisation}
|
|
|
|
The state access functions are defined in \cref{lst:inc-state-access}.
|
|
Function \mzninline{status} returns the status of the previous restart, namely:
|
|
|
|
\begin{description}
|
|
\item[\mzninline{START}] there has been no \gls{restart} yet;
|
|
\item[\mzninline{UNSAT}] the \gls{search-space} of the last \gls{restart} was \gls{unsat};
|
|
\item[\mzninline{SAT}] the last \gls{restart} found a \gls{sol};
|
|
\item[\mzninline{OPT}] the last \gls{restart} found and proved an \gls{opt-sol} in its \gls{search-space};
|
|
\item[\mzninline{UNKNOWN}] the last \gls{restart} did not find a solution, but did not exhaust its \gls{search-space}.
|
|
\end{description}
|
|
|
|
\noindent{}Function \mzninline{last_val} allows modellers to access the last value assigned to a \variable{}.
|
|
The value is undefined when \mzninline{status()=START}.
|
|
Like \mzninline{sol}, it has versions for all basic \variable{} types.
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_state_access.mzn}
|
|
\caption{\label{lst:inc-state-access} Functions for accessing previous \solver{} states}
|
|
\end{listing}
|
|
|
|
In order to be able to initialise the \variables{} used for state access, we interpret \mzninline{on_restart} also before the initial search, using the same semantics.
|
|
As such, the predicate is also called before the first ``real'' \gls{restart}.
|
|
Any \constraint{} posted by the predicate will be retracted for the next \gls{restart}.
|
|
|
|
\paragraph{Parametric neighbourhood selection predicates}
|
|
|
|
We define standard \gls{neighbourhood} selection strategies as predicates that are parametric over the \glspl{neighbourhood} they should apply.
|
|
For example, we can define a strategy \mzninline{basic_lns} that applies an \gls{lns} \gls{neighbourhood}.
|
|
Since \mzninline{on_restart} now also includes the initial search, we apply the \gls{neighbourhood} only if the current status is not \mzninline{START}, as shown in the following predicate.
|
|
|
|
\begin{mzn}
|
|
predicate basic_lns(var bool: nbh) = (status()!=START -> nbh);
|
|
\end{mzn}
|
|
|
|
In order to use this predicate with the \mzninline{on_restart} annotation, we cannot simply call a \gls{neighbourhood} predicate as follows.
|
|
|
|
\begin{mzn}
|
|
basic_lns(uniform_neighbourhood(x, 0.2))
|
|
\end{mzn}
|
|
|
|
\noindent{}Calling \mzninline{uniform_neighbourhood} like this would result in a single evaluation of the predicate, since \minizinc{} employs a call-by-value evaluation strategy.
|
|
Furthermore, the \mzninline{on_restart} annotation only accepts the name of a nullary predicate.
|
|
Therefore, users have to define their overall strategy in a new predicate.
|
|
\Cref{lst:inc-basic-complete} shows a complete example of a basic \gls{lns} model.
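
In essence, the wrapper predicate and \mzninline{solve} item used in such a model are of the following form; the names \mzninline{my_lns} and \mzninline{cost} are merely illustrative.

\begin{mzn}
predicate my_lns() = basic_lns(uniform_neighbourhood(x, 0.2));

solve ::on_restart("my_lns") minimize cost;
\end{mzn}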
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_basic_complete.mzn}
|
|
\caption{\label{lst:inc-basic-complete} Complete \gls{lns} example}
|
|
\end{listing}
|
|
|
|
We can also define round-robin and adaptive strategies using these primitives.
|
|
\Cref{lst:inc-round-robin} defines a round-robin \gls{lns} meta-heuristic, which cycles through a list of \mzninline{N} neighbourhoods \mzninline{nbhs}.
|
|
To do this, it uses the \variable{} \mzninline{select}.
|
|
In the initialisation phase (\mzninline{status()=START}), \mzninline{select} is set to \mzninline{-1}, which means none of the \glspl{neighbourhood} is activated.
|
|
In any following \gls{restart}, \mzninline{select} is incremented modulo \mzninline{N}, by accessing the last value assigned in a previous \gls{restart}.
|
|
This will activate a different \gls{neighbourhood} for each subsequent \gls{restart} (\lref{line:6:roundrobin:post}).
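
A minimal sketch of such a round-robin predicate, consistent with this description, is shown below; the actual definition in \cref{lst:inc-round-robin} may differ in its details.

\begin{mzn}
predicate round_robin(array[int] of var bool: nbhs) =
  let {
    int: N = length(nbhs);
    var -1..N-1: select;  % index of the neighbourhood to activate
  } in if status() = START then
    select = -1           % initialisation phase: no neighbourhood active
  else
    select = (last_val(select) + 1) mod N /\
    forall(i in 1..N) ((select = i - 1) -> nbhs[i])  % post selected neighbourhood
  endif;
\end{mzn}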
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_round_robin.mzn}
|
|
\caption{\label{lst:inc-round-robin} A predicate providing the round-robin
|
|
meta-heuristic}
|
|
\end{listing}
|
|
|
|
%\paragraph{Adaptive \gls{lns}}
|
|
|
|
For adaptive \gls{lns}, a simple strategy is to change the size of the \gls{neighbourhood} depending on whether the previous size was successful or not.
|
|
\Cref{lst:inc-adaptive} shows an adaptive version of the \mzninline{uniform_neighbourhood} that increases the number of free \variables{} when the previous \gls{restart} failed, and decreases it when it succeeded, within the bounds \([0.6,0.95]\).
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_adaptive.mzn}
|
|
\caption{\label{lst:inc-adaptive} A simple adaptive neighbourhood}
|
|
\end{listing}
|
|
|
|
\subsection{Optimisation strategies}
|
|
|
|
The \gls{meta-optimization} algorithms we have seen so far rely on the default behaviour of \minizinc{} \solvers{} to use \gls{bnb} for optimisation: when a new \gls{sol} is found, the \solver{} adds a \constraint{} to the remainder of the search to only accept better \glspl{sol}, as defined by the \gls{objective} in the \mzninline{minimize} or \mzninline{maximize} clause of the \mzninline{solve} item.
|
|
When combined with \glspl{restart} and \gls{lns}, this is equivalent to a simple hill-climbing meta-heuristic.
|
|
|
|
We can use the constructs introduced above to implement alternative meta-heuristics such as simulated annealing.
|
|
We use the \mzninline{restart_without_objective} \gls{annotation}, in particular, to tell the solver not to add the \gls{bnb} \constraint{} on \gls{restart}.
|
|
It will still use the declared \gls{objective} to decide whether a new \gls{sol} is globally the best one seen so far, and only output such \glspl{sol}.
|
|
This maintains the convention of \minizinc{} \solvers{} that the last \gls{sol} printed at any point in time is the currently best known one.
|
|
|
|
With \mzninline{restart_without_objective}, the \gls{restart} predicate is now responsible for constraining the \gls{objective}.
|
|
Note that a simple hill-climbing (for minimisation) can still be defined easily in this context as follows.
|
|
|
|
\begin{mzn}
|
|
predicate hill_climbing() = status() != START -> _objective < sol(_objective);
|
|
\end{mzn}
|
|
|
|
It takes advantage of the fact that the declared \gls{objective} is available through the built-in \variable{} \mzninline{_objective}.
|
|
A more interesting example is a simulated annealing strategy.
|
|
When using this strategy, the \glspl{sol} that the \solver{} finds are no longer required to steadily improve in quality.
|
|
Instead, we ask the \solver{} to find a \gls{sol} that is a significant improvement over the previous \gls{sol}.
|
|
Over time, we decrease the amount by which we require the \gls{sol} to improve, until we are just looking for any improvement.
|
|
This \gls{meta-optimization} can help improve the quality of \glspl{sol} quickly, and thereby reach the \gls{opt-sol} sooner.
|
|
\Cref{lst:inc-sim-ann} shows how this strategy is also easy to express using \gls{rbmo}.
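
A simplified sketch of such a strategy, using an integer improvement gap that shrinks geometrically rather than a temperature schedule, could look as follows; the actual predicate is the one given in \cref{lst:inc-sim-ann}.

\begin{mzn}
predicate my_simulated_annealing(int: init_gap) =
  let {
    var 1..init_gap: gap;  % required improvement over the previous solution
  } in if status() = START then
    gap = init_gap
  else
    gap = max(1, (last_val(gap) * 9) div 10) /\  % shrink the gap over time
    _objective <= sol(_objective) - gap
  endif;
\end{mzn}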
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_sim_ann.mzn}
|
|
\caption{\label{lst:inc-sim-ann}A predicate implementing a simulated annealing search.}
|
|
\end{listing}
|
|
|
|
So far, the algorithms we have used have either been forms of incomplete search, or we have trusted the \solver{} to know when to stop searching.
|
|
However, for the following algorithms the \solver{} will (or should) not be able to determine whether the search is complete.
|
|
Instead, we introduce the following function that can be used to signal to the solver that the search is complete.
|
|
|
|
\begin{mzn}
|
|
function var bool: complete();
|
|
\end{mzn}
|
|
|
|
\noindent{}When the result of this function is forced to be \mzninline{true}, the search is complete.
|
|
If any \gls{sol} was found, it is declared an \gls{opt-sol}.
|
|
|
|
Using the same methods it is also possible to describe optimisation strategies with multiple \glspl{objective}.
|
|
An example of such a strategy is lexicographic search.
|
|
Lexicographic search can be employed when there is a strict order between the importance of different \variables{}.
|
|
It requires that, once a \gls{sol} is found, each subsequent \gls{sol} must either improve the first \gls{objective}, or have the same value for the first \gls{objective} and improve the second \gls{objective}, or have the same value for the first two \glspl{objective} and improve the third \gls{objective}, and so on.
|
|
A predicate that implements lexicographic search is shown in \cref{lst:inc-lex-min}.
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_lex_min.mzn}
|
|
\caption{\label{lst:inc-lex-min}A predicate implementing lexicographic search.}
|
|
\end{listing}
|
|
|
|
The lexicographic search changes the \gls{objective} at each stage in the evaluation.
|
|
Initially, the search is in stage one: it tries to find an \gls{opt-sol} for the first \gls{objective}.
|
|
Whenever we find a \gls{sol} we stay in the same stage, but constrain the \gls{search-space} to find a better \gls{sol}.
|
|
When there exists no better \gls{sol}, we move on to the next stage, trying to find the \gls{opt-sol} for the next \gls{objective} while the earlier \glspl{objective} maintain their value.
|
|
This process is repeated until we reach the final \gls{objective}.
|
|
When we reach a stage without an associated \gls{objective}, we are guaranteed to have reached the lexicographic optimum.
|
|
Our search is thus complete.
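
A sketch of a predicate following this staged scheme is shown below. It is only meant to illustrate how \mzninline{status}, \mzninline{last_val}, \mzninline{sol}, and \mzninline{complete} work together; the actual definition in \cref{lst:inc-lex-min} may differ, for instance in how it connects to \mzninline{_objective}.

\begin{mzn}
predicate my_lex_minimize(array[int] of var int: o) =
  let {
    var min(index_set(o))..max(index_set(o)) + 1: stage;
  } in if status() = START then
    stage = min(index_set(o))
  elseif status() = UNSAT then
    stage = last_val(stage) + 1            % previous objective proven optimal
  else
    stage = last_val(stage) /\
    o[stage] < sol(o[stage])               % improve the current objective
  endif
  /\ forall(i in index_set(o)) (i < stage -> o[i] = sol(o[i]))
  /\ (stage > max(index_set(o)) -> complete());
\end{mzn}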
|
|
|
|
There is not always a clear order of importance for different \glspl{objective} in a problem.
|
|
We can instead look for a number of diverse \glspl{sol} and allow the user to pick the most acceptable options.
|
|
The predicate in \cref{lst:inc-pareto} shows a \gls{meta-optimization} for the Pareto optimality of a pair of \glspl{objective}.
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_pareto.mzn}
|
|
\caption{\label{lst:inc-pareto}A predicate implementing pareto optimality search for two \glspl{objective}.}
|
|
\end{listing}
|
|
|
|
In this implementation we keep track of the number of \glspl{sol} found so far using \mzninline{nsol}.
|
|
There is a maximum number we can handle, \mzninline{ms}.
|
|
At the start the number of solutions is zero.
|
|
If we find no \glspl{sol}, then we finish the entire search.
|
|
Otherwise, we record that we have one more \gls{sol}.
|
|
We store the \gls{sol} values in \mzninline{s1} and \mzninline{s2} \glspl{array}.
|
|
Before each \gls{restart} we add \constraints{} removing Pareto dominated \glspl{sol}, based on each previous \gls{sol}.
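
The following is a rough sketch of how such a predicate can be put together from the primitives introduced above; it is not the exact code of \cref{lst:inc-pareto}, and in particular the bookkeeping of the recorded \glspl{sol} is only indicative.

\begin{mzn}
predicate my_pareto(var int: obj1, var int: obj2, int: ms) =
  let {
    var 0..ms: nsol;                 % number of solutions recorded so far
    array[1..ms] of var int: s1;     % recorded values of obj1
    array[1..ms] of var int: s2;     % recorded values of obj2
  } in if status() = START then
    nsol = 0
  elseif status() = UNSAT then
    complete()                       % no further non-dominated solutions exist
  else
    nsol = sol(nsol) + 1 /\
    s1[nsol] = sol(obj1) /\ s2[nsol] = sol(obj2)
  endif
  /\ forall(i in 1..ms) (            % exclude Pareto-dominated solutions
       i <= nsol -> obj1 < last_val(s1[i]) \/ obj2 < last_val(s2[i])
     );
\end{mzn}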
|
|
|
|
\section{Rewriting of Meta-Optimisation Algorithms}\label{sec:inc-solver-extension}
|
|
|
|
The \glspl{neighbourhood} defined in the previous section can be executed with \minisearch{} by adding support for the \mzninline{status} and \mzninline{last_val} built-in functions, and by defining the main \gls{restart} loop.
|
|
The \minisearch{} evaluator will then call a \solver{} to produce a \gls{sol}, and evaluate the \gls{neighbourhood} predicate, incrementally producing new \flatzinc{} to be added to the next round of solving.
|
|
|
|
While this is a viable approach, our goal is to keep \gls{rewriting} and solving separate, by embedding the entire \gls{meta-optimization} algorithm into the \gls{slv-mod}.
|
|
|
|
This section introduces such a \gls{rewriting} approach.
|
|
It only requires simple modifications of the \gls{rewriting} process, and the produced \gls{slv-mod} can be executed by standard \gls{cp} \solvers{} with a small set of simple extensions.
|
|
|
|
\subsection{Rewriting Overview}
|
|
|
|
The \gls{neighbourhood} definitions from the previous section have an important property that makes them easy to rewrite: they are defined in terms of standard \minizinc{} expressions, except for a few new functions.
|
|
When the \gls{neighbourhood} predicates are evaluated in the \minisearch{} way, the \minisearch{} \interpreter{} implements those functions.
|
|
It has to compute the correct value whenever a predicate is evaluated.
|
|
|
|
Instead, the \gls{rewriting} scheme presented below uses a limited form of partial evaluation: \parameters{} known during \gls{rewriting} will be fully evaluated; values only known during solving, such as the result of a call to any of the new functions (\mzninline{sol}, \mzninline{status}, etc.), are replaced by \variables{}.
|
|
This essentially \textbf{turns the new functions into \gls{native} \constraints{}} for the target \solver{}.
|
|
The \gls{neighbourhood} predicate can then be added as a \constraint{} to the \gls{slv-mod}.
|
|
The evaluation is performed by hijacking the solver's own capabilities: it will automatically perform the evaluation of the new functions by propagating the \gls{native} \constraints{}.
|
|
|
|
To compile a \gls{meta-optimization} algorithm to a \gls{slv-mod}, the \gls{rewriting} process performs the following four simple steps.
|
|
|
|
\begin{enumerate}
|
|
\item It replaces the annotation \mzninline{::on_restart("X")} with a call to predicate \mzninline{X}.
|
|
\item Inside predicate \mzninline{X} and any other predicate called recursively from \mzninline{X}: it must treat any call to functions \mzninline{sol}, \mzninline{status}, and \mzninline{last_val} as returning a \variable{}; and rename calls to random functions, e.g., \mzninline{uniform} to \mzninline{uniform_slv}, in order to distinguish them from their standard library versions.
|
|
\item It converts any expression containing a call from step 2 into a \variable{}, to ensure the functions are compiled as \constraints{} rather than evaluated directly during \gls{rewriting}.
|
|
\item It rewrites the resulting model using an extension of the \minizinc{} standard library that provides declarations for these functions, as defined in the following subsection.
|
|
\end{enumerate}
|
|
|
|
These transformations will not change the code of many \gls{neighbourhood} definitions, since the functions are often used in positions that accept both \parameters{} and \variables{}.
|
|
For example, the \mzninline{uniform_neighbourhood} predicate from \cref{lst:inc-lns-minisearch-pred} uses \mzninline{uniform(0.0, 1.0)} in an \mzninline{if} expression, and \mzninline{sol(x[i])} in an equality \constraint{}.
|
|
Both expressions can be rewritten when the functions return a \variable{}.
|
|
|
|
\subsection{Rewriting the new functions}
|
|
|
|
We can rewrite \instances{} that contain the new functions by extending the \minizinc{} standard library using the following definitions.
|
|
|
|
\paragraph{\mzninline{status}}
|
|
|
|
\Cref{lst:inc-status} shows the definition of the \mzninline{status} function.
|
|
It simply replaces the functional form by a predicate \mzninline{status} (declared in \lref{line:6:status}), which constrains its local variable argument \mzninline{stat} to take the status value.
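
In other words, the library definition is essentially of the following form; the sketch below assumes the status values are available as an enumerated type, and the precise code is given in \cref{lst:inc-status}.

\begin{mzn}
function var STATUS: status() =
  let {
    var STATUS: stat;
    constraint status(stat);  % native constraint; the solver fixes stat
  } in stat;
\end{mzn}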
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_status.mzn}
|
|
\caption{\label{lst:inc-status} MiniZinc definition of the \mzninline{status} function}
|
|
\end{listing}
|
|
|
|
\paragraph{\mzninline{sol} and \mzninline{last_val}}
|
|
|
|
The \mzninline{sol} function is overloaded for different types.
|
|
Our \glspl{slv-mod} do not support overloading.
|
|
Therefore, we produce type-specific \gls{native} \constraints{} for every type of \gls{native} \variable{} (\eg{} \mzninline{int_sol(x, xi)}, and \mzninline{bool_sol(x, xi)}).
|
|
The resolving of the \mzninline{sol} function into these specific \gls{native} \constraints{} is done using an overloaded definition, like the one shown in \cref{lst:inc-int-sol} for integer \variables{}.
|
|
If the value of the \variable{} has become known during \gls{rewriting}, then we use its \gls{fixed} value instead.
|
|
Otherwise, the function call is rewritten into a type specific \mzninline{int_sol} \gls{native} \constraint{} that will be executed by the solver.
|
|
We use the current \domain{} of the \variable{}, \mzninline{dom(x)}, as the \domain{} of the \variable{} returned by \mzninline{sol}.
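
Conceptually, the integer version of this definition has the following shape; this is a sketch of \cref{lst:inc-int-sol} rather than its exact code.

\begin{mzn}
function var int: sol(var int: x) =
  if is_fixed(x) then fix(x)    % value already known during rewriting
  else let {
    var dom(x): xs;             % takes the value of x in the previous solution
    constraint int_sol(x, xs);  % native constraint evaluated by the solver
  } in xs endif;
\end{mzn}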
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_sol_function.mzn}
|
|
\caption{\label{lst:inc-int-sol} MiniZinc definition of the \mzninline{sol}
|
|
function for integer variables}
|
|
\end{listing}
|
|
|
|
The compilation of \mzninline{last_val} is similar to that for \mzninline{sol}.
|
|
|
|
\paragraph{Random number functions}
|
|
|
|
Calls to the random number functions have been renamed by appending \texttt{\_slv}, so that they are not simply evaluated directly.
|
|
The definition of these new functions follows the same pattern as for \mzninline{sol}, \mzninline{status}, and \mzninline{last_val}.
|
|
The \minizinc{} definition of the \mzninline{uniform_nbh} function is shown in \Cref{lst:inc-int-rnd}\footnote{Random number functions need to be marked as \mzninline{::impure} for the compiler not to apply \gls{cse} when they are called multiple times with the same arguments.}.
|
|
Note that the function accepts \variables{} as its arguments \mzninline{l} and \mzninline{u}.
|
|
As such, it can be used in combination with other functions, such as \mzninline{sol}.
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_uniform_slv.mzn}
|
|
\caption{\label{lst:inc-int-rnd} MiniZinc definition of the
|
|
\mzninline{uniform_nbh} function for floats}
|
|
\end{listing}
|
|
|
|
\paragraph{\mzninline{complete}}
|
|
|
|
The implementation of the \mzninline{complete} function merely needs to always return the same Boolean \variable{}.
|
|
However, since functions returning Boolean \variables{} are equivalent to a predicate, it is actually the \gls{reif} of \mzninline{complete} that gets used.
|
|
As such, we can define \mzninline{complete} as shown in \cref{lst:inc-complete}.
|
|
Note that the \gls{slv-mod} will actually include the \mzninline{complete_reif} \gls{native} \constraint{}.
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_complete.mzn}
|
|
\caption{\label{lst:inc-complete} MiniZinc definition of the \mzninline{complete} function.}
|
|
\end{listing}
|
|
|
|
\subsection{Solver Support for Restart Based Native Constraints}
|
|
|
|
We will now show the minimal extensions required from a \solver{} to interpret the new \gls{native} \constraints{} and, consequently, to execute \gls{meta-optimization} definitions expressed in \minizinc{}.
|
|
|
|
First, the \solver{} needs to parse and support the \gls{restart} \glspl{annotation} of \cref{lst:inc-restart-ann}.
|
|
Many \solvers{} already support all this functionality.
|
|
Second, the \solver{} needs to be able to parse the new \constraints{} \mzninline{status}, and all versions of \mzninline{sol}, \mzninline{last_val}, random number functions like \mzninline{float_uniform}, and \mzninline{complete}.
|
|
In addition, the \solver{} is extended with the following algorithms for handling the new \constraints{}.
|
|
|
|
\begin{description}
|
|
\item[\mzninline{status(s)}] Record the status of the previous \gls{restart}, and fix \mzninline{s} to the recorded status.
|
|
\item[\mzninline{sol(x, sx)} (variants)] Constrain \mzninline{sx} to be equal to the value of \mzninline{x} in the incumbent solution.
|
|
If there is no incumbent solution, it has no effect.
|
|
\item[\mzninline{last_val(x, lx)} (variants)] Constrain \mzninline{lx} to take the last value assigned to \mzninline{x} during search.
|
|
If no value was ever assigned, it has no effect.
|
|
Note that many solvers (in particular \gls{sat} and \gls{lcg} solvers) already track \mzninline{last_val} for their \variables{} for use in search.
|
|
To support \gls{rbmo} a \solver{} must at least track the last value of each of the \variables{} involved in such a \constraint{}.
|
|
This is straightforward by using a \mzninline{last_val} \gls{propagator}.
|
|
It wakes up whenever its argument is fixed, and updates the last value (a non-\gls{backtrack}able value).
|
|
\item[Random number functions] Fix their \variable{} argument to a random number in the appropriate probability distribution.
|
|
\item[\mzninline{complete_reif(c)}] Stop and mark the search as ``complete'' when its \variable{} argument is forced to be true.
|
|
\end{description}
|
|
|
|
Importantly, these \constraints{} need to be propagated in a way that their effects can be undone for the next \gls{restart}.
|
|
Typically, this means the \solver{} must not propagate these \constraints{} until it can \gls{backtrack} its changes.
|
|
|
|
Modifying a solver to support this functionality is straightforward if it already has a mechanism for posting \constraints{} during restarts.
|
|
We have implemented these extensions for both \gls{gecode} (214 new lines of code) and \gls{chuffed} (173 new lines of code).
|
|
|
|
For example, consider again the model from \cref{lst:inc-basic-complete}, which uses the following \gls{meta-optimization} algorithm.
|
|
|
|
\begin{mzn}
|
|
basic_lns(uniform_neighbourhood(x, 0.2))
|
|
\end{mzn}
|
|
|
|
\Cref{lst:inc-flat-pred} shows a part of the resulting \gls{slv-mod}.
|
|
\Lrefrange{line:6:status:start}{line:6:status:end} define a Boolean variable \mzninline{b1} that is true if-and-only-if the status is not \mzninline{START}.
|
|
The second block of code, \lrefrange{line:6:x1:start}{line:6:x1:end}, represents the \gls{decomp} of the following expression.
|
|
|
|
\begin{mzn}
|
|
(status() != START /\ uniform(0.0, 1.0) > 0.2) -> x[1] = sol(x[1])
|
|
\end{mzn}
|
|
|
|
It is the result of merging the implication from the \mzninline{basic_lns} predicate with the \mzninline{if} expression from \mzninline{uniform_neighbourhood}.
|
|
The code first introduces and constrains a variable for the random number, then adds two Boolean \variables{}: \mzninline{b2} is constrained to be true if-and-only-if the random number is greater than 0.2; while \mzninline{b3} is constrained to be the conjunction of the two.
|
|
\Lref{line:6:x1:sol} constrains \mzninline{x1} to be the value of \mzninline{x[1]} in the previous \gls{sol}.
|
|
Finally, the \gls{half-reified} \constraint{} in \lref{line:6:x1:end} implements the following expression.
|
|
|
|
\begin{mzn}
|
|
b3 -> x[1] = sol(x[1])
|
|
\end{mzn}
|
|
|
|
We have omitted the similar code generated for \mzninline{x[2]} to \mzninline{x[n]}.
|
|
Note that the \flatzinc{} \gls{slv-mod} shown here has been simplified for presentation.
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_basic_complete_transformed.mzn}
|
|
\caption{\label{lst:inc-flat-pred} \flatzinc{} that results from compiling \\
|
|
\mzninline{basic_lns(uniform_neighbourhood(x, 0.2))}.}
|
|
\end{listing}
|
|
|
|
The first time the \solver{} is invoked, it sets \mzninline{s} to one (\mzninline{START}).
|
|
\Gls{propagation} will fix \mzninline{b1} to \mzninline{false} and \mzninline{b3} to \mzninline{false}.
|
|
Therefore, the implication in \lref{line:6:x1:end} is not activated, leaving \mzninline{x[1]} unconstrained.
|
|
The \gls{neighbourhood} \constraints{} are effectively switched off.
|
|
|
|
When the \solver{} \glspl{restart}, all the special \glspl{propagator} are re-executed.
|
|
Now \mzninline{s} is not one, and \mzninline{b1} is set to \mzninline{true}.
|
|
The \mzninline{float_random} \gls{propagator} assigns \mzninline{rnd1} a new random value and, depending on whether it is greater than \mzninline{0.2}, the Boolean \variables{} \mzninline{b2} and, consequently, \mzninline{b3} are assigned.
|
|
If it is \mzninline{true}, the \constraint{} in \lref{line:6:x1:end} becomes active and assigns \mzninline{x[1]} to its value in the previous \gls{sol}.
|
|
|
|
Furthermore, it is not strictly necessary to guard the \gls{neighbourhood} \constraints{} against being applied before the first \gls{sol} is found.
|
|
The \mzninline{sol} \constraints{} will simply not propagate anything in case no \gls{sol} has been recorded yet, but we use this simple example to illustrate how these Boolean conditions are rewritten and evaluated.
|
|
|
|
\section{An Incremental Constraint Modelling Interface}%
|
|
\label{sec:inc-incremental-compilation}
|
|
|
|
A universal approach to the incremental usage of \cmls{}, and \gls{meta-optimization}, is to allow incremental changes to an \instance{} of a \cmodel{}.
|
|
To solve these changing \instances{}, they have to be rewritten repeatedly to \glspl{slv-mod}.
|
|
In this section we extend our architecture with an incremental constraint modelling interface that allows the modeller to change an \instance{}.
|
|
For changes made using this interface, the architecture can employ \gls{incremental-rewriting} to minimise the required \gls{rewriting}.
|
|
|
|
As such, the \microzinc{} \interpreter{} is extended to be able to \textbf{add} and \textbf{remove} \nanozinc{} \constraints{} to/from an existing \nanozinc{} program.
|
|
Adding new \constraints{} is straightforward.
|
|
\nanozinc{} is already processed one \constraint{} at a time, in any order.
|
|
The new \constraints{} can be added to the program, and the \gls{rewriting} can proceed as normal.
|
|
|
|
Removing a \constraint{}, however, is not so simple.
|
|
When we remove a \constraint{}, all effects the \constraint{} had on the \nanozinc{} program have to be undone.
|
|
This includes results of \gls{propagation}, \gls{cse} and other simplifications.
|
|
|
|
\begin{example}\label{ex:inc-incremental}
|
|
Consider the following \minizinc\ fragment:
|
|
|
|
\begin{mzn}
|
|
constraint x < 10;
|
|
constraint y < x;
|
|
\end{mzn}
|
|
|
|
After evaluating the first \constraint{}, the \domain{} of \mzninline{x} is changed to be less than ten.
|
|
Evaluating the second \constraint{} causes the \domain{} of \mzninline{y} to be less than nine.
|
|
If we now, however, try to remove the first \constraint{}, then it is not just the direct inference on the \domain{} of \mzninline{x} that has to be undone, but also any further effects of those changes --- in this case, the changes to the \domain{} of \mzninline{y}.
|
|
|
|
\end{example}
|
|
|
|
The addition of \constraints{} that can later be removed is similar to making \glspl{search-decision} in a \gls{cp} \solver{} (see \cref{subsec:back-cp}).
|
|
As such, we can support the removal of \constraints{} in reverse chronological order by \gls{backtrack}ing.
|
|
The \microzinc{} \interpreter{} maintains a \gls{trail} of the changes to be undone when a \constraint{} is removed.
|
|
The \gls{trail} records all changes to the \nanozinc{} program:
|
|
|
|
\begin{itemize}
|
|
\item the addition or removal of new \variables{} or \constraints{},
|
|
\item changes made to the \domains{} of \variables{},
|
|
\item additions to the \gls{cse} table, and
|
|
\item substitutions made due to equality \gls{propagation}.
|
|
\end{itemize}
|
|
|
|
These changes can be caused by the \gls{rewriting} of a \constraint{}, \gls{propagation}, or \gls{cse}.
|
|
When a \constraint{} is removed, its corresponding changes are undone by reversing all actions recorded on the \gls{trail} up to the point where the \constraint{} was added.
|
|
|
|
In order to limit the amount of \gls{trail}ing required, the programmer must create explicit ``choice points'' to which the system state can be reset.
|
|
In particular, this means that if no choice point was created before the initial \instance{} was rewritten, then this \gls{rewriting} can be performed without any \gls{trail}ing.
|
|
|
|
\begin{example}\label{ex:inc-trail}
|
|
|
|
Let us once again consider the following \minizinc{} fragment from \cref{ex:rew-absreif}.
|
|
|
|
\begin{mzn}
|
|
constraint abs(x) > y \/ c;
|
|
constraint c;
|
|
\end{mzn}
|
|
|
|
After rewriting it results in the following \nanozinc{} program.
|
|
|
|
\begin{nzn}
|
|
var true: c;
|
|
var -10..10: x;
|
|
var -10..10: y;
|
|
\end{nzn}
|
|
|
|
Assume that we added a choice point before posting the \constraint{} \mzninline{c}.
|
|
Then the \gls{trail} stores the inverse of all modifications that were made to the \nanozinc{} as a result of \mzninline{c}.
|
|
The following code fragment illustrates the elements in this \gls{trail}.
|
|
The reversal actions are annotated on the different elements.
|
|
The \mzninline{attach} and \mzninline{detach} actions signal the \interpreter{} to attach/detach a \constraint{} to the argument \variable{}, or the global \cmodel{} in case the argument is \mzninline{true}.
|
|
The \mzninline{add} action tells the \interpreter{} to add a \variable{} to the \nanozinc{} program.
|
|
Finally, the \mzninline{set_domain} action makes the \interpreter{} restore the \domain{} of the declaration to which it is attached.
|
|
|
|
\begin{nzn}
|
|
% Posted c
|
|
constraint c ::detach(true);
|
|
% Propagated c = true
|
|
var bool: c ::set_domain;
|
|
constraint c ::attach(true);
|
|
% Simplified bool_or(b1, true) = true
|
|
constraint bool_or(b1, c) ::attach(true);
|
|
% b became unused...
|
|
constraint int_gt_reif(z, y, b) ::attach(b);
|
|
var bool: b ::add;
|
|
% then z became unused
|
|
constraint int_abs(x, z) ::attach(z);
|
|
var int: z ::add;
|
|
\end{nzn}
|
|
|
|
To reconstruct the \nanozinc{} program at the choice point, we simply apply the changes recorded in the \gls{trail}, in reverse order.
|
|
\end{example}
|
|
|
|
\subsection{Incremental Solving}
|
|
|
|
Ideally, the incremental changes made by the \interpreter{} would also be applied incrementally to the \solver{}.
|
|
This requires the \solver{} to support both the dynamic addition and removal of \variables{} and \constraints{}.
|
|
While some solvers can support this functionality, most \solvers{} have limitations.
|
|
We therefore define the following three interfaces through which we can apply the changes to the \solvers{}.
|
|
|
|
\begin{itemize}
|
|
|
|
\item Using a non-incremental interface, the \solver{} is reinitialised with the updated \nanozinc\ program every time.
|
|
In this case, we still get a performance benefit from the improved \gls{rewriting} time, but not from incremental solving.
|
|
|
|
\item Using a \textit{warm-starting} interface, the \solver{} is reinitialised with the updated \gls{slv-mod} as above, but it is also given a previous \gls{sol} to initialise some internal data structures.
|
|
In particular for mathematical programming \solvers{}, this can result in dramatic performance gains compared to ``cold-starting'' the \solver{} every time.
|
|
|
|
\item Using a fully incremental interface, the \solver{} is instructed to apply the changes made by the \interpreter{}.
|
|
In this case, the \gls{trail} data structure is used to compute the set of \nanozinc{} changes since the last choice point.
|
|
|
|
\end{itemize}
|
|
|
|
|
|
\section{Experiments}\label{sec:inc-experiments}
|
|
|
|
In this section we present two experiments to test the efficiency and potency of the incremental methods introduced in this chapter.
|
|
|
|
In our first experiment, we consider the effectiveness of \gls{meta-optimization} during solving.
|
|
In particular, we investigate a round-robin \gls{lns} implemented using \gls{rbmo}.
|
|
On three different \minizinc{} models we compare this approach with solving the \instances{} directly using two different \solvers{}.
|
|
For one of the \solvers{}, we also compare with an oracle approach that can directly apply the exact same \gls{neighbourhood} as our \gls{rbmo}, without the need for computation.
|
|
As such, we show that the use of \gls{rbmo} introduces only an insignificant computational overhead.
|
|
|
|
Our second experiment compares the performance of using \minizinc{} incrementally.
|
|
We compare our two methods, \gls{rbmo} and the incremental constraint modelling interface, against the baseline of continuously \gls{rewriting} and reinitialising the \solver{}.
|
|
For this comparison, we compare the time that is required to (repeatedly) rewrite an \instance{} and the time required by the \solver{}.
|
|
The first model contains a lexicographic objective.
|
|
The second model is shared between the two experiments and uses a round-robin \gls{lns} approach.
|
|
We compare the application of a fixed number of \glspl{neighbourhood}.
|
|
|
|
A description of the used computational environment, \minizinc{} instances, and versioned software has been included in \cref{ch:benchmarks}.
|
|
|
|
\subsection{Meta-Optimisation Solving}
|
|
|
|
We will now show that a solver that evaluates the rewritten \gls{meta-optimization} specifications can (a) be effective and (b) incur only a small overhead compared to a dedicated implementation of the \glspl{neighbourhood}.
|
|
|
|
To measure the overhead, we implemented \gls{rbmo} in \gls{gecode} \autocite{gecode-2021-gecode}.
|
|
The resulting solver has been instrumented to also output the \domains{} of all \variables{} after propagating the new \gls{native} \constraints{}.
|
|
We implemented another extension to \gls{gecode} that simply reads the stream of \variable{} \domains{} for each \gls{restart}, essentially ``replaying'' the \gls{meta-optimization} without incurring any overhead for evaluating the \glspl{neighbourhood} or handling the additional \variables{} and \constraints{}.
|
|
Note that this is a conservative estimate of the overhead: this extension has to perform less work than any real \gls{meta-optimization} implementation.
|
|
|
|
In addition, we also present benchmark results for the standard release of \gls{gecode}, the development version of \gls{chuffed}, and \gls{chuffed} performing \gls{rbmo}.
|
|
These experiments illustrate that the \gls{meta-optimization} implementations indeed perform well compared to the standard \solvers{}.
|
|
|
|
We ran experiments for three models from the MiniZinc Challenge \autocite{stuckey-2010-challenge, stuckey-2014-challenge} (\texttt{gbac}, \texttt{steelmillslab}, and \texttt{rcpsp-wet}).
|
|
The \instances{} are rewritten using the \minizinc{} 2.5.5 \compiler{}, adjusted to rewrite \gls{rbmo} specifications.
|
|
For each model we use a round-robin \gls{lns} approach, shown in \cref{lst:inc-round-robin}, of two \glspl{neighbourhood}: a \gls{neighbourhood} that destroys 20\% of the main \variables{} (\cref{lst:inc-lns-minisearch-pred}) and a structured \gls{neighbourhood} for the model (described below).
|
|
For each solving method we show the cumulative integral of the objective values for all \instances{} of the model.
|
|
The underlying search strategy used is the fixed search strategy defined in the model.
|
|
The restart strategy uses the following \glspl{annotation}.
|
|
|
|
\begin{mzn}
|
|
::restart_constant(250) ::restart_on_solution
|
|
\end{mzn}
|
|
|
|
The benchmarks are repeated with 10 different random seeds and the average is shown.
|
|
The overall time out for each run is 120 seconds.
|
|
|
|
\subsubsection{GBAC}%
|
|
\label{ssubsec:inc-exp-gbac1}
|
|
|
|
The \gls{gbac} problem comprises courses having a specified number of credits and lasting a certain number of periods, load limits of courses for each period, prerequisites for courses, and preferences of teaching periods for professors.
|
|
A detailed description of the problem is given in \autocite{chiarandini-2012-gbac}.
|
|
The main decisions are to assign courses to periods, which is done via the \variables{} \mzninline{period_of} in the \minizinc{} model.
|
|
\Cref{lst:inc-free-period} shows the chosen \gls{neighbourhood}, which randomly picks one period and frees all courses that are assigned to it.
|
|
|
|
\begin{listing}[b]
|
|
\mznfile{assets/listing/inc_gbac_neighbourhood.mzn}
|
|
\caption{\label{lst:inc-free-period}\gls{gbac}: \gls{neighbourhood} freeing all courses in a period.}
|
|
\end{listing}
|
|
|
|
\begin{figure}
|
|
\begin{subfigure}[b]{0.49\linewidth}
|
|
\includegraphics[width=\columnwidth]{assets/img/inc_obj_gecode_gbac.pdf}
|
|
\caption{\label{subfig:inc-obj-gecode-gbac}\gls{gecode}}
|
|
\end{subfigure}
|
|
\begin{subfigure}[b]{0.49\linewidth}
|
|
\includegraphics[width=\columnwidth]{assets/img/inc_obj_chuffed_gbac.pdf}
|
|
\caption{\label{subfig:inc-obj-chuffed-gbac}\gls{chuffed}}
|
|
\end{subfigure}
|
|
\caption{\label{fig:inc-obj-gbac}\gls{gbac}: integral of cumulative objective value of solving 5 \instances{}.}
|
|
\end{figure}
|
|
|
|
The results for \gls{gbac} in \cref{fig:inc-obj-gbac} show that the overhead introduced by \gls{gecode} using \gls{rbmo}, compared to replaying the \glspl{neighbourhood}, is quite low.
|
|
The lines in the graph do not show any significant differences or delays.
|
|
Both results are much better than those of the baseline \gls{gecode} \solver{}.
|
|
Since learning is not very effective for \gls{gbac}, the performance of \gls{chuffed} is similar to \gls{gecode}.
|
|
The use of \gls{lns} again significantly improves over standard \gls{chuffed}.
|
|
|
|
\subsubsection{Steel Mill Slab}
|
|
|
|
The steel mill slab design problem consists of cutting slabs into smaller ones, so that all orders are fulfilled while minimising the wastage.
|
|
The steel mill only produces slabs of certain sizes, and orders have both a size and a colour.
|
|
We have to assign orders to slabs, with at most two different colours on each slab.
|
|
The model uses the \variables{} \mzninline{assign} for deciding which order is assigned to which slab.
|
|
\Cref{lst:inc-free-bin} shows a structured \gls{neighbourhood} that randomly selects a slab and frees the orders assigned to it in the incumbent \gls{sol}.
|
|
These orders can then be freely reassigned to any other slab.
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_steelmillslab_neighbourhood.mzn}
|
|
\caption{\label{lst:inc-free-bin}Steel mill slab: \gls{neighbourhood} that frees
|
|
all orders assigned to a selected slab.}
|
|
\end{listing}
|
|
|
|
|
|
\begin{figure}
|
|
\begin{subfigure}[b]{0.49\linewidth}
|
|
\includegraphics[width=\columnwidth]{assets/img/inc_obj_gecode_steelmillslab.pdf}
|
|
\caption{\label{subfig:inc-obj-gecode-steelmillslab}\gls{gecode}}
|
|
\end{subfigure}
|
|
\begin{subfigure}[b]{0.49\linewidth}
|
|
\includegraphics[width=\columnwidth]{assets/img/inc_obj_chuffed_steelmillslab.pdf}
|
|
\caption{\label{subfig:inc-obj-chuffed-steelmillslab}\gls{chuffed}}
|
|
\end{subfigure}
|
|
\caption{\label{fig:inc-obj-steelmillslab}Steel mill slab: integral of cumulative objective value of solving 5 \instances{}.}
|
|
\end{figure}
|
|
|
|
\Cref{subfig:inc-obj-gecode-steelmillslab} again shows only minimal overhead for \gls{rbmo} compared to replaying the \glspl{neighbourhood}.
|
|
For this problem a solution with zero wastage is always optimal.
|
|
As such the \gls{lns} approaches are sometimes able to prove a \gls{sol} is optimal and might finish before the time out.
|
|
This is the case for \gls{chuffed}, where almost all \instances{} are solved using the \gls{rbmo} method.
|
|
As expected, the \gls{lns} approaches find better solutions quicker for \gls{gecode}.
|
|
However, we do see that, given enough time, baseline \gls{gecode} will eventually find better \glspl{sol}.
|
|
|
|
% RCPSP/wet
|
|
\subsubsection{RCPSP/wet}
|
|
|
|
The \gls{rcpsp} with Weighted Earliness and Tardiness cost is a classic scheduling problem in which tasks need to be scheduled subject to precedence \constraints{} and cumulative resource restrictions.
|
|
The objective is to find an optimal schedule that minimises the weighted cost of the earliness and tardiness for tasks that are not completed by their proposed deadline.
|
|
The \variables{} in \gls{array} \mzninline{s} represent the start times of each task in the model.
|
|
\Cref{lst:inc-free-timeslot} shows our structured \gls{neighbourhood} for this model.
|
|
It randomly selects a time interval of one-tenth the length of the planning horizon and frees all tasks starting in that time interval, which allows a reshuffling of these tasks.
|
|
|
|
\begin{listing}
|
|
\mznfile{assets/listing/inc_rcpsp_neighbourhood.mzn}
|
|
\caption{\label{lst:inc-free-timeslot}\Gls{rcpsp}/wet: \gls{neighbourhood} freeing
|
|
all tasks starting in the drawn interval.}
|
|
\end{listing}
|
|
|
|
\begin{figure}
|
|
\begin{subfigure}[b]{0.49\linewidth}
|
|
\includegraphics[width=\columnwidth]{assets/img/inc_obj_gecode_rcpspw.pdf}
|
|
\caption{\label{subfig:inc-obj-gecode-rcpspw}\gls{gecode}}
|
|
\end{subfigure}
|
|
\begin{subfigure}[b]{0.49\linewidth}
|
|
\includegraphics[width=\columnwidth]{assets/img/inc_obj_chuffed_rcpspw.pdf}
|
|
\caption{\label{subfig:inc-obj-chuffed-rcpspw}\gls{chuffed}}
|
|
\end{subfigure}
|
|
\caption{\label{fig:inc-obj-rcpspw}\gls{rcpsp}/wet: integral of cumulative objective value of solving 5 \instances{}.}
|
|
\end{figure}
|
|
|
|
\Cref{fig:inc-obj-rcpspw} shows once more that \gls{rbmo} has little overhead over the replaying approach.
|
|
These approaches perform significantly better than the baseline \gls{gecode} \solver{}.
|
|
The baseline \gls{chuffed} \solver{} performs well and keeps improving its \gls{sol} for longer than the baseline \gls{gecode} \solver{}.
|
|
However, \gls{lns} makes \gls{chuffed} much more robust.
|
|
|
|
\subsection{Incremental Rewriting and Solving}
|
|
|
|
To demonstrate the advantage that dedicated incremental methods for \minizinc{} offer, we present a runtime evaluation of two \gls{meta-optimization} algorithms implemented using the two methods presented in this chapter.
|
|
We consider the use of lexicographic search and round-robin \gls{lns}, each evaluated on a different \minizinc{} model.
|
|
We compare our methods against a naive system that repeatedly programmatically changes an \instance{}, rewrites the full \instances{}, and starts a new \solver{} instance.
|
|
The solving in our tests is performed by the \gls{gecode} \gls{solver}.
|
|
When using our incremental interface, it is connected using the fully incremental \gls{api}.
|
|
The naive system and incremental interface use our architecture prototype to rewrite the \instances{}.
|
|
The \gls{rbmo} \instances{} are rewritten using the \minizinc{} 2.5.5 \compiler{}, adjusted to rewrite \gls{rbmo} specifications.
|
|
We compare the time that is required to (repeatedly) rewrite an \instance{} and the time taken by the \solver{}.
|
|
Each test is performed 10 times and the average time is shown.
|
|
|
|
\begin{figure}
|
|
\begin{subfigure}[b]{0.49\linewidth}
|
|
\includegraphics[width=\columnwidth]{assets/img/inc_cmp_lex.pdf}
|
|
\caption{\label{subfig:inc-cmp-lex}Radiation}
|
|
\end{subfigure}
|
|
\begin{subfigure}[b]{0.49\linewidth}
|
|
\includegraphics[width=\columnwidth]{assets/img/inc_cmp_lns.pdf}
|
|
\caption{\label{subfig:inc-cmp-lns}GBAC}
|
|
\end{subfigure}
|
|
\caption{\label{fig:inc-cmp} A comparison of the two new incremental techniques and a recompilation strategy.}
|
|
\end{figure}
|
|
|
|
\subsubsection{Radiation}
|
|
|
|
Our first model is based on a problem of planning cancer radiation therapy treatment using multi-leaf collimators \autocite{baatar-2011-radiation}.
|
|
Two characteristics mark the quality of a solution: the amount of time the patient is exposed to radiation, and the number of ``shots'' or different angles the treatment requires.
|
|
However, the first characteristic is considered more important than the second.
|
|
The problem therefore has a lexicographical objective: a solution is better if it requires a strictly shorter exposure time, or the same exposure time but a lower number of ``shots''.
|
|
\minizinc{} does not support lexicographical objectives directly, but a lexicographic search can be modelled in \minizinc{}, as shown in \cref{lst:inc-lex-min}.
|
|
|
|
The results are shown in \cref{subfig:inc-cmp-lex}.
|
|
They show that both incremental methods have a clear advantage over the naive baseline approach.
|
|
Although some additional time is spent \gls{rewriting} the \gls{rbmo} models compared to the incremental interface, it is minimal.
|
|
The \gls{rewriting} of the \gls{rbmo} \instances{} would likely take less time when using the new prototype architecture.
|
|
Between all the methods, solving time is very similar.
|
|
\gls{rbmo} seems to have a slight advantage.
|
|
No benefit can be noticed from the use of the incremental solver \gls{api}.
|
|
|
|
\subsubsection{GBAC}
|
|
|
|
We now revisit the model and method from \cref{ssubsec:inc-exp-gbac1} and compare the efficiency of using round-robin \gls{lns}.
|
|
Instead of setting a time limit, we limit the number of \glspl{restart} that the \solver{} makes.
|
|
As such, we limit the number of \glspl{neighbourhood} that are computed and applied to the \instance{}.
|
|
It should be noted that the \gls{rbmo} method is not guaranteed to apply the exact same \glspl{neighbourhood}, due to the difference in random number generator.
|
|
|
|
\Cref{subfig:inc-cmp-lns} shows the results of applying 1000 \glspl{neighbourhood}.
|
|
In this case there is an even more pronounced difference between the incremental methods and the naive method.
|
|
It is surprising to see that applying 1000 \glspl{neighbourhood} using \gls{incremental-rewriting} still performs very similarly to \gls{rewriting} the \gls{rbmo} method.
|
|
Again, solving times are similar between all methods.
|
|
Once again we do not see any real advantage of the use of the incremental \solver{} \gls{api}.
|
|
The advantage in solve time using \gls{rbmo} is more pronounced here, but it is hard to draw conclusions since the \glspl{neighbourhood} of this method are not exactly the same.
|
|
|
|
\subsection{Summary}
|
|
|
|
The results of our experiments show that there is a clear benefit from the use of incremental methods in \cmls{}.
|
|
|
|
The \gls{meta-optimization} algorithms that can be applied using \gls{rbmo} show a significant improvement over the normal search of the \solvers{}.
|
|
We have shown that this method is very efficient and does not underperform even when compared to an unrealistic version of the method that does not require any computation.
|
|
|
|
The incremental interface offers a great alternative when \solvers{} are not extended for \gls{rbmo}.
|
|
\Gls{incremental-rewriting} saves a significant amount of time, compared to repeatedly \gls{rewriting} the full \instance{}.
|
|
In our experiments, after the initial \gls{rewriting} of the \instance{} the additional \gls{rewriting} to apply a \gls{neighbourhood} often becomes insignificant compared to the time required by the \solver{}.
|
|
However, the incremental \solver{} \gls{api}, used in this approach, did not show any benefit.
|
|
Although we are still convinced of its usefulness, it was not visible in the results of our experiments.
|
|
It should be investigated whether these use cases (or our implementation) limited its effect in some way, or whether its effect would be more pronounced for other types of \solvers{}, where more information can be saved.
|