%************************************************
\chapter{Incremental Constraint Modelling}\label{ch:incremental}
%************************************************
\glsreset{rbmo}
\glsreset{incremental-rewriting}
\input{chapters/5_incremental_preamble}
\section{An Introduction to Meta-Optimization}
There are many different kinds of \gls{meta-optimization} algorithms.
They share neither a goal nor an exact method.
However, they can often be described through the use of incremental \constraint{} modelling.
Each algorithm solves an \instance{}, adds new \constraints{}, solves the resulting \instance{} again, and may later remove the added \constraints{} again.
This process is repeated until the goal of the \gls{meta-optimization} algorithm is reached.
The following algorithms are examples of \gls{meta-optimization}.
\begin{description}
\item[Multi-objective search]\autocite{jones-2002-multi-objective}
Optimizing multiple \glspl{objective} is often not supported directly in solvers.
Instead, it can be achieved using a \gls{meta-optimization} approach: find a \gls{sol} to a (single-objective) problem, then add more \constraints{} to the original problem and repeat.
\item[\gls{lns}] \autocite{shaw-1998-local-search}
This \gls{meta-optimization} algorithm was first introduced as a heuristic for the vehicle routing problem, but has proven to be a very successful method to quickly improve \gls{sol} quality for many types of problems.
After finding a (sub-optimal) \gls{sol} to a problem, \constraints{} are added to restrict the search in the \gls{neighbourhood} of that \gls{sol}.
When a new \gls{sol} is found, the \constraints{} are removed, and \constraints{} for a new \gls{neighbourhood} are added.
\item[Online Optimization]\autocite{jaillet-2021-online}
These techniques can be employed when the problem rapidly changes.
An \instance{} is continuously updated with new data, such as newly available jobs to be scheduled or customer requests to be processed.
\item[Diverse \Gls{sol} Search]\autocite{hebrard-2005-diverse}
Here we aim to provide a set of \glspl{sol} that are sufficiently different from each other in order to give human decision makers an overview of the possible \glspl{sol}.
Diversity can be achieved by repeatedly solving a problem instance with different \glspl{objective}.
\item[Interactive Optimization]\autocite{belin-2014-interactive}
In some scenarios it can be useful to allow a user to directly provide feedback on \glspl{sol} found by the \solver{}.
The feedback, in the form of \constraints{}, is added back into the problem, and a new \gls{sol} is generated.
Users may also take back some earlier feedback and explore different aspects of the problem to arrive at the best \gls{sol} that suits their needs.
\end{description}
The usage of \gls{meta-optimization} algorithms is not new to \cmls{}.
Languages such as \gls{opl} and \gls{comet} \autocite{michel-2005-comet}, the latter later succeeded by the \gls{obj-cp} library \autocite{hentenryck-2013-objcp}, provide convenient syntax for programmable search for their target \solvers{}.
Crucially, \gls{comet} was the first system to introduce the notion of applying new \constraints{} on \gls{restart}.
A \gls{restart} happens when a solver abandons its current search efforts, returns to its initial \gls{search-space}, and begins a new exploration.
In many (\gls{cp}) \solvers{} it is possible to add new \constraints{} upon restarting that are only valid for the particular \gls{restart}.
Many \gls{meta-optimization} algorithms can be implemented using only this technique.
Even though \minizinc{} does not have explicit support for programmable search, \gls{meta-optimization} algorithms have been successfully applied to the language.
\begin{itemize}
\item \textcite{schiendorfer-2018-minibrass} use \gls{meta-optimization} algorithms to solve \minizinc{} \instances{} that have been extended to contain soft-\constraints{}.
\item \textcite{ingmar-2020-diverse} implement \gls{meta-optimization} algorithms to explore the diversity of \glspl{sol} for \minizinc{} models.
\item \textcite{ek-2020-online} extend \minizinc{} to model and solve online optimization problems using \gls{meta-optimization}.
\end{itemize}
In its most basic form, a simple scripting language is sufficient to implement these algorithms, by repeatedly \gls{rewriting} and solving the adjusted \instances{}.
For some applications the runtime required to repeatedly rewrite the \instance{} can be prohibitive.
Approaches such as \gls{search-comb} \autocite{schrijvers-2013-combinators} and \minisearch{} \autocite{rendl-2015-minisearch} have been proposed to reduce this overhead and ease the use of \gls{meta-optimization} in \minizinc{}.
However, these approaches were found hard to implement and maintain, and were never adopted into \minizinc{}.
\section{Modelling of Restart Based Meta-Optimization}\label{sec:inc-modelling}
This section introduces a \minizinc{} extension that enables modellers to define \gls{meta-optimization} algorithms in \minizinc{}.
This extension is based on the constructs introduced in \minisearch{}, as summarized below.
\subsection{Meta-Optimization in MiniSearch}\label{sec:inc-minisearch}
\minisearch{} introduced a \minizinc{} extension that enables modellers to express \gls{meta-optimization} inside a \minizinc{} model.
A \gls{meta-optimization} algorithm in \minisearch{} typically solves a given \minizinc{} model, performs some calculations on the \gls{sol}, adds new \constraints{} and then solves again.
Most \gls{meta-optimization} definitions in \minisearch{} consist of two parts.
The first part is a declarative definition of any restriction to the \gls{search-space} that the \gls{meta-optimization} algorithm can apply, called a \gls{neighbourhood}.
In \minisearch{} these definitions can make use of the following function.
\begin{mzn}
function int: sol(var int: x)
\end{mzn}
It returns the value that variable \mzninline{x} was assigned to in the previous \gls{sol} (similar functions are defined for Boolean, float, and set variables).
This allows the \gls{neighbourhood} to be defined in terms of the previous \gls{sol}.
In addition, a \gls{neighbourhood} definition will typically make use of the random number generators available in \minizinc{}.
\Cref{lst:inc-lns-minisearch-pred} shows a simple random \gls{neighbourhood}.
For each decision variable \mzninline{x[i]}, it draws a random number from a uniform distribution.
If it exceeds the threshold \mzninline{destr_rate}, then it posts \constraints{} forcing \mzninline{x[i]} to take the same value as in the previous \gls{sol}.
For example, \mzninline{uniform_neighbourhood(x, 0.2)} would result in each variable in the \gls{array} \mzninline{x} having a 20\% chance of being unconstrained, and an 80\% chance of being assigned to the value it had in the previous \gls{sol}.
\begin{listing}
\mznfile{assets/listing/inc_lns_minisearch_pred.mzn}
\caption{\label{lst:inc-lns-minisearch-pred} A simple random \gls{lns} predicate implemented in \minisearch{}.}
\end{listing}
\begin{listing}
\mznfile{assets/listing/inc_lns_minisearch.mzn}
\caption{\label{lst:inc-lns-minisearch} A simple \gls{lns} \gls{meta-optimization} implemented in \minisearch{}.}
\end{listing}
The second part of a \minisearch{} \gls{meta-optimization} is the \gls{meta-optimization} algorithm itself.
\Cref{lst:inc-lns-minisearch} shows a \minisearch{} implementation of a basic \gls{lns}, called \mzninline{lns}.
It performs a set number of iterations, each invoking the \gls{neighbourhood} predicate \mzninline{uniform_neighbourhood} in a fresh scope.
This means that the \constraints{} only affect the current loop iteration.
It then searches for a \gls{sol} (\mzninline{minimize_bab}) with a given timeout.
If the search does return a new \gls{sol}, then it commits to that \gls{sol}, and it becomes available to the \mzninline{sol} function in subsequent iterations.
The \mzninline{lns} function also posts the \constraint{} \mzninline{obj < sol(obj)}, ensuring the objective value in the next iteration is strictly better than that of the current \gls{sol}.
Although \minisearch{} enables the modeller to express declarative \glspl{neighbourhood}, the definition of the \gls{meta-optimization} algorithm is rather unintuitive and difficult to debug, leading to unwieldy code for defining even simple algorithms.
Furthermore, the \minisearch{} implementation requires either a close integration of the backend \solver{} into the \minisearch{} system, or it drives the solver through the regular text file based \flatzinc{} interface.
This leads to a significant communication overhead.
To address these two issues, we propose to keep modelling \glspl{neighbourhood} as predicates, but define \gls{meta-optimization} algorithms from an imperative perspective.
We define a few additional \minizinc{} \glspl{annotation} and functions that
\begin{itemize}
\item allow us to express important aspects of the meta-optimization algorithm in a more convenient way, and
\item enable a simple \gls{rewriting} scheme that requires no additional communication with the target \solver{}, and only small, simple extensions of it.
\end{itemize}
\subsection{Restart Annotation}
Instead of the complex \minisearch{} definitions, we propose to add support for \gls{meta-optimization} algorithms that are purely based on the notion of \glspl{restart}.
Recall that a \gls{restart} happens when a \solver{} abandons its current search efforts, returns to its initial \gls{search-space}, and begins a new exploration.
Many \gls{cp} \solvers{} already provide support for controlling their restarting behaviour.
They can periodically restart after a certain number of nodes, or restart for every \gls{sol}.
Typically, \solvers{} also support posting additional \constraints{} upon restarting that are only valid for the particular \gls{restart}.
These \constraints{} are ``retracted'' for the next \gls{restart}.
In its simplest form, we can therefore implement \gls{lns} by merely specifying a \gls{neighbourhood} predicate.
We then annotate the \mzninline{solve} item with the following \gls{annotation} to indicate the predicate should be invoked upon each restart.
\begin{mzn}
solve ::on_restart("my_neighbourhood") minimize cost;
\end{mzn}
Note that \minizinc{} currently does not support passing functions or predicates as arguments.
Calling the predicate, as in \mzninline{::on_restart(my_neighbourhood())}, would not have the correct semantics.
The predicate needs to be called for each restart.
As a workaround, we currently pass the name of the predicate to be called for each restart as a string (see the definition of the new \mzninline{on_restart} \gls{annotation} in \cref{lst:inc-restart-ann}).
The second component of our \gls{lns} definition is the ``restarting strategy''.
It defines a pattern for the size of the \gls{search-space} used for each \gls{neighbourhood} (i.e., \gls{restart}), and also when to stop the overall search.
We propose adding new search \glspl{annotation} to \minizinc{} to control this behaviour.
The proposed \glspl{annotation} are shown in \cref{lst:inc-restart-ann}.
The \mzninline{restart_on_solution} \gls{annotation} tells the solver to restart immediately for each \gls{sol}, rather than looking for the best one in each restart.
The \mzninline{restart_without_objective} tells it not to add \gls{bnb} \constraints{} on the \gls{objective}.
The other \mzninline{restart_X} \glspl{annotation} define different strategies for restarting the search when a \gls{sol} cannot be found (these strategies have already been part of the \minizinc{} standard library since version 2.2.0).
The \mzninline{timeout} \gls{annotation} gives an overall time limit for the search.
Finally, the \mzninline{restart_limit} stops the search after a certain number of \glspl{restart}.
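Combined, a restarting strategy might be specified as follows (a sketch; the predicate name and argument values are purely illustrative).
\begin{mzn}
solve ::on_restart("my_neighbourhood")
      ::restart_constant(250)
      ::restart_on_solution
      ::restart_limit(500)
      minimize cost;
\end{mzn}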
\begin{listing}
\mznfile{assets/listing/inc_restart_ann.mzn}
\caption{\label{lst:inc-restart-ann} New \glspl{annotation} to control the restarting behaviour.}
\end{listing}
\subsection{Advanced Meta-Optimization Algorithms}
Although \gls{restart} \glspl{annotation} by themselves allow us to run a basic \gls{lns}, more advanced \gls{meta-optimization} algorithms will require more than reapplying the same \gls{neighbourhood} time after time.
It is, for example, often beneficial to use several \gls{neighbourhood} definitions within the same \instance{}.
Different \glspl{neighbourhood} may be able to improve different aspects of a \gls{sol} at different phases of the search.
Adaptive \gls{lns} \autocite{ropke-2006-adaptive, pisinger-2007-heuristic} is a variant of \gls{lns} that keeps track of the \glspl{neighbourhood} that led to improvements and favours them for future iterations.
This has proven to be a highly successful approach to \gls{lns}.
However, even a simpler scheme that applies several \glspl{neighbourhood} in a round-robin fashion can be effective.
In \minisearch{}, these adaptive or round-robin approaches can be implemented using ``state variables'', which support destructive update (overwriting the value they store).
In this way, the \minisearch{} strategy can store values to be used in later iterations.
We use the ``\solver{} state'' instead.
All values are stored in normal \variables{}.
We then access them using two simple functions that reveal the solver state of the previous \gls{restart}.
This approach is sufficient for expressing many \gls{meta-optimization} algorithms, and its implementation is much simpler.
\paragraph{State access and initialization}
The state access functions are defined in \cref{lst:inc-state-access}.
Function \mzninline{status} returns the status of the previous restart, namely:
\begin{description}
\item[\mzninline{START}] the \solver{} has not been restarted yet;
\item[\mzninline{UNSAT}] the \gls{search-space} of the last \gls{restart} was \gls{unsat};
\item[\mzninline{SAT}] the last \gls{restart} found a \gls{sol};
\item[\mzninline{OPT}] the last \gls{restart} found and proved an \gls{opt-sol} in its \gls{search-space};
\item[\mzninline{UNKNOWN}] the last \gls{restart} did not find a \gls{sol}, but did not exhaust its \gls{search-space}.
\end{description}
\begin{listing}
\mznfile{assets/listing/inc_state_access.mzn}
\caption{\label{lst:inc-state-access} Functions for accessing previous \solver{} states.}
\end{listing}
The function \mzninline{last_val} allows modellers to access the last value assigned to a \variable{}.
Like \mzninline{sol}, it has versions for all basic \variable{} types.
If \mzninline{status()=SAT \/ status()=OPT}, then \mzninline{last_val(x)} and \mzninline{sol(x)} return the same value.
However, if \mzninline{status()=UNSAT \/ status()=UNKNOWN}, but \mzninline{x} was still (at some point) assigned during search, then that value will be returned by \mzninline{last_val(x)}, whereas \mzninline{sol(x)} will still return the value from the last \gls{sol}.
The value is undefined when \mzninline{status()=START} or when \mzninline{x} was not present during the last \gls{restart}.
If a \variable{} \mzninline{x} is assigned immediately after each restart (before any \glspl{search-decision}), then it is guaranteed that \mzninline{last_val(x)} will return this value after the next restart.
This mechanism allows us to use \variables{} in a similar fashion to \minisearch{}'s state variables.
In order to be able to initialize the \variables{} used for state access, we interpret \mzninline{on_restart} also before the initial search using the same semantics.
As such, the predicate is also called before the first ``real'' \gls{restart}, but any \constraint{} posted by the predicate will be retracted for the next \gls{restart}.
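For example, the following sketch maintains a count of the number of \glspl{restart} using these functions (the bounds of the counter are an arbitrary assumption).
\begin{mzn}
var 0..10000: restart_count;
predicate count_restarts() =
  if status() = START then
    restart_count = 0  % initialization before the first restart
  else
    restart_count = last_val(restart_count) + 1
  endif;
\end{mzn}
Since \mzninline{restart_count} is assigned immediately after each \gls{restart}, \mzninline{last_val} is guaranteed to reveal its value from the previous \gls{restart}.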
Of particular interest for the behaviour of \mzninline{last_val} and \mzninline{sol} is the manner in which we identify a \variable{}.
We have defined the behaviour of the \mzninline{on_restart} \gls{annotation} as repeatedly calling the function argument on every restart.
This means that if the call introduces any \glspl{avar}, then it will create unique \glspl{avar} for every restart.
It could thus be said that \mzninline{last_val} and \mzninline{sol} cannot be used on these \glspl{avar}.
However, it is often natural to introduce state variables used within our restart strategy, as we shall soon see.
Therefore, we identify each \variable{} using their variable path \autocite{leo-2015-multipass}.
The path of a \variable{} is designed to uniquely identify it, even when an \instance{} is rewritten multiple times.
The \variable{} path is mostly dependent on the \parameter{} arguments to the calls and the value of iterators for which the \variable{} is introduced.
As such, this mechanism correctly identifies, between \glspl{restart}, the \variables{} introduced by each call to the nullary function in the \mzninline{on_restart} \gls{annotation}.
However, it might lead to slightly unexpected behaviour when using \parameter{} function arguments that do not relate to the \glspl{avar} introduced in the function.
\paragraph{Parametric neighbourhood selection predicates}
Since there are many well-known ways to make a selection from a set of \glspl{neighbourhood}, we define standard \gls{neighbourhood} selection strategies as predicates that are parametric over the \glspl{neighbourhood} they should apply.
For example, we can define a strategy \mzninline{basic_lns} that applies a \gls{lns} \gls{neighbourhood}.
Since \mzninline{on_restart} now also includes the initial search, we apply the \gls{neighbourhood} only if the current status is not \mzninline{START}.
However, we again face the problem that in \minizinc{} we cannot pass functions or predicates as arguments.
We can however define these predicates to take Boolean \variables{}, the result of evaluating a predicate, as their arguments.
Using this technique, the \mzninline{basic_lns} predicate can be defined as follows.
\begin{mzn}
predicate basic_lns(var bool: nbh) =
  if status() != START then
    nbh
  endif;
\end{mzn}
Still, in order to use this predicate with the \mzninline{on_restart} \gls{annotation}, we cannot simply call a \gls{neighbourhood} predicate as follows.
\begin{mzn}
basic_lns(uniform_neighbourhood(x, 0.2))
\end{mzn}
\noindent{}Calling \mzninline{uniform_neighbourhood} like this would result in a single evaluation of the predicate.
Rather, the modeller has to define their overall strategy in a new nullary predicate.
This ensures that both the selection strategy and the \glspl{neighbourhood} are evaluated on every restart.
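The pattern then looks as follows (a minimal sketch, assuming the declarations from the earlier examples).
\begin{mzn}
predicate my_strategy() =
  basic_lns(uniform_neighbourhood(x, 0.2));

solve ::on_restart("my_strategy") minimize cost;
\end{mzn}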
\Cref{lst:inc-basic-complete} shows a complete example of a basic \gls{lns} model.
\Lref{line:inc:blns:strat} shows the predicate defined to capture the overall strategy.
\begin{listing}[p]
\mznfile{assets/listing/inc_basic_complete.mzn}
\caption{\label{lst:inc-basic-complete} A complete \gls{lns} example.}
\end{listing}
We can also define round-robin and adaptive strategies using these primitives.
\Cref{lst:inc-round-robin} defines a round-robin \gls{lns} selection strategy, which cycles through a list of \mzninline{N} neighbourhoods \mzninline{nbhs}.
To do this, it uses the \variable{} \mzninline{select}.
In the initialization phase (\mzninline{status()=START}), \mzninline{select} is set to \mzninline{-1}, which means none of the \glspl{neighbourhood} is activated.
In any following \gls{restart}, \mzninline{select} is incremented modulo \mzninline{N}, by accessing the last value assigned in a previous \gls{restart}.
This will activate a different \gls{neighbourhood} for each subsequent \gls{restart} (\lref{line:6:roundrobin:post}).
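For example, assuming the predicate in \cref{lst:inc-round-robin} is named \mzninline{round_robin}, the following sketch cycles between two instantiations of \mzninline{uniform_neighbourhood} from \cref{lst:inc-lns-minisearch-pred}.
\begin{mzn}
predicate my_strategy() = round_robin([
  uniform_neighbourhood(x, 0.2),
  uniform_neighbourhood(x, 0.5)
]);
\end{mzn}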
\begin{listing}[p]
\mznfile[l]{assets/listing/inc_round_robin.mzn}
\caption{\label{lst:inc-round-robin} A predicate providing the round-robin selection strategy.}
\end{listing}
For adaptive \gls{lns}, a simple strategy is to change the size of the \gls{neighbourhood} depending on whether the previous size was successful or not.
\Cref{lst:inc-adaptive} shows an adaptive version of the \mzninline{uniform_neighbourhood} that increases the number of free \variables{} when the previous \gls{restart} failed, and decreases it when it succeeded, within the range \([0.6,0.95]\).
\begin{listing}[p]
\mznfile{assets/listing/inc_adaptive.mzn}
\caption{\label{lst:inc-adaptive}A simple adaptive neighbourhood.}
\end{listing}
\subsection{Optimization strategies}
The \gls{meta-optimization} algorithms we have seen so far rely on the default behaviour of \minizinc{} \solvers{} to use \gls{bnb} for optimization: when a new \gls{sol} is found, the \solver{} adds a \constraint{} to the remainder of the search to only accept better \glspl{sol}, as defined by the \gls{objective} in the \mzninline{minimize} or \mzninline{maximize} clause of the \mzninline{solve} item.
When combined with \glspl{restart} and \gls{lns}, this is equivalent to a simple hill-climbing search method.
We can use the constructs introduced above to implement alternative search methods such as simulated annealing.
We use the \mzninline{restart_without_objective} \gls{annotation}, in particular, to tell the solver not to add the \gls{bnb} \constraint{} on \gls{restart}.
It will still use the declared \gls{objective} to decide whether a new \gls{sol} is globally the best one seen so far, and only output those.
This maintains the convention of \minizinc{} \solvers{} that the last \gls{sol} printed at any point in time is the currently best known one.
When using \mzninline{restart_without_objective} the \solver{} is no longer able to decide when the search process is finished.
However, when \gls{rbmo} is used to describe a complete search, we still want to stop the \solver{} when the process is known to be finished.
Therefore, we introduce the following function that can be used to signal to the solver that the search is complete.
\begin{mzn}
function var bool: complete();
\end{mzn}
\noindent{}When the \variable{} returned by this function is \gls{fixed} to \true{}, the search is complete.
If any \gls{sol} was found, it is declared an \gls{opt-sol}.
With \mzninline{restart_without_objective}, the \gls{restart} predicate is now responsible for constraining the \gls{objective}.
Note that a simple hill-climbing (for minimization) can still be defined easily in this context as follows.
\begin{mzn}
predicate hill_climbing() =
  if status() = UNSAT then
    complete()
  elseif status() != START then
    _objective < sol(_objective)
  endif;
\end{mzn}
It takes advantage of the fact that the declared \gls{objective} is available through the built-in \variable{} \mzninline{_objective}.
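To use this predicate, the \mzninline{solve} item can be annotated as follows (a sketch).
\begin{mzn}
solve ::on_restart("hill_climbing")
      ::restart_on_solution
      ::restart_without_objective
      minimize cost;
\end{mzn}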
A more interesting example is a simulated annealing strategy.
When using this strategy, the sequence of \glspl{sol} that the \solver{} finds is not required to steadily improve in quality.
Instead, the \solver{} may accept \glspl{sol} with a worse objective value.
Over time, we decrease the amount by which the objective value of a \gls{sol} is allowed to worsen, until we are only looking for improvements.
This \gls{meta-optimization} can help the \solver{} to quickly approximate the \gls{opt-sol}.
\Cref{lst:inc-sim-ann} shows how this strategy is also easy to express using \gls{rbmo}.
\begin{listing}
\mznfile{assets/listing/inc_sim_ann.mzn}
\caption{\label{lst:inc-sim-ann}A predicate implementing a simulated annealing search.}
\end{listing}
Using the same methods it is also possible to describe optimization strategies with multiple \glspl{objective}.
An example of such a strategy is lexicographic search.
Lexicographic search can be employed when there is a strict order between the importance of different \glspl{objective}.
Once a \gls{sol} is found, we first consider the most important \gls{objective}.
Any subsequent \gls{sol} must improve the quality of this \gls{objective}.
If there does not exist such a \gls{sol}, then we move on to the next important \gls{objective}.
We then look for a \gls{sol} that improves the quality of this \gls{objective}, without decreasing the quality of more important \glspl{objective}.
We then repeat this process until we reach the least important \gls{objective}, and we cannot find \glspl{sol} that improve it any longer.
A predicate that implements lexicographic search is shown in \cref{lst:inc-lex-min}.
\begin{listing}
\mznfile{assets/listing/inc_lex_min.mzn}
\caption{\label{lst:inc-lex-min}A predicate implementing lexicographic search.}
\end{listing}
The lexicographic search predicate tracks the active \gls{objective} using the \mzninline{stage} \variable{}.
Initially, the stage is set to one: it tries to find an \gls{opt-sol} for the first \gls{objective}.
Whenever we find a \gls{sol} we keep the same \mzninline{stage} value, but constrain the \gls{search-space} to find a better \gls{sol}.
When there does not exist a better \gls{sol}, we increase the value of the \mzninline{stage}, but ensure the \glspl{objective} keep their optimal value.
When we reach a value for \mzninline{stage} without an associated \gls{objective}, we are guaranteed to have reached the lexicographic optimum and our search is thus complete.
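For instance, assuming the predicate in \cref{lst:inc-lex-min} is named \mzninline{lex_minimize}, a search that first minimizes a \mzninline{cost} \variable{} and then a \mzninline{makespan} \variable{} might be invoked as follows (a sketch with hypothetical \variables{}).
\begin{mzn}
predicate my_strategy() =
  lex_minimize([cost, makespan]);
\end{mzn}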
There is not always a clear order of importance for different \glspl{objective} in a problem.
We can instead look for a number of diverse \glspl{sol} and allow the user to pick the most acceptable options.
The predicate in \cref{lst:inc-pareto} shows a \gls{meta-optimization} for the Pareto optimality of a pair of \glspl{objective}.
\begin{listing}
\mznfile{assets/listing/inc_pareto.mzn}
\caption{\label{lst:inc-pareto}A predicate implementing Pareto optimality search for two \glspl{objective}.}
\end{listing}
In this implementation we keep track of the number of \glspl{sol} found so far using \mzninline{nsol}.
There is a maximum number we can handle, \mzninline{ms}.
At the start the number of \glspl{sol} is zero.
If we do not find any \glspl{sol}, then we finish the entire search.
Otherwise, we record that we have one more \gls{sol}.
We store the \gls{sol} values in \mzninline{s1} and \mzninline{s2} \glspl{array}.
Before each \gls{restart} we add \constraints{} removing Pareto dominated \glspl{sol}, based on each previous \gls{sol}.
Additionally, we maintain the \glspl{sol} stored in \mzninline{s1} and \mzninline{s2} by assigning them to their previous values, and we avoid search on the unused spaces in the \glspl{array} by assigning them to the lower bound of their respective \gls{objective}.
It should be noted that the \glspl{sol} found by this search will still contain dominated \glspl{sol}, so an additional filtering step would be needed to compute only the non-dominated \glspl{sol}.
\section{Rewriting of Meta-Optimization Algorithms}\label{sec:inc-solver-extension}
The \glspl{neighbourhood} defined in the previous section can be executed with \minisearch{} by adding support for the \mzninline{status} and \mzninline{last_val} built-in functions, and by defining the main \gls{restart} loop.
The \minisearch{} evaluator will then call a \solver{} to produce a \gls{sol}, and evaluate the \gls{neighbourhood} predicate, incrementally producing new \flatzinc{} to be added to the next round of solving.
While this is a viable approach, our goal is to keep \gls{rewriting} and solving separate, by embedding the entire \gls{meta-optimization} algorithm into the \gls{slv-mod}.
This section introduces such a \gls{rewriting} approach.
It only requires simple modifications of the \gls{rewriting} process, and the produced \gls{slv-mod} can be executed by standard \gls{cp} \solvers{} with a small set of simple extensions.
\subsection{Rewriting Overview}
The \gls{neighbourhood} definitions from the previous section have an important property that makes them easy to rewrite: they are defined in terms of standard \minizinc{} expressions, except for a few new functions, such as \mzninline{sol}, \mzninline{status}, and \mzninline{last_val}.
When the \gls{neighbourhood} predicates are evaluated in the \minisearch{} way, the \minisearch{} \interpreter{} implements those functions.
It has to compute the correct value whenever a predicate is evaluated.
Instead, the \gls{rewriting} scheme presented below uses a limited form of partial evaluation: \parameters{} that are known during \gls{rewriting} will be fully evaluated; those only known during the solving, such as the result of a call to any of the new functions (\mzninline{sol}, \mzninline{status}, etc.), are replaced by \variables{}.
This essentially \textbf{turns the new functions into \gls{native} \constraints{}} for the target \solver{}.
The \gls{neighbourhood} predicate can then be added as a \constraint{} to the \gls{slv-mod}.
The evaluation is performed by hijacking the \solver{}'s own capabilities: it will automatically perform the evaluation of the new functions by propagating the \gls{native} \constraints{}.
To compile a \gls{meta-optimization} algorithm to a \gls{slv-mod}, the \gls{rewriting} process performs the following four simple steps.
\begin{enumerate}
\item It replaces the \gls{annotation} \mzninline{::on_restart("X")} with a call to predicate \mzninline{X}.
\item Inside predicate \mzninline{X} and any other predicate called recursively from \mzninline{X}: it must treat any call to functions \mzninline{sol}, \mzninline{status}, and \mzninline{last_val} as returning a \variable{}; and rename calls to random functions, e.g., \mzninline{uniform} to \mzninline{uniform_slv}, in order to distinguish them from their standard library versions.
\item It treats any expression containing a call from step 2 as a \variable{} expression, to ensure these functions are compiled as \constraints{}, rather than directly evaluated during \gls{rewriting}.
\item It rewrites the resulting model using an extension of the \minizinc{} standard library that provides declarations for these functions, as defined in the following subsection.
\end{enumerate}
These transformations will not change the code of many \gls{neighbourhood} definitions, since the functions are often used in positions that accept both \parameters{} and \variables{}.
For example, the \mzninline{uniform_neighbourhood} predicate from \cref{lst:inc-lns-minisearch-pred} uses \mzninline{uniform(0.0, 1.0)} in a \gls{conditional} expression, and \mzninline{sol(x[i])} in an equality \constraint{}.
\Gls{conditional} expressions and equality \constraints{} are defined for \parameters{} as well as \variables{}, so the \gls{rewriting} does not have to be modified when the functions are changed to return \variables{}.
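As a sketch of steps 1 to 3 (identifier names assumed; \cref{lst:inc-flat-pred} later in this chapter shows actual output), the \gls{annotation} is replaced by a predicate call and random functions are renamed.
\begin{mzn}
% Step 1: solve ::on_restart("my_strategy") minimize cost;
% is treated as if the model contained
constraint my_strategy();

% Steps 2-3: within my_strategy, a test such as
%   uniform(0.0, 1.0) > 0.2
% is renamed and treated as a comparison over variables:
%   uniform_slv(0.0, 1.0) > 0.2
\end{mzn}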
\subsection{Rewriting the new functions}
We can rewrite \instances{} that contain the new functions by extending the \minizinc{} standard library using the following definitions.
\paragraph{\mzninline{status}}
\Cref{lst:inc-status} shows the definition of the \mzninline{status} function.
It simply replaces the functional form by a predicate \mzninline{status} (declared in \lref{line:6:status}), which constrains its local variable argument \mzninline{stat} to take the status value.
\begin{listing}
\mznfile[l]{assets/listing/inc_status.mzn}
\caption{\label{lst:inc-status} MiniZinc definition of the \mzninline{status} function.}
\end{listing}
\paragraph{\mzninline{sol} and \mzninline{last_val}}
The \mzninline{sol} function is overloaded for different types.
Our \glspl{slv-mod} do not support overloading.
Therefore, we produce type-specific \gls{native} \constraints{} for every type of \gls{native} \variable{} (e.g., \mzninline{int_sol(x, xi)} and \mzninline{bool_sol(x, xi)}).
The \mzninline{sol} function is resolved into these type-specific \gls{native} \constraints{} using an overloaded definition, like the one shown in \cref{lst:inc-int-sol} for integer \variables{}.
If the value of the \variable{} has become known during \gls{rewriting}, then we use its \gls{fixed} value instead.
Otherwise, the function call is rewritten into the type-specific \mzninline{int_sol} \gls{native} \constraint{} that will be executed by the solver.
We use the current \domain{} of the \variable{}, \mzninline{dom(x)}, as the \domain{} of the \variable{} returned by \mzninline{sol}.
\begin{listing}
\mznfile{assets/listing/inc_sol_function.mzn}
\caption{\label{lst:inc-int-sol} MiniZinc definition of the \mzninline{sol} function for integer \variables{}.}
\end{listing}
The compilation of \mzninline{last_val} is similar to that for \mzninline{sol}.
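By analogy, its definition for integer \variables{} might look as follows (a sketch only; the \gls{native} \constraint{} name \mzninline{int_last_val} is an assumption mirroring \mzninline{int_sol}).
\begin{mzn}
function var int: last_val(var int: x) =
  if is_fixed(x) then
    fix(x)  % value already known during rewriting
  else
    let {
      var dom(x): lx;
      constraint int_last_val(x, lx);
    } in lx
  endif;
\end{mzn}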
\paragraph{Random number functions}
Calls to the random number functions have been renamed by appending \texttt{\_slv}, so that they are not simply evaluated directly.
The definition of these new functions follows the same pattern as for \mzninline{sol}, \mzninline{status}, and \mzninline{last_val}.
The \minizinc{} definition of the \mzninline{uniform_slv} function is shown in \Cref{lst:inc-int-rnd}\footnote{Random number functions need to be marked as \mzninline{::impure} for the compiler not to apply \gls{cse} when they are called multiple times with the same arguments.}.
Note that the function accepts \variables{} as its arguments \mzninline{l} and \mzninline{u}.
As such, it can be used in combination with other functions, such as \mzninline{sol}.
\begin{listing}
\mznfile{assets/listing/inc_uniform_slv.mzn}
\caption{\label{lst:inc-int-rnd} MiniZinc definition of the floating point \mzninline{uniform_slv} function.}
\end{listing}
\paragraph{\mzninline{complete}}
The implementation of the \mzninline{complete} function should simply always return the same Boolean \variable{}.
However, since a function returning a Boolean \variable{} is equivalent to a predicate, it is actually the \gls{reif} of \mzninline{complete} that gets used.
As such, we can define \mzninline{complete} as shown in \cref{lst:inc-complete}.
Note that the \gls{slv-mod} will thus include the \mzninline{complete_reif} \gls{native} \constraint{}.
\begin{listing}
\mznfile{assets/listing/inc_complete.mzn}
\caption{\label{lst:inc-complete} MiniZinc definition of the \mzninline{complete} function.}
\end{listing}
\subsection{Solver Support for Restart Based Native Constraints}
We will now show the minimal extensions required from a \solver{} to interpret the new \gls{native} \constraints{} and, consequently, to execute \gls{meta-optimization} definitions expressed in \minizinc{}.
For a \gls{cp} \solver{}, interpreting these \constraints{} typically means that it must not propagate them in the root node of the search.
First, the \solver{} needs to parse and support the \gls{restart} \glspl{annotation} of \cref{lst:inc-restart-ann}.
Many \solvers{} already support all this functionality.
Second, the \solver{} needs to be able to parse the new \constraints{} \mzninline{status}, and all versions of \mzninline{sol}, \mzninline{last_val}, random number functions like \mzninline{float_uniform}, and \mzninline{complete}.
During solving, the \solver{} needs to execute the new constraints as follows.
\begin{description}
\item[\mzninline{status(s)}] Record the status of the previous \gls{restart}, and fix \mzninline{s} to the recorded status.
\item[\mzninline{sol(x, sx)} (variants)] Constrain \mzninline{sx} to be equal to the value of \mzninline{x} in the incumbent \gls{sol}.
If a \gls{sol} has not yet been found, then it does not constrain \mzninline{sx}.
\item[\mzninline{last_val(x, lx)} (variants)] Constrain \mzninline{lx} to take the last value assigned to \mzninline{x} during search.
If \mzninline{x} was never assigned, then it does not constrain \mzninline{lx}.
Note that many solvers (in particular \gls{sat} and \gls{lcg} solvers) already track \mzninline{last_val} for their \variables{} for use in search.
To support \gls{rbmo} a \solver{} must at least track the last value of each of the \variables{} involved in such a \constraint{}.
This is straightforward by using a \mzninline{last_val} \gls{propagator}.
It wakes up whenever its argument is \gls{fixed}, and updates the last value (a non-\gls{backtrack}able value).
\item[Random number functions] Fix their \variable{} argument to a random number in the appropriate probability distribution.
\item[\mzninline{complete_reif(c)}] Stop and mark the search as ``complete'' when its \variable{} argument is forced to be \true{}.
\end{description}
\noindent{}Importantly, these \constraints{} need to be propagated in a way that their effects can be undone for the next \gls{restart}.
Modifying a solver to support this functionality is straightforward if it already has a mechanism for posting \constraints{} during restarts.
We have implemented these extensions for both \gls{gecode} (214 new lines of code) and \gls{chuffed} (173 new lines of code).
For example, consider the model from \cref{lst:inc-basic-complete} again that uses the following \gls{meta-optimization} algorithm.
\begin{mzn}
basic_lns(uniform_neighbourhood(x, 0.2))
\end{mzn}
\Cref{lst:inc-flat-pred} shows a part of the resulting \gls{slv-mod}.
\Lrefrange{line:6:status:start}{line:6:status:end} define a Boolean variable \mzninline{b1} that takes the value \true{} if-and-only-if the status is not \mzninline{START}.
The second block of code, \lrefrange{line:6:x1:start}{line:6:x1:end}, represents the \gls{decomp} of the following expression.
\begin{mzn}
(status() != START /\ uniform(0.0, 1.0) > 0.2) -> x[1] = sol(x[1])
\end{mzn}
It is the result of merging the implication from the \mzninline{basic_lns} predicate with the \gls{conditional} expression from \mzninline{uniform_neighbourhood}.
The code first introduces and constrains a variable for the random number, then introduces two Boolean \variables{}: \mzninline{b2} is constrained to take the value \true{} if-and-only-if the random number is greater than 0.2; while \mzninline{b3} is the conjunction of \mzninline{b1} and \mzninline{b2}.
\Lref{line:6:x1:sol} constrains \mzninline{x1} to be the value of \mzninline{x[1]} in the previous \gls{sol}.
Finally, the \gls{half-reified} \constraint{} in \lref{line:6:x1:end} implements the following expression.
\begin{mzn}
b3 -> x[1] = sol(x[1])
\end{mzn}
We have omitted the similar code generated for \mzninline{x[2]} to \mzninline{x[n]}.
Note that the \flatzinc{} \gls{slv-mod} shown here has been simplified for presentation.
\begin{listing}
\mznfile[l]{assets/listing/inc_basic_complete_transformed.mzn}
\caption{\label{lst:inc-flat-pred} The \flatzinc{} \gls{slv-mod} that results from \gls{rewriting} \\
\mzninline{basic_lns(uniform_neighbourhood(x, 0.2))}.}
\end{listing}
The first time the \solver{} is invoked, it sets \mzninline{s} to one (\mzninline{START}).
\Gls{propagation} will fix \mzninline{b1} to \false{} and \mzninline{b3} to \false{}.
Therefore, the implication in \lref{line:6:x1:end} is not activated, leaving \mzninline{x[1]} unconstrained.
The \gls{neighbourhood} \constraints{} are effectively switched off.
When the \solver{} \glspl{restart}, all the special \glspl{propagator} are re-executed.
Now \mzninline{s} is not one, and \mzninline{b1} is \gls{fixed} to \true{}.
The \mzninline{float_uniform} \gls{propagator} assigns \mzninline{rnd1} a new random value and, depending on whether it is greater than \mzninline{0.2}, the Boolean \variables{} \mzninline{b2} and, consequently, \mzninline{b3} are assigned.
If \mzninline{b3} is set to \true{}, the \constraint{} in \lref{line:6:x1:end} becomes active and assigns \mzninline{x[1]} to its value in the previous \gls{sol}.
Furthermore, it is not strictly necessary to guard the \gls{neighbourhood} \constraints{} against being applied before the first \gls{sol} is found.
The \mzninline{sol} \constraints{} will simply not propagate anything until the first \gls{sol} is recorded, but we use this simple example to illustrate how these Boolean conditions are rewritten and evaluated.
In \cref{sec:inc-experiments} we assess the performance of \gls{rbmo}, which can be used when supported by the target \solver{}.
In the next section, we discuss how \gls{meta-optimization} can still be efficiently applied even without an extended \solver{}, through the use of \gls{incremental-rewriting}.
\section{An Incremental Constraint Modelling Interface}%
\label{sec:inc-incremental-compilation}
A common approach to implementing \gls{meta-optimization} algorithms using \cmls{} is through \gls{incremental-rewriting}.
We allow a \gls{meta-optimization} algorithm, implemented in \minisearch{} or a scripting language, to make incremental changes to an \instance{} of a \cmodel{}.
To solve these changing \instances{}, they have to be rewritten repeatedly to \glspl{slv-mod}.
In this section we extend our architecture with an incremental constraint modelling interface that accommodates \gls{incremental-rewriting}.
Changes that are made using this interface will require only minimal additional \gls{rewriting}.
The \microzinc{} \interpreter{} is extended to be able to \textbf{add} and \textbf{remove} \nanozinc{} \constraints{} and \variables{} to and from an existing \nanozinc{} program.
Adding new \constraints{} and \variables{} is straightforward.
\nanozinc{} is already processed one item at a time, in any order.
The new items can be added to the program, and the \gls{rewriting} can proceed as normal.
Removing an item, however, is not so simple.
A \variable{} can only be removed if it is not used in any \constraints{}.
And, when we remove a \constraint{}, all effects the \constraint{} had on the \nanozinc{} program have to be undone.
This includes results of \gls{propagation}, \gls{cse} and other simplifications.
\begin{example}\label{ex:inc-incremental}
Consider the following \minizinc\ fragment:
\begin{mzn}
constraint x < 10;
constraint y < x;
\end{mzn}
After evaluating the first \constraint{}, the \domain{} of \mzninline{x} is changed to be less than ten.
Evaluating the second \constraint{} causes the \domain{} of \mzninline{y} to be less than nine.
If we now, however, try to remove the first \constraint{}, then it is not just the direct inference on the \domain{} of \mzninline{x} that has to be undone.
All consequent effects, in this case, the changes to the \domain{} of \mzninline{y}, also have to be reversed.
\end{example}
The addition of \constraints{} that can later be removed is similar to making \glspl{search-decision} in a \gls{cp} \solver{} (see \cref{subsec:back-cp}).
As such, we can support the removal of \constraints{} in reverse chronological order by \gls{backtrack}ing.
The \microzinc{} \interpreter{} maintains a \gls{trail} of the changes to be undone when a \constraint{} is removed.
The \gls{trail} records all changes to the \nanozinc{} program:
\begin{itemize}
\item the addition or removal of new \variables{} or \constraints{},
\item changes made to the \domains{} of \variables{},
\item additions to the \gls{cse} table, and
\item substitutions made due to equality \gls{propagation}.
\end{itemize}
These changes can be caused by the \gls{rewriting} of a \constraint{}, \gls{propagation}, or \gls{cse}.
When a \constraint{} is removed, its corresponding changes are undone by reversing all actions recorded on the \gls{trail} up to the point where the \constraint{} was added.
In order to limit the amount of \gls{trail}ing required, the programmer must create explicit ``choice points'' to which the system state can be reset.
In particular, this means that until the first choice point is created \gls{rewriting} can be performed without any \gls{trail}ing.
\begin{example}\label{ex:inc-trail}
Let us once again consider the following \minizinc{} fragment from \cref{ex:rew-absreif}.
\begin{mzn}
constraint abs(x) > y \/ c;
constraint c;
\end{mzn}
After \gls{rewriting}, it results in the following \nanozinc{} program.
\begin{nzn}
var true: c;
var -10..10: x;
var -10..10: y;
\end{nzn}
Assume that we added a choice point before posting the \constraint{} \mzninline{c}.
Then the \gls{trail} stores the inverse of all modifications that were made to the \nanozinc{} as a result of \mzninline{c}.
The following code fragment illustrates the elements in this \gls{trail}.
The reversal actions are annotated on the different elements.
The \mzninline{attach} and \mzninline{detach} actions signal the \interpreter{} to attach or detach a \constraint{} to or from the argument \variable{}, or the global \cmodel{} in case the argument is the value \true{}.
The \mzninline{add} action adds a \variable{} to the \nanozinc{} program.
Finally, the \mzninline{restore_domain} action replaces the variable declaration in the \nanozinc{} program.
This restores the \domain{} to its previous value.
\begin{nzn}
% Posted c
constraint c ::detach(true);
% Propagated c = true
var bool: c ::restore_domain;
constraint c ::attach(true);
% Simplified bool_or(b1, true) = true
constraint bool_or(b1, c) ::attach(true);
% b became unused...
constraint int_gt_reif(z, y, b) ::attach(b);
var bool: b ::add;
% then z became unused
constraint int_abs(x, z) ::attach(z);
var int: z ::add;
\end{nzn}
To reconstruct the \nanozinc{} program at the choice point, we simply apply the changes recorded in the \gls{trail}, in reverse order.
\end{example}
\subsection{Incremental Solving}
Ideally, the incremental changes made by the \interpreter{} would also be applied incrementally to the \solver{}.
This requires the \solver{} to support both the dynamic addition and removal of \variables{} and \constraints{}.
While some solvers can support this functionality, most \solvers{} have limitations.
We therefore define the following three interfaces through which we can apply the changes to the \solvers{}.
\begin{itemize}
\item Using a non-incremental interface, the \solver{} is reinitialized with the updated \nanozinc\ program every time.
In this case, we still get a performance benefit from the improved \gls{rewriting} time, but not from incremental solving.
\item Using a \textit{warm-starting} interface, the \solver{} is reinitialized with the updated \gls{slv-mod} as above, but it is also given a previous \gls{sol} to initialize some internal data structures.
In particular for mathematical programming \solvers{}, this can result in dramatic performance gains compared to ``cold-starting'' the \solver{} every time.
\item Using a fully incremental interface, the \solver{} is instructed to apply the changes made by the \interpreter{}.
In this case, the \gls{trail} data structure is used to compute the set of \nanozinc{} changes since the last choice point.
\end{itemize}
\section{Experiments}\label{sec:inc-experiments}
In this section we present two experiments to test the efficiency and effectiveness of the incremental methods introduced in this chapter.
Our first experiment considers the effectiveness of \gls{meta-optimization} during solving.
In particular, we investigate a round-robin \gls{lns} implemented using \gls{rbmo}.
On three different \minizinc{} models we compare this approach with solving the \instances{} directly using two different \solvers{}.
For one of the \solvers{}, we also compare with an oracle approach that can directly apply the exact same \gls{neighbourhood} as our \gls{rbmo}, without the need for computation.
As such, we show that the use of \gls{rbmo} introduces an insignificant computational overhead.
Our second experiment evaluates the performance of using \minizinc{} incrementally.
We compare our two methods, \gls{rbmo} and the incremental constraint modelling interface, against the baseline of continuously \gls{rewriting} and reinitializing the \solver{}.
For this comparison, we analyse the time that is required to (repeatedly) rewrite an \instance{} and the time required by the \solver{} for each method.
The first model contains a lexicographic objective.
The second model is shared between the two experiments and uses a round-robin \gls{lns} approach.
We compare the application of a set number of \glspl{neighbourhood}.
A description of the used computational environment, \minizinc{} instances, and versioned software has been included in \cref{ch:benchmarks}.
\subsection{Meta-Optimization Solving}
We will now show that a solver that evaluates the rewritten \gls{meta-optimization} specifications can (a) be effective and (b) incur only a small overhead compared to a dedicated implementation of the \glspl{neighbourhood}.
To measure the overhead, we implemented \gls{rbmo} in \gls{gecode} \autocite{schulte-2019-gecode}.
The resulting solver has been instrumented to also output the \domains{} of all \variables{} after propagating the new \gls{native} \constraints{}.
We implemented another extension to \gls{gecode} that simply reads the stream of \variable{} \domains{} for each \gls{restart}, essentially ``replaying'' the \gls{meta-optimization} without incurring any overhead for evaluating the \glspl{neighbourhood} or handling the additional \variables{} and \constraints{}.
Note that this is a conservative estimate of the overhead: this extension has to perform less work than any real \gls{meta-optimization} implementation.
In addition, we also present benchmark results for the standard release of \gls{gecode}, the development version of \gls{chuffed}, and \gls{chuffed} performing \gls{rbmo}.
These experiments illustrate that the \gls{meta-optimization} implementations indeed perform well compared to the standard \solvers{}.
We ran experiments for three models from the MiniZinc Challenge \autocite{stuckey-2010-challenge, stuckey-2014-challenge} (\texttt{gbac}, \texttt{steelmillslab}, and \texttt{rcpsp-wet}).
The \instances{} are rewritten using the \minizinc{} 2.5.5 \compiler{}, adjusted to rewrite \gls{rbmo} specifications.
For each model we use a round-robin \gls{lns} approach, shown in \cref{lst:inc-round-robin}, of two \glspl{neighbourhood}: a \gls{neighbourhood} that destroys 20\% of the main \variables{} (\cref{lst:inc-lns-minisearch-pred}) and a structured \gls{neighbourhood} for the model (described below).
For each solving method we show the integral of the cumulative objective values for all \instances{} of the model.
The underlying search strategy used is the one defined through the \gls{search-heuristic} \glspl{annotation} in the original model.
The restart strategy uses the following \glspl{annotation}.
\begin{mzn}
::restart_constant(250) ::restart_on_solution
\end{mzn}
The benchmarks are repeated with 10 different random seeds and the average is shown.
The overall time out for each run is 120 seconds.
\subsubsection{GBAC}%
\label{ssubsec:inc-exp-gbac1}
The \gls{gbac} problem comprises courses having a specified number of credits and lasting a certain number of periods, load limits of courses for each period, prerequisites for courses, and preferences of teaching periods for professors.
A detailed description of the problem is given in \autocite{chiarandini-2012-gbac}.
The main decisions are to assign courses to periods, which is done via the \variables{} \mzninline{period_of} in the \minizinc{} model.
\Cref{lst:inc-free-period} shows the chosen \gls{neighbourhood}, which fixes the periods of all courses according to the previous \gls{sol}, except for the courses in one randomly selected period.
\begin{listing}[b]
\mznfile{assets/listing/inc_gbac_neighbourhood.mzn}
\caption{\label{lst:inc-free-period}\gls{gbac}: \gls{neighbourhood} freeing all courses in a period.}
\end{listing}
\begin{figure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\columnwidth]{assets/img/inc_obj_gecode_gbac.pdf}
\caption{\label{subfig:inc-obj-gecode-gbac}\gls{gecode}}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\columnwidth]{assets/img/inc_obj_chuffed_gbac.pdf}
\caption{\label{subfig:inc-obj-chuffed-gbac}\gls{chuffed}}
\end{subfigure}
\caption{\label{fig:inc-obj-gbac}\gls{gbac}: integral of cumulative objective value of solving 5 \instances{}.}
\end{figure}
The results for the \gls{gbac} in \cref{fig:inc-obj-gbac} show that the overhead introduced by \gls{gecode} using \gls{rbmo} compared to replaying the \glspl{neighbourhood} is quite low.
The lines in the graph do not show any significant differences or delays.
Both perform much better than the baseline \gls{gecode} \solver{}.
Since learning is not very effective for \gls{gbac}, the performance of \gls{chuffed} is similar to that of \gls{gecode}.
The use of \gls{lns} again significantly improves over standard \gls{chuffed}.
\subsubsection{Steel Mill Slab}
The steel mill slab design problem consists of cutting slabs into smaller ones, so that all orders are fulfilled while minimizing the wastage.
The steel mill only produces slabs of certain sizes, and orders have both a size and a colour.
We have to assign orders to slabs, with at most two different colours on each slab.
The model uses the \variables{} \mzninline{assign} for deciding which order is assigned to which slab.
\Cref{lst:inc-free-bin} shows a structured \gls{neighbourhood} that fixes the orders assigned to all slabs according to the previous \gls{sol}, except for the orders of one randomly selected slab.
These orders can then be freely reassigned to any other slab.
\begin{listing}
\mznfile{assets/listing/inc_steelmillslab_neighbourhood.mzn}
\caption{\label{lst:inc-free-bin}Steel mill slab: \gls{neighbourhood} that frees all orders assigned to a selected slab.}
\end{listing}
\begin{figure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\columnwidth]{assets/img/inc_obj_gecode_steelmillslab.pdf}
\caption{\label{subfig:inc-obj-gecode-steelmillslab}\gls{gecode}}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\columnwidth]{assets/img/inc_obj_chuffed_steelmillslab.pdf}
\caption{\label{subfig:inc-obj-chuffed-steelmillslab}\gls{chuffed}}
\end{subfigure}
\caption{\label{fig:inc-obj-steelmillslab}Steel mill slab: integral of cumulative objective value of solving 5 \instances{}.}
\end{figure}
\Cref{subfig:inc-obj-gecode-steelmillslab} again only shows minimal overhead for the \gls{rbmo} compared to replaying the \glspl{neighbourhood}.
For this problem a \gls{sol} with zero wastage is always optimal.
As such, the \gls{lns} approaches are sometimes able to prove that a \gls{sol} is optimal and might finish before the time-out.
This is the case for \gls{chuffed}, where almost all \instances{} are solved to optimality using the \gls{rbmo} method.
As expected, the \gls{lns} approaches find better \glspl{sol} more quickly for \gls{gecode}.
However, we do see that, given enough time, baseline \gls{gecode} will eventually find better \glspl{sol}.
% RCPSP/wet
\subsubsection{RCPSP/wet}
The \gls{rcpsp} with Weighted Earliness and Tardiness cost is a classic scheduling problem in which tasks need to be scheduled subject to precedence \constraints{} and cumulative resource restrictions.
The objective is to find an optimal schedule that minimizes the weighted cost of the earliness and tardiness for tasks that are not completed by their proposed deadline.
The \variables{} in \gls{array} \mzninline{s} represent the start times of each task in the model.
\Cref{lst:inc-free-timeslot} shows our structured \gls{neighbourhood} for this model.
It fixes the start times of all tasks to those in the previous \gls{sol}, except for the tasks starting within a randomly selected time interval spanning one-tenth of the planning horizon.
Tasks scheduled in this period can be reshuffled.
\begin{listing}
\mznfile{assets/listing/inc_rcpsp_neighbourhood.mzn}
\caption{\label{lst:inc-free-timeslot}\Gls{rcpsp}/wet: \gls{neighbourhood} freeing all tasks starting in the drawn interval.}
\end{listing}
\begin{figure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\columnwidth]{assets/img/inc_obj_gecode_rcpspw.pdf}
\caption{\label{subfig:inc-obj-gecode-rcpspw}\gls{gecode}}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\columnwidth]{assets/img/inc_obj_chuffed_rcpspw.pdf}
\caption{\label{subfig:inc-obj-chuffed-rcpspw}\gls{chuffed}}
\end{subfigure}
\caption{\label{fig:inc-obj-rcpspw}\gls{rcpsp}/wet: integral of cumulative objective value of solving 5 \instances{}.}
\end{figure}
\Cref{fig:inc-obj-rcpspw} shows once more that \gls{rbmo} has little overhead over the replaying approach.
These approaches perform significantly better than the baseline \gls{gecode} \solver{}.
The baseline \gls{chuffed} \solver{} performs well and keeps improving its \glspl{sol} for longer than the baseline \gls{gecode} \solver{}.
However, \gls{lns} makes \gls{chuffed} much more robust.
\subsection{Incremental Rewriting and Solving}
To demonstrate the advantage that dedicated incremental methods for \minizinc{} offer, we present a runtime evaluation of two \gls{meta-optimization} algorithms implemented using the two methods presented in this chapter.
We consider the use of lexicographic search and round-robin \gls{lns}, each evaluated on a different \minizinc{} model.
We compare our methods against a naive system that repeatedly programmatically changes an \instance{}, rewrites the full \instance{}, and starts a new \solver{} process each time.
The solving in our tests is performed by the \gls{gecode} \gls{solver}.
When using our incremental interface, it is connected using the fully incremental \gls{api}.
The naive system and incremental interface use our architecture prototype to rewrite the \instances{}.
Like the previous experiment, the \gls{rbmo} \instances{} are rewritten using the \minizinc{} 2.5.5 \compiler{}, adjusted to rewrite \gls{rbmo} specifications.
We compare the time that is required to (repeatedly) rewrite an \instance{} and the time taken by the \solver{}.
Each test is performed 10 times and the average time is shown.
\begin{figure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\columnwidth]{assets/img/inc_cmp_lex.pdf}
\caption{\label{subfig:inc-cmp-lex}Radiation}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\columnwidth]{assets/img/inc_cmp_lns.pdf}
\caption{\label{subfig:inc-cmp-lns}GBAC}
\end{subfigure}
\caption{\label{fig:inc-cmp} A comparison of the two new incremental techniques and a recompilation strategy.}
\end{figure}
\subsubsection{Radiation}
Our first model is based on a problem of planning cancer radiation therapy treatment using multi-leaf collimators \autocite{baatar-2011-radiation}.
Two characteristics mark the quality of a \gls{sol}: the amount of time the patient is exposed to radiation, and the number of ``shots'' or different angles the treatment requires.
However, the first characteristic is considered more important than the second.
The problem therefore has a lexicographic objective: a \gls{sol} is better if it requires a strictly shorter exposure time, or the same exposure time but a lower number of ``shots''.
\minizinc{} does not support lexicographic objectives directly, but a lexicographic search can be modelled in \minizinc{}, as shown in \cref{lst:inc-lex-min}.
The results appear in \cref{subfig:inc-cmp-lex}.
They show that both incremental methods have a clear advantage over our naive baseline approach.
Although some additional time is spent \gls{rewriting} the \gls{rbmo} models compared to the \gls{incremental-rewriting} method, the difference is minimal.
The \gls{rewriting} of the \gls{rbmo} \instances{} would likely take less time when using the new prototype architecture.
Between all the methods, solving time is very similar, but \gls{rbmo} seems to have a slight advantage.
We do not notice any benefit from the use of the incremental solver \gls{api}.
\subsubsection{GBAC}
We now revisit the model and method from \cref{ssubsec:inc-exp-gbac1} and compare the efficiency of using round-robin \gls{lns}.
Instead of setting a time limit, we limit the number of \glspl{restart} that the \solver{} makes.
As such, we limit the number of \glspl{neighbourhood} that are computed and applied to the \instance{}.
It should be noted that the \gls{rbmo} method is not guaranteed to apply the exact same \glspl{neighbourhood}, due to the difference caused by random number generation.
\Cref{subfig:inc-cmp-lns} shows the results of applying 1000 \glspl{neighbourhood}.
In this case there is an even more pronounced difference between the incremental methods and the naive method.
It is surprising to see that the application of 1000 \glspl{neighbourhood} using \gls{incremental-rewriting} still performs very similarly to \gls{rewriting} the \gls{rbmo} method.
The solving times are similar between all methods and once again, we do not see any real advantage of the use of the incremental \solver{} \gls{api}.
The advantage in solve time using \gls{rbmo} is more pronounced here, but it is hard to draw conclusions since the \glspl{neighbourhood} of this method are not exactly the same.
\section{Summary}
This chapter presented two novel methods that strive to eliminate the overhead of using \cmls{} for \gls{meta-optimization}.
Primarily, we introduce \gls{rbmo}, a method that allows modellers to describe \gls{meta-optimization} algorithms in \minizinc{} as applying a function on every restart.
This method is intuitive and can be used to describe many well-known \gls{meta-optimization} techniques.
Crucially, the \gls{rbmo} definition created by the modeller can be rewritten into the \gls{slv-mod}.
Although new \solver{} functionality is required, this method eliminates the need for repeated \gls{rewriting}.
We have shown that this method is very efficient, even when compared to an unrealistic ``oracle'' approach that does not require any computations.
When required, however, the architecture presented in this thesis can efficiently perform \gls{incremental-rewriting}.
The architecture provides an incremental interface that allows the modeller to add and later retract \variables{} and \constraints{} in chronological order.
Internally, this \gls{rewriting} process is extended to maintain a \gls{trail} to be able to retract \constraints{} and minimize the portion of the \instance{} to be rewritten again.
\Gls{incremental-rewriting} is shown to save a significant amount of time, compared to repeatedly \gls{rewriting} the full \instance{}.
We also discussed how incremental changes to the \gls{slv-mod} can be communicated to the \solver{} at different levels, but we could not show a clear benefit of the incremental communication.
The next and final chapter presents the conclusions of this thesis, which reiterate the discoveries and contributions of this research to theory and practice, comment on the scope and limitations of the presented architecture, and present further avenues for research in this area.