Work on the incremental chapter

Jip J. Dekker 2021-07-19 16:48:03 +10:00
parent eda9a60711
commit 81a5efb646
No known key found for this signature in database
GPG Key ID: 517DF4A00618C9C3
8 changed files with 332 additions and 276 deletions


@ -6,6 +6,6 @@ var int: cost; % objective function
% The user-defined LNS strategy
predicate my_lns() = basic_lns(uniform_neighbourhood(x, 0.2));
% Solve using my_lns, restart every 500 nodes, overall timeout 120 seconds
solve ::on_restart("my_lns") ::restart_constant(500) ::timeout(120)
minimize cost;


@ -10,7 +10,7 @@ constraint float_gt_reif(rnd1,0.2,b2);
var bool: b3;
constraint bool_and(b1,b2,b3);
var 1..3: x1;
constraint int_sol(x[1],x1); % <-- Previous Solution @\Vlabel{line:6:x1:sol}@
% (status() != START /\ uniform(0.0, 1.0) > 0.2) -> x[1] = sol(x[1])
constraint int_eq_imp(x[1],x1,b3); % <-- Neighbourhood Constraint @\Vlabel{line:6:x1:end}@
...


@ -1,12 +1,12 @@
predicate round_robin(array[int] of var bool: nbhs) =
  let {
    int: N = length(nbhs);
    var -1..N-1: select; % Neighbourhood selection
  } in if status()=START then
    select = -1
  else
    select = (last_val(select) + 1) mod N
  endif
  /\ forall(i in 1..N) (
    (select = i-1) -> nbhs[i] % <-- Post Neighbourhood@\Vlabel{line:6:roundrobin:post}@
  );


@ -4,7 +4,7 @@ function int: sol(var int: x) =
fix(x)
else
let {
var dom(x): xi;
constraint int_sol(x,xi);
} in xi
endif;


@ -85,9 +85,9 @@
}
\newcommand{\Vlabel}[1]{\label[line]{#1}\hypertarget{#1}{}}
\newcommand{\lref}[1]{line~\hyperlink{#1}{\ref*{#1}}}
\newcommand{\Lref}[1]{Line~\hyperlink{#1}{\ref*{#1}}}
\newcommand{\lrefrange}[2]{lines~\hyperlink{#1}{\ref*{#1}}--\hyperlink{#2}{\ref*{#2}}}
\newcommand{\Lrefrange}[2]{Lines~\hyperlink{#1}{\ref*{#1}}--\hyperlink{#2}{\ref*{#2}}}
% Proof Tree


@ -584,7 +584,7 @@ A \gls{cp} \solver{} is only able to prove that the \instance{} is \gls{unsat} b
This means that some \gls{propagation} depends on the search decision.
Therefore, if the \solver{} needs to reconsider a search decision, then it must also undo all \domain{} changes that were caused by \gls{propagation} dependent on that search decision.
The most common method used by \gls{cp} \solvers{} to keep track of \gls{domain} changes is a \gls{trail} data structure \autocite{warren-1983-wam}.
Every \domain{} change that is made during \gls{propagation}, after the first search decision, is stored in a list.
Whenever a new search decision is made, the current position of the list is tagged.
If the \solver{} now needs to undo a search decision (\ie\ \gls{backtrack}), it reverses all changes until it reaches the change that is tagged with the search decision.


@ -2,32 +2,36 @@
\chapter{Incremental Processing}\label{ch:incremental}
%************************************************
\glsreset{rbmo}
\input{chapters/5_incremental_preamble}
\section{Modelling of Restart-Based Meta-Optimisation}\label{sec:6-modelling}
This section introduces a \minizinc{} extension that enables modellers to define \gls{meta-optimisation} methods in \minizinc{}.
This extension is based on the construct introduced in \minisearch{} \autocite{rendl-2015-minisearch}, as summarised below.
\subsection{Meta-Optimisation in MiniSearch}\label{sec:6-minisearch}
% Most \gls{lns} literature discusses neighbourhoods in terms of ``destroying'' part of
% a solution that is later repaired. However, from a declarative modelling point
% of view, it is more natural to see neighbourhoods as adding new \constraints{} and
% variables that need to be applied to the base model, \eg\ forcing variables to
% take the same value as in the previous solution.
\minisearch{} introduced a \minizinc{} extension that enables modellers to express \gls{meta-optimisation} inside a \minizinc{} model.
A \gls{meta-optimisation} method in \minisearch{} typically solves a given \minizinc{} model, performs some calculations on the solution, adds new \constraints{} and then solves again.
Most \gls{meta-optimisation} definitions in \minisearch{} consist of two parts.
The first part is a declarative definition of any restriction to the \gls{search-space} that the \gls{meta-optimisation} method can apply, called a \gls{neighbourhood}.
In \minisearch{} these definitions can make use of the following function.
\begin{mzn}
function int: sol(var int: x)
\end{mzn}
It returns the value that variable \mzninline{x} was assigned to in the previous \gls{sol} (similar functions are defined for Boolean, float and set variables).
This allows the \gls{neighbourhood} to be defined in terms of the previous \gls{sol}.
In addition, a \gls{neighbourhood} definition will typically make use of the random number generators available in \minizinc{}.
\Cref{lst:6-lns-minisearch-pred} shows a simple random \gls{neighbourhood}.
For each decision variable \mzninline{x[i]}, it draws a random number from a uniform distribution.
If it exceeds the threshold \mzninline{destr_rate}, then it posts \constraints{} forcing \mzninline{x[i]} to take the same value as in the previous \gls{sol}.
For example, \mzninline{uniform_neighbourhood(x, 0.2)} would result in each variable in the \gls{array} \mzninline{x} having a 20\% chance of being unconstrained, and an 80\% chance of being assigned to the value it had in the previous \gls{sol}.
\begin{listing}
\mznfile{assets/listing/6_lns_minisearch_pred.mzn}
@ -39,46 +43,56 @@ For example, \mzninline{uniform_neighbourhood(x, 0.2)} would result in each vari
\caption{\label{lst:6-lns-minisearch} A simple \gls{lns} \gls{meta-optimisation} implemented in \minisearch{}}
\end{listing}
The second part of a \minisearch{} \gls{meta-optimisation} is the \gls{meta-optimisation} method itself.
\Cref{lst:6-lns-minisearch} shows a \minisearch{} implementation of a basic \gls{lns} method, called \mzninline{lns}.
It performs a fixed number of iterations, each invoking the \gls{neighbourhood} predicate \mzninline{uniform_neighbourhood} in a fresh scope.
This means that the \constraints{} only affect the current loop iteration.
It then searches for a solution (\mzninline{minimize_bab}) with a given timeout.
If the search does return a new solution, then it commits to that solution and it becomes available to the \mzninline{sol} function in subsequent iterations.
The \mzninline{lns} function also posts the \constraint{} \mzninline{obj < sol(obj)}, ensuring the objective value in the next iteration is strictly better than that of the current solution.
Although \minisearch{} enables the modeller to express \glspl{neighbourhood} in a declarative way, the definition of the \gls{meta-optimisation} methods is rather unintuitive and difficult to debug, leading to unwieldy code for defining even simple methods.
Furthermore, the \minisearch{} implementation requires either a close integration of the backend \solver{} into the \minisearch{} system, or it drives the solver through the regular text file based \flatzinc{} interface.
This leads to a significant communication overhead.
To address these two issues, we propose to keep modelling \glspl{neighbourhood} as predicates, but define \gls{meta-optimisation} methods from an imperative perspective.
We define a few additional \minizinc{} \glspl{annotation} and functions that
% The approach we follow here is therefore to \textbf{extend \flatzinc}, such that
% the definition of neighbourhoods can be communicated to the solver together with
% the problem instance. This maintains the loose coupling of \minizinc\ and
% solver, while avoiding the costly communication and cold-starting of the
% black-box approach.
\begin{itemize}
\item allow us to express important aspects of \gls{meta-optimisation} methods in a more convenient way,
\item and enable a simple \gls{rewriting} scheme that requires no additional communication with the target \solver{}, and only small, simple extensions of it.
\end{itemize}
\subsection{Restart Annotation}
Instead of the complex \minisearch{} definitions, we propose to add support for \glspl{meta-optimisation} that are purely based on the notion of \glspl{restart}.
A \gls{restart} happens when a solver abandons its current search efforts, returns to its initial \gls{search-space}, and begins a new exploration.
Many \gls{cp} \solvers{} already provide support for controlling their restarting behaviour.
They can periodically restart after a certain number of nodes, or restart for every \gls{sol}.
Typically, \solvers{} also support posting additional \constraints{} upon restarting that are only valid for the particular \gls{restart} (\eg{} Comet \autocite{michel-2005-comet}).
These \constraints{} are ``retracted'' for the next \gls{restart}.
In its simplest form, we can therefore implement \gls{lns} by merely specifying a \gls{neighbourhood} predicate.
We then annotate the \mzninline{solve} item with the following \gls{annotation} to indicate the predicate should be invoked upon each restart.
\begin{mzn}
solve ::on_restart(my_neighbourhood) minimize cost;
\end{mzn}
Note that \minizinc{} currently does not support passing functions or predicates as arguments.
Calling the predicate, as in \mzninline{::on_restart(my_neighbourhood())}, would not have the correct semantics.
The predicate needs to be called for each restart.
As a workaround, we currently pass the name of the predicate to be called for each restart as a string (see the definition of the new \mzninline{on_restart} annotation in \cref{lst:6-restart-ann}).
The second component of our \gls{lns} definition is the ``restarting strategy'', defining how much effort the solver should put into each neighbourhood (\ie\ restart), and when to stop the overall search.
The second component of our \gls{lns} definition is the ``restarting strategy'', defining how much effort the \solver{} should put into each \gls{neighbourhood} (\ie{} \gls{restart}), and when to stop the overall search.
We propose adding new search \glspl{annotation} to the \minizinc{} language to control this behaviour.
The proposed \glspl{annotation} are shown in \cref{lst:6-restart-ann}.
The \mzninline{restart_on_solution} annotation tells the solver to restart immediately for each \gls{sol}, rather than looking for the best one in each restart.
The \mzninline{restart_without_objective} tells it not to add \gls{bnb} \constraints{} on the \gls{objective}.
The other \mzninline{restart_X} annotations define different strategies for restarting the search when no \gls{sol} is found.
The \mzninline{timeout} annotation gives an overall time limit for the search.
Finally, the \mzninline{restart_limit} stops the search after a fixed number of \glspl{restart}.
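As an illustration, these \glspl{annotation} can be combined on a single \mzninline{solve} item.
The following sketch (mirroring the example used elsewhere in this chapter) invokes the \mzninline{my_lns} predicate on every \gls{restart}, restarts every 500 nodes, and stops the overall search after 120 seconds.
\begin{mzn}
solve ::on_restart("my_lns")
      ::restart_constant(500)
      ::timeout(120)
      minimize cost;
\end{mzn}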
\begin{listing}
\mznfile{assets/listing/6_restart_ann.mzn}
@ -86,47 +100,65 @@ The \mzninline{timeout} annotation gives an overall time limit for the search, w
behaviour}
\end{listing}
\subsection{Advanced Meta-Optimisation Methods}
Although the \gls{restart} \glspl{annotation} by themselves allow us to run a basic \gls{lns}, more advanced \gls{meta-optimisation} methods require more than reapplying the same \gls{neighbourhood} time after time.
It is, for example, often beneficial to use several \gls{neighbourhood} definitions within the same \instance{}.
Different \glspl{neighbourhood} may be able to improve different aspects of a \gls{sol} at different phases of the search.
Adaptive \gls{lns} \autocite{ropke-2006-adaptive, pisinger-2007-heuristic} is a variant of \gls{lns} that keeps track of the \glspl{neighbourhood} that led to improvements and favours them for future iterations.
This method has proven to be a highly successful approach to \gls{lns}.
However, even a simpler scheme that applies several \glspl{neighbourhood} in a round-robin fashion can be effective.
In \minisearch{}, these adaptive or round-robin approaches can be implemented using ``state variables'', which support destructive update (overwriting the value they store).
In this way, the \minisearch{} strategy can store values to be used in later iterations.
We use the ``\solver{} state'' instead.
All values are stored in normal \variables{}.
We then access them using two simple functions that reveal the solver state of the previous \gls{restart}.
This approach is sufficient for expressing many \gls{meta-optimisation} methods, and its implementation is much simpler.
\paragraph{State access and initialisation}
The state access functions are defined in \cref{lst:6-state-access}.
Function \mzninline{status} returns the status of the previous restart, namely:
\begin{description}
\item[\mzninline{START}] there has been no \gls{restart} yet;
\item[\mzninline{UNSAT}] the \gls{search-space} of the last \gls{restart} was \gls{unsat};
\item[\mzninline{SAT}] the last \gls{restart} found a \gls{sol};
\item[\mzninline{OPT}] the last \gls{restart} found and proved an \gls{opt-sol} in its \gls{search-space};
\item[\mzninline{UNKNOWN}] the last \gls{restart} did not find a \gls{sol}, but did not exhaust its \gls{search-space}.
\end{description}
\noindent{}Function \mzninline{last_val} allows modellers to access the last value assigned to a \variable{}.
The value is undefined when \mzninline{status()=START}.
Like \mzninline{sol}, it has versions for all basic \variable{} types.
\begin{listing}
\mznfile{assets/listing/6_state_access.mzn}
\caption{\label{lst:6-state-access} Functions for accessing previous \solver{} states}
\end{listing}
In order to be able to initialise the variables used for state access, we reinterpret \mzninline{on_restart} so that the predicate is also called for the initial search (\ie\ before the first ``real'' restart) with the same semantics, that is, any \constraint{} posted by the predicate will be retracted for the next restart.
In order to initialise the \variables{} used for state access, we reinterpret \mzninline{on_restart} to also apply to the initial search, with the same semantics.
As such, the predicate is also called before the first ``real'' \gls{restart}.
Any \constraint{} posted by the predicate will be retracted for the next \gls{restart}.
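For example, a hypothetical predicate that counts the number of \glspl{restart} can be sketched using this initialisation semantics (the name \mzninline{count_restarts} and the \variable{} \mzninline{counter} are illustrative, not part of the proposed library).
\begin{mzn}
predicate count_restarts(var int: counter) =
  if status() = START then
    counter = 0 % initial search: initialise the state
  else
    counter = last_val(counter) + 1 % later restarts: increment
  endif;
\end{mzn}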
\paragraph{Parametric neighbourhood selection predicates}
We define standard \gls{neighbourhood} selection strategies as predicates that are parametric over the \glspl{neighbourhood} they should apply.
For example, we can define a strategy \mzninline{basic_lns} that applies an \gls{lns} \gls{neighbourhood}.
Since \mzninline{on_restart} now also includes the initial search, we apply the \gls{neighbourhood} only if the current status is not \mzninline{START}, as shown in the following predicate.
\begin{mzn}
predicate basic_lns(var bool: nbh) = (status()!=START -> nbh);
\end{mzn}
In order to use this predicate with the \mzninline{on_restart} annotation, we cannot simply call a \gls{neighbourhood} predicate as follows.
\begin{mzn}
basic_lns(uniform_neighbourhood(x, 0.2))
\end{mzn}
\noindent{}Calling \mzninline{uniform_neighbourhood} like that would result in a single evaluation of the predicate, since \minizinc{} employs a call-by-value evaluation strategy.
\noindent{}Calling \mzninline{uniform_neighbourhood} like this would result in a single evaluation of the predicate, since \minizinc{} employs a call-by-value evaluation strategy.
Furthermore, the \mzninline{on_restart} annotation only accepts the name of a nullary predicate.
Therefore, users have to define their overall strategy in a new predicate.
\Cref{lst:6-basic-complete} shows a complete example of a basic \gls{lns} model.
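For instance, a minimal model in this style combines \mzninline{basic_lns} and \mzninline{uniform_neighbourhood} in a nullary predicate, and passes its name to \mzninline{on_restart}, as in the following sketch.
\begin{mzn}
% User-defined overall strategy as a nullary predicate
predicate my_lns() = basic_lns(uniform_neighbourhood(x, 0.2));
solve ::on_restart("my_lns") minimize cost;
\end{mzn}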
@ -138,13 +170,10 @@ Therefore, users have to define their overall strategy in a new predicate.
We can also define round-robin and adaptive strategies using these primitives.
\Cref{lst:6-round-robin} defines a round-robin \gls{lns} meta-heuristic, which cycles through a list of \mzninline{N} neighbourhoods \mzninline{nbhs}.
To do this, it uses the \variable{} \mzninline{select}.
In the initialisation phase (\mzninline{status()=START}), \mzninline{select} is set to \mzninline{-1}, which means none of the \glspl{neighbourhood} is activated.
In any following \gls{restart}, \mzninline{select} is incremented modulo \mzninline{N}, by accessing the last value assigned in a previous \gls{restart}.
This will activate a different \gls{neighbourhood} for each subsequent \gls{restart} (\lref{line:6:roundrobin:post}).
\begin{listing}
\mznfile{assets/listing/6_round_robin.mzn}
@ -154,8 +183,8 @@ In any following restart, \mzninline{select} is incremented modulo \mzninline{N}
%\paragraph{Adaptive \gls{lns}}
For adaptive \gls{lns}, a simple strategy is to change the size of the \gls{neighbourhood} depending on whether the previous size was successful or not.
\Cref{lst:6-adaptive} shows an adaptive version of the \mzninline{uniform_neighbourhood} that increases the number of free \variables{} when the previous \gls{restart} failed, and decreases it when it succeeded, within the bounds \([0.6,0.95]\).
\begin{listing}
\mznfile{assets/listing/6_adaptive.mzn}
@ -164,27 +193,28 @@ For adaptive \gls{lns}, a simple strategy is to change the size of the neighbour
\subsection{Optimisation Strategies}
The \gls{meta-optimisation} methods we have seen so far rely on the default behaviour of \minizinc{} \solvers{} to use a \gls{bnb} method for optimisation: when a new \gls{sol} is found, the \solver{} adds a \constraint{} to the remainder of the search to only accept better \glspl{sol}, as defined by the \gls{objective} in the \mzninline{minimize} or \mzninline{maximize} clause of the \mzninline{solve} item.
When combined with \glspl{restart} and \gls{lns}, this is equivalent to a simple hill-climbing meta-heuristic.
We can use the constructs introduced above to implement alternative meta-heuristics such as simulated annealing.
We use the \mzninline{restart_without_objective} \gls{annotation}, in particular, to tell the solver not to add the \gls{bnb} \constraint{} on \gls{restart}.
It will still use the declared \gls{objective} to decide whether a new solution is globally the best one seen so far, and only output those.
This maintains the convention of \minizinc{} \solvers{} that the last \gls{sol} printed at any point in time is the currently best known one.
With \mzninline{restart_without_objective}, the \gls{restart} predicate is now responsible for constraining the \gls{objective}.
Note that a simple hill-climbing (for minimisation) can still be defined easily in this context as follows.
\begin{mzn}
predicate hill_climbing() = status() != START -> _objective < sol(_objective);
\end{mzn}
It takes advantage of the fact that the declared \gls{objective} is available through the built-in \variable{} \mzninline{_objective}.
A more interesting example is a simulated annealing strategy.
When using this strategy, the \glspl{sol} that the \solver{} finds are no longer required to steadily improve in quality.
Instead, we ask the \solver{} to find a \gls{sol} that is a significant improvement over the previous \gls{sol}.
Over time, we decrease the amount by which we require the \gls{sol} to improve, until we are just looking for any improvement.
This \gls{meta-optimisation} method can quickly improve the quality of \glspl{sol}, and can thereby reach the \gls{opt-sol} sooner.
This strategy is also easy to express using \gls{rbmo} as follows.
\begin{mzn}
predicate simulated_annealing(float: init_temp, float: cooling_rate) =
@ -198,11 +228,13 @@ This strategy is also easy to express using our restart-based modelling:
endif;
\end{mzn}
\todo{introduce the complete() predicate}
Using the same methods it is also possible to describe optimisation strategies with multiple \glspl{objective}.
An example of such a strategy is lexicographic search.
Lexicographic search can be employed when there is a strict order between the importance of different \variables{}.
It requires that, once a \gls{sol} is found, each subsequent \gls{sol} must either improve the first \gls{objective}, or have the same value for the first \gls{objective} and improve the second \gls{objective}, or have the same value for the first two \glspl{objective} and improve the third \gls{objective}, and so on.
We can model this strategy using \gls{rbmo} as follows.
\begin{mzn}
predicate lex_minimize(array[int] of var int: o) =
  ...
    stage = lastval(stage)
    /\ o[stage] < sol(o[stage])
  endif
  /\ _objective = o[stage]
  /\ forall(i in min(index_set(o))..stage-1) (
    o[i] = sol(o[i])
  )
  ...
  endif;
\end{mzn}
The lexicographic search changes the \gls{objective} at each stage in the evaluation.
Initially, the search is in stage one: it tries to find an \gls{opt-sol} for the first \gls{objective}.
Whenever we find a \gls{sol} we stay in the same stage, but constrain the \gls{search-space} to find a better \gls{sol}.
When there exists no better \gls{sol}, we move on to the next stage, trying to find the \gls{opt-sol} for the next \gls{objective} while the earlier \glspl{objective} maintain their value.
This process is repeated until we reach the final \gls{objective}.
When we reach a stage without an associated \gls{objective}, we are guaranteed to have reached the lexicographic optimal.
Our search is thus complete.
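Like the \gls{lns} \glspl{neighbourhood}, this strategy is attached to the solving process using the \mzninline{on_restart} \gls{annotation}. The fragment below is a sketch, assuming \variables{} \mzninline{o1}, \mzninline{o2}, and \mzninline{o3} in order of importance.

\begin{mzn}
predicate my_lex() = lex_minimize([o1, o2, o3]);
solve ::on_restart("my_lex") minimize _objective;
\end{mzn}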
There is not always a clear order of importance for different \glspl{objective} in a problem.
We can instead look for a number of diverse \glspl{sol} and allow the user to pick the most acceptable options.
The following fragment shows a \gls{meta-optimisation} for the Pareto optimality of a pair of \glspl{objective}.
\begin{mzn}
predicate pareto_optimal(var int: obj1, var int: obj2) =
  ...
);
\end{mzn}
In this implementation we keep track of the number of \glspl{sol} found so far using \mzninline{nsol}.
There is a maximum number we can handle, \mzninline{ms}.
At the start the number of solutions is zero.
If we find no \glspl{sol}, then we finish the entire search.
Otherwise, we record that we have one more \gls{sol}.
We store the \gls{sol} values in \mzninline{s1} and \mzninline{s2} \glspl{array}.
Before each \gls{restart} we add \constraints{} removing Pareto dominated \glspl{sol}, based on each previous \gls{sol}.
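The shape of these dominance-breaking \constraints{} can be sketched as follows, using the names from the prose above; the exact formulation inside the predicate may differ.

\begin{mzn}
% each new solution must not be dominated by a recorded solution
constraint forall(i in 1..nsol) (
  obj1 < sol(s1[i]) \/ obj2 < sol(s2[i])
);
\end{mzn}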
\section{Rewriting of Meta-Optimisation Methods}\label{sec:6-solver-extension}
The \glspl{neighbourhood} defined in the previous section can be executed with \minisearch{} by adding support for the \mzninline{status} and \mzninline{last_val} built-in functions, and by defining the main \gls{restart} loop.
The \minisearch{} evaluator will then call a \solver{} to produce a \gls{sol}, and evaluate the \gls{neighbourhood} predicate, incrementally producing new \flatzinc{} to be added to the next round of solving.
While this is a viable approach, our goal is to keep \gls{rewriting} and solving separate, by embedding the entire \gls{meta-optimisation} method into the \gls{slv-mod}.
This section introduces such a \gls{rewriting} approach.
It only requires simple modifications of the \gls{rewriting} process, and the produced \gls{slv-mod} can be executed by standard \gls{cp} \solvers{} with a small set of simple extensions.
\subsection{Rewriting Overview}
The \gls{neighbourhood} definitions from the previous section have an important property that makes them easy to rewrite: they are defined in terms of standard \minizinc{} expressions, except for a few new functions.
When the \gls{neighbourhood} predicates are evaluated in the \minisearch{} way, the \minisearch{} \interpreter{} implements those functions.
It has to compute the correct value whenever a predicate is evaluated.
Instead, the \gls{rewriting} scheme presented below uses a limited form of partial evaluation: \parameters{} known during \gls{rewriting} are fully evaluated, while values only known during solving, such as the result of a call to any of the new functions (\mzninline{sol}, \mzninline{status}, etc.), are replaced by \variables{}.
This essentially \textbf{turns the new functions into \gls{native} \constraints{}} for the target \solver{}.
The \gls{neighbourhood} predicate can then be added as a \constraint{} to the \gls{slv-mod}.
The evaluation is performed by hijacking the solver's own capabilities: It will automatically perform the evaluation of the new functions by propagating the \gls{native} \constraints{}.
To compile a \gls{meta-optimisation} method to a \gls{slv-mod}, the \gls{rewriting} process performs the following four simple steps.
\begin{enumerate}
\item It replaces the annotation \mzninline{::on_restart("X")} with a call to predicate \mzninline{X}.
\item Inside predicate \mzninline{X} and any other predicate called recursively from \mzninline{X}: it must treat any call to functions \mzninline{sol}, \mzninline{status}, and \mzninline{last_val} as returning a \variable{}; and rename calls to random functions, e.g., \mzninline{uniform} to \mzninline{uniform_slv}, in order to distinguish them from their standard library versions.
\item It converts any expression containing a call from step 2 to be a \variable{} to ensure the functions are compiled as \constraints{}, rather than directly evaluated during \gls{rewriting}.
\item It rewrites the resulting model using an extension of the \minizinc{} standard library that provides declarations for these functions, as defined in the following subsection.
\end{enumerate}
These transformations will not change the code of many \gls{neighbourhood} definitions, since the functions are often used in positions that accept both \parameters{} and \variables{}.
For example, the \mzninline{uniform_neighbourhood} predicate from \cref{lst:6-lns-minisearch-pred} uses \mzninline{uniform(0.0, 1.0)} in an \mzninline{if} expression, and \mzninline{sol(x[i])} in an equality \constraint{}.
Both expressions can be rewritten when the functions return a \variable{}.
\subsection{Rewriting the New Functions}
We can rewrite \instances{} that contain the new functions by extending the \minizinc{} standard library using the following definitions.
\paragraph{\mzninline{status}}
The rewriting of the \mzninline{status} function simply replaces the functional form by a predicate \mzninline{status}, declared as a \gls{native} \constraint{}.
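For illustration, a library definition along these lines could take the following shape, where the \mzninline{STATUS} enumerated type (with values such as \mzninline{START}) is assumed; the declaration in the actual library may differ.

\begin{mzn}
function var STATUS: status() =
  let {
    var STATUS: stat;
    constraint status(stat); % native constraint, fixed by the solver
  } in stat;
\end{mzn}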
\paragraph{\mzninline{sol} and \mzninline{last_val}}
The \mzninline{sol} function is overloaded for different types.
\Glspl{slv-mod}, however, do not support overloading.
Therefore, we produce type-specific \gls{native} \constraints{} for every type of \gls{native} \variable{} (\eg{} \mzninline{int_sol(x, xi)}, and \mzninline{bool_sol(x, xi)}).
The resolving of the \mzninline{sol} function into these specific \gls{native} \constraints{} is done using an overloaded definition, like the one shown in \cref{lst:6-int-sol} for integer \variables{}.
If the value of the \variable{} has become known during \gls{rewriting}, then we use its \gls{fixed} value instead.
Otherwise, the function call is rewritten into a type specific \mzninline{int_sol} \gls{native} \constraint{} that will be executed by the solver.
We use the current \domain{} of the \variable{}, \mzninline{dom(x)}, as the \domain{} the \variable{} returned by \mzninline{sol}.
\begin{listing}
\mznfile{assets/listing/6_sol_function.mzn}
\caption{\label{lst:6-int-sol} \minizinc{} definition of the \mzninline{sol}
  function for integer variables}
\end{listing}
The rewriting of \mzninline{last_val} is similar to that of \mzninline{sol}.
\paragraph{Random number functions}
Calls to the random number functions have been renamed by appending \texttt{\_slv}, so that they are not simply evaluated during \gls{rewriting}.
The definition of these new functions follows the same pattern as for \mzninline{sol}, \mzninline{status}, and \mzninline{last_val}.
The \minizinc{} definition of the \mzninline{uniform_nbh} function is shown in \Cref{lst:6-int-rnd}\footnote{Random number functions need to be marked as \mzninline{::impure} for the compiler not to apply \gls{cse} when they are called multiple times with the same arguments.}.
Note that the function accepts \variables{} as its arguments \mzninline{l} and \mzninline{u}.
As such, it can be used in combination with other functions, such as \mzninline{sol}.
\begin{listing}
\mznfile{assets/listing/6_uniform_slv.mzn}
\caption{\label{lst:6-int-rnd} \minizinc{} definition of the
  \mzninline{uniform_nbh} function for floats}
\end{listing}
\paragraph{Complete}
The \mzninline{complete} function is rewritten into the \gls{native} \constraint{} \mzninline{complete_reif}.
When its argument is forced to be \mzninline{true}, the \solver{} stops its search and marks it as complete.
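Following the same pattern as the other functions, its library definition could be sketched as follows; the exact declaration may differ.

\begin{mzn}
predicate complete() =
  let {
    var bool: b;
    constraint complete_reif(b); % native: stop search once b is forced true
  } in b;
\end{mzn}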
\subsection{Solver Support for Restart-Based Native Constraints}
We will now show the minimal extensions required from a \solver{} to interpret the new \gls{native} \constraints{} and, consequently, to execute \gls{meta-optimisation} definitions expressed in \minizinc{}.

First, the \solver{} needs to parse and support the \gls{restart} \glspl{annotation} of \cref{lst:6-restart-ann}.
Many \solvers{} already support all this functionality.
Second, the \solver{} needs to be able to parse the new \constraints{} \mzninline{status}, and all versions of \mzninline{sol}, \mzninline{last_val}, random number functions like \mzninline{float_uniform}, and \mzninline{complete}.
In addition, for the new \constraints{} the \solver{} is extended using the following methods.

\begin{description}
\item[\mzninline{status(s)}] Record the status of the previous \gls{restart}, and fix \mzninline{s} to the recorded status.
\item[\mzninline{sol(x, sx)} (variants)] Constrain \mzninline{sx} to be equal to the value of \mzninline{x} in the incumbent solution.
If there is no incumbent solution, then it has no effect.
\item[\mzninline{last_val(x, lx)} (variants)] Constrain \mzninline{lx} to take the last value assigned to \mzninline{x} during search.
If no value was ever assigned, then it has no effect.
Note that many \solvers{} (in particular \gls{sat} and \gls{lcg} \solvers{}) already track the last assigned value of each of their \variables{} for use in search.
To support \gls{rbmo}, a \solver{} must at least track the last value of each of the \variables{} involved in such a \constraint{}.
This is straightforward using a \mzninline{last_val} \gls{propagator}: it wakes up whenever its argument is fixed, and updates the last value (a non-\gls{backtrack}able value).
\item[Random number functions] Fix their \variable{} argument to a random number in the appropriate probability distribution.
\item[\mzninline{complete_reif(c)}] Stop and mark the search as ``complete'' when its \variable{} argument is forced to be true.
\end{description}
Importantly, these \constraints{} need to be propagated in a way that their effects can be undone for the next \gls{restart}.
Typically, this means the \solver{} must not propagate these \constraints{} until it can \gls{backtrack} its changes.
Modifying a solver to support this functionality is straightforward if it already has a mechanism for posting \constraints{} during restarts.
We have implemented these extensions for both \gls{gecode} (214 new lines of code) and \gls{chuffed} (173 new lines of code).
For example, consider the model from \cref{lst:6-basic-complete} again that uses the following \gls{meta-optimisation} method.
\begin{mzn}
basic_lns(uniform_neighbourhood(x, 0.2))
\end{mzn}
\noindent{}assuming that \mzninline{index_set(x) = 1..n}.
\Cref{lst:6-flat-pred} shows a part of the resulting \gls{slv-mod}.
\Lrefrange{line:6:status:start}{line:6:status:end} define a Boolean variable \mzninline{b1} that is true if-and-only-if the status is not \mzninline{START}.
The second block of code, \lrefrange{line:6:x1:start}{line:6:x1:end}, represents the \gls{decomp} of the following expression.
\begin{mzn}
(status() != START /\ uniform(0.0, 1.0) > 0.2) -> x[1] = sol(x[1])
\end{mzn}
It is the result of merging the implication from the \mzninline{basic_lns} predicate with the \mzninline{if} expression from \mzninline{uniformNeighbourhood}.
The code first introduces and constrains a variable for the random number, then adds two Boolean \variables{}: \mzninline{b2} is constrained to be true if-and-only-if the random number is greater than 0.2; while \mzninline{b3} is constrained to be the conjunction of the two.
\Lref{line:6:x1:sol} constrains \mzninline{x1} to be the value of \mzninline{x[1]} in the previous \gls{sol}.
Finally, the \gls{half-reified} \constraint{} in \lref{line:6:x1:end} implements the following expression.
\begin{mzn}
b3 -> x[1] = sol(x[1])
\end{mzn}
We have omitted the similar code generated for \mzninline{x[2]} to \mzninline{x[n]}.
Note that the \flatzinc{} \gls{slv-mod} shown here has been simplified for presentation.
\begin{listing}
\mznfile{assets/listing/6_basic_complete_transformed.mzn}
\caption{\label{lst:6-flat-pred} The \gls{slv-mod} resulting from rewriting
  \mzninline{basic_lns(uniform_neighbourhood(x, 0.2))}.}
\end{listing}
The first time the \solver{} is invoked, it sets \mzninline{s} to one (\mzninline{START}).
\Gls{propagation} will fix \mzninline{b1} to \mzninline{false} and \mzninline{b3} to \mzninline{false}.
Therefore, the implication in \lref{line:6:x1:end} is not activated, leaving \mzninline{x[1]} unconstrained.
The \gls{neighbourhood} \constraints{} are effectively switched off.
When the \solver{} \glspl{restart}, all the special \glspl{propagator} are re-executed.
Now \mzninline{s} is not one, and \mzninline{b1} is set to \mzninline{true}.
The \mzninline{float_random} \gls{propagator} assigns \mzninline{rnd1} a new random value and, depending on whether it is greater than \mzninline{0.2}, the Boolean \variable{} \mzninline{b2}, and consequently \mzninline{b3}, is assigned.
If it is \mzninline{true}, the \constraint{} in \lref{line:6:x1:end} becomes active and assigns \mzninline{x[1]} to its value in the previous \gls{sol}.
Furthermore, it is not strictly necessary to guard \mzninline{int_uniform} against being invoked before the first solution is found.
The \mzninline{sol} \constraints{} will simply not propagate anything in case no \gls{sol} has been recorded yet, but we use this simple example to illustrate how these Boolean conditions are rewritten and evaluated.
\section{An Incremental Constraint Modelling Interface}%
\label{sec:6-incremental-compilation}
A universal approach to incremental solving methods and \gls{meta-optimisation} is to allow incremental changes to an \instance{} of a \cmodel{}.
To solve these changing \instances{}, they have to be rewritten repeatedly to \glspl{slv-mod}.
In this section we extend our architecture with an incremental constraint modelling interface that allows the modeller to change an \instance{} while minimising the required \gls{rewriting}.
As such, the \microzinc{} \interpreter{} is extended to be able to \textbf{add} and \textbf{remove} \nanozinc{} \constraints{} to and from an existing \nanozinc{} program.
Adding new \constraints{} is straightforward.
\nanozinc{} is already processed one \constraint{} at a time, in any order.
The new \constraints{} can be added to the program, and the \gls{rewriting} can proceed as normal.
Removing a \constraint{}, however, is not so simple.
When we remove a \constraint{}, all effects the \constraint{} had on the \nanozinc{} program have to be undone.
This includes results of \gls{propagation}, \gls{cse} and other simplifications.
\begin{example}\label{ex:6-incremental}
Consider the following \minizinc{} fragment.
\begin{mzn}
constraint x < 10;
constraint y < x;
\end{mzn}

After evaluating the first \constraint{}, the \domain{} of \mzninline{x} is changed to be less than ten.
Evaluating the second \constraint{} causes the \domain{} of \mzninline{y} to be less than nine.
If we now, however, try to remove the first \constraint{}, then it is not just the direct inference on the \domain{} of \mzninline{x} that has to be undone, but also any further effects of those changes --- in this case, the changes to the \domain{} of \mzninline{y}.
\end{example}
The addition of \constraints{} that can later be removed, is similar to making \glspl{search-decision} in a \gls{cp} \solver{} (see \cref{subsec:back-cp}).
As such, we can support the removal of \constraints{} in reverse chronological order by \gls{backtrack}ing.
The \microzinc{} \interpreter{} maintains a \gls{trail}~\autocite{warren-1983-wam} of the changes to be undone when a \constraint{} is removed.
The \gls{trail} records all changes to the \nanozinc{} program:
\begin{itemize}
\item the addition or removal of new \variables{} or \constraints{},
\item changes made to the \domains{} of \variables{},
\item additions to the \gls{cse} table, and
\item substitutions made due to equality \gls{propagation}.
\end{itemize}
These changes can be caused by the \gls{rewriting} of a \constraint{}, \gls{propagation}, or \gls{cse}.
When a \constraint{} is removed, its corresponding changes are undone by reversing all actions recorded on the \gls{trail} up to the point where the \constraint{} was added.
In order to limit the amount of \gls{trail}ing required, the programmer must create explicit ``choice points'' to which the system state can be reset.
In particular, this means that if no choice point was created before the initial \instance{} was rewritten, then this \gls{rewriting} can be performed without any \gls{trail}ing.
\begin{example}\label{ex:6-trail}
Let us once again consider the following \minizinc{} fragment from \cref{ex:rew-absreif}.
\begin{mzn}
constraint abs(x) > y \/ c;
constraint c;
\end{mzn}
After rewriting it results in the following \nanozinc{} program.
\begin{nzn}
var true: c;
var -10..10: x;
var -10..10: y;
\end{nzn}
Assume that we added a choice point before posting the \constraint{} \mzninline{c}.
Then the \gls{trail} stores the inverse of all modifications that were made to the \nanozinc{} program as a result of \mzninline{c}.
The following code fragment illustrates the elements in this \gls{trail}.
The reversal actions are annotated on the different elements.
The \mzninline{attach} and \mzninline{detach} actions signal the \interpreter{} to attach/detach a \constraint{} to the argument \variable{}, or the global \cmodel{} in case the argument is \mzninline{true}.
The \mzninline{add} action tells the \interpreter{} to add a \variable{} to the \nanozinc{} program.
Finally, the \mzninline{set_domain} action makes the \interpreter{} restore the \domain{} of the declaration to which it was attached.
\begin{nzn}
% Posted c
constraint c ::detach(true);
% Propagated c = true
var bool: c ::set_domain;
constraint c ::attach(true);
% Simplified bool_or(b1, true) = true
constraint bool_or(b1, c) ::attach(true);
% b became unused...
constraint int_gt_reif(z, y, b) ::attach(b);
var bool: b ::add;
% then z became unused
constraint int_abs(x, z) ::attach(z);
var int: z ::add;
\end{nzn}
To reconstruct the \nanozinc{} program at the choice point, we simply apply the changes recorded in the \gls{trail}, in reverse order.
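Applied from bottom to top, the recorded actions re-add the removed \variables{} and \constraints{} and restore the attachments and \domains{}. The following fragment sketches the reconstructed program at the choice point, as recovered from the \gls{trail} fragment above (identifier names follow that fragment):

\begin{nzn}
var bool: c;
var -10..10: x;
var -10..10: y;
var int: z;
var bool: b;
constraint int_abs(x, z);
constraint int_gt_reif(z, y, b);
constraint bool_or(b, c);
\end{nzn}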
\end{example}
\subsection{Incremental Solving}
Ideally, the incremental changes made by the \interpreter{} would also be applied incrementally to the \solver{}.
This requires the \solver{} to support both the dynamic addition and removal of \variables{} and \constraints{}.
While some \solvers{} support this functionality, most have limitations.
We therefore define the following three interfaces through which the changes can be applied to a \solver{}:
\begin{itemize}
	\item Using a non-incremental interface, the \solver{} is reinitialised with the updated \nanozinc{} program every time.
	      In this case, we still get a performance benefit from the improved \gls{rewriting} time, but not from incremental solving.
	\item Using a \textit{warm-starting} interface, the \solver{} is reinitialised with the updated \gls{slv-mod} as above, but it is also given a previous \gls{sol} to initialise some internal data structures.
	      In particular for mathematical programming \solvers{}, this can result in dramatic performance gains compared to ``cold-starting'' the \solver{} every time.
	\item Using a fully incremental interface, the \solver{} is instructed to apply the changes made by the \interpreter{}.
	      In this case, the \gls{trail} data structure is used to compute the set of \nanozinc{} changes since the last choice point.
\end{itemize}

\noindent{}In previous chapters we explored \gls{rewriting} as a definitive linear process, but to solve real-world problems \gls{meta-optimisation} algorithms are often used.
These methods usually require solving similar problems repeatedly, with only slight modifications, thousands of times.
Examples of these methods are:
\begin{itemize}
Instead, it can be solved using a \gls{meta-optimisation} approach: find a solution to a (single-objective) problem, then add more \constraints{} to the original problem and repeat.
\item \gls{lns} \autocite{shaw-1998-local-search}.
This is a very successful \gls{meta-optimisation} algorithm to quickly improve solution quality.
	      After finding a (sub-optimal) solution to a problem, \constraints{} are added to restrict the search in the \gls{neighbourhood} of that \gls{sol}.
	      When a new \gls{sol} is found, the \constraints{} are removed, and \constraints{} for a new \gls{neighbourhood} are added.
\item Online Optimisation \autocite{jaillet-2021-online}.
These techniques can be employed when the problem rapidly changes.
An \instance{} is continuously updated with new data, such as newly available jobs to be scheduled or customer requests to be processed.
\item Diverse Solution Search \autocite{hebrard-2005-diverse}.
	      Here we aim to provide a set of solutions that are sufficiently different from each other in order to give human decision makers an overview of the possible \glspl{sol}.
	      Diversity can be achieved by repeatedly solving a problem instance with different \glspl{objective}.
\item Interactive Optimisation \autocite{belin-2014-interactive}.
	      In some scenarios it can be useful to allow a user to directly provide feedback on \glspl{sol} found by the \solver{}.
	      The feedback, in the form of \constraints{}, is added back into the problem, and a new \gls{sol} is generated.
	      Users may also retract some earlier feedback and explore different aspects of the problem to arrive at the best \gls{sol} that suits their needs.
\end{itemize}
All of these examples have in common that an \instance{} is solved, new \constraints{} are added, the resulting \instance{} is solved again, and \constraints{} may later be removed again.
The usage of these methods is not new to \cmls{}, and they have proven to be very useful \autocite{schrijvers-2013-combinators, rendl-2015-minisearch, schiendorfer-2018-minibrass, ek-2020-online, ingmar-2020-diverse}.
In its most basic form, a simple scripting language is sufficient to implement these methods, by repeatedly \gls{rewriting} and solving the adjusted \instances{}.
While improvements to the \gls{rewriting} process, such as the ones discussed in previous chapters, can increase the performance of these approaches, the overhead of rewriting an almost identical model may still prove prohibitive.
This warrants direct support from the \cml{} architecture.
In this chapter we introduce two methods to provide this support:
\begin{itemize}
	\item We introduce the notion of \gls{rbmo} algorithms.
	      Using a minimal extension to a \cml{} and its target \solver{}, we can model \gls{meta-optimisation} methods and rewrite them into efficient \glspl{slv-mod}.
	      The \solver{} will then incrementally execute the methods through the use of \solver{} \glspl{restart}.
	      This method avoids repeatedly \gls{rewriting} the \instance{} altogether.
	\item Alternatively, we extend our architecture with an incremental interface for adding and removing \constraints{}.
	      Although this method does not avoid repeatedly \gls{rewriting} the \instance{}, the \gls{rewriting} effort is reduced to the changes made to the \instance{}.
	      This approach can be used when an incremental method cannot be described using \gls{rbmo}, or when the required extensions are not available for the target \solver{}.
\end{itemize}
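As a preview of the first method, the following \minizinc{} fragment sketches how an \gls{lns} strategy can be declared directly in the model and handed to the \solver{} through \gls{restart} \glspl{annotation}. It assumes an array of decision \variables{} \mzninline{x}; the functions and \glspl{annotation} used, such as \mzninline{basic_lns}, \mzninline{uniform_neighbourhood}, and \mzninline{on_restart}, are introduced later in this chapter.

\begin{mzn}
var int: cost; % objective function

% A user-defined LNS strategy: fix each variable in x to its value in the
% previous solution with probability 0.8
predicate my_lns() = basic_lns(uniform_neighbourhood(x, 0.2));

% Apply my_lns on every restart, restarting every 500 nodes
solve ::on_restart("my_lns") ::restart_constant(500)
      minimize cost;
\end{mzn}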
The rest of the chapter is organised as follows.
\Cref{sec:6-modelling} discusses the declarative modelling of \gls{rbmo} methods in a \cml{}.
\Cref{sec:6-solver-extension} introduces the method to rewrite these \gls{meta-optimisation} definitions into efficient \glspl{slv-mod} and the minimal extension required from the target \gls{solver}.
\Cref{sec:6-incremental-compilation} introduces the alternative method that extends our architecture with an incremental \constraint{} modelling interface.
\Cref{sec:inc-experiments} reports on the experimental results of both approaches.