Grammar pass over Incremental Constraint Modelling
commit 9dab14ab46 (parent 37bb92b3b9)
@@ -113,7 +113,7 @@ Now, we will categorize the latter into the following three contexts.
 Determining the monotonicity of a \constraint{} \wrt{} an expression is a hard problem.
 An expression might be monotone or antitone only through complex interactions, possibly through unknown \gls{native} \constraints{}.
 Therefore, for our analysis, we slightly relax these definitions.
-We say an expression is in \mixc{} context when it cannot be determined whether its enclosing \constraint{} is monotone or antitone \wrt{} the expression.
+We say an expression is in \mixc{} context when it cannot be determined whether the enclosing \constraint{} is monotone or antitone \wrt{} the expression.

 Expressions in \posc{} context are the ones we discussed before.
 A Boolean expression in \posc{} context cannot be forced to be false.
@@ -10,7 +10,7 @@
 \section{Modelling of Restart Based Meta-Optimisation}\label{sec:inc-modelling}

 This section introduces a \minizinc{} extension that enables modellers to define \gls{meta-optimization} algorithms in \minizinc{}.
-This extension is based on the construct introduced in \minisearch{} \autocite{rendl-2015-minisearch}, as summarised below.
+This extension is based on the construct introduced in \minisearch{} \autocite{rendl-2015-minisearch}, as summarized below.

 \subsection{Meta-Optimisation in MiniSearch}\label{sec:inc-minisearch}

@@ -49,7 +49,7 @@ The second part of a \minisearch{} \gls{meta-optimization} is the \gls{meta-opti
 It performs a fixed number of iterations, each invoking the \gls{neighbourhood} predicate \mzninline{uniform_neighbourhood} in a fresh scope.
 This means that the \constraints{} only affect the current loop iteration.
 It then searches for a solution (\mzninline{minimize_bab}) with a given timeout.
-If the search does return a new solution, then it commits to that solution and it becomes available to the \mzninline{sol} function in subsequent iterations.
+If the search does return a new solution, then it commits to that solution, and it becomes available to the \mzninline{sol} function in subsequent iterations.
 The \mzninline{lns} function also posts the \constraint{} \mzninline{obj < sol(obj)}, ensuring the objective value in the next iteration is strictly better than that of the current solution.

-Although \minisearch{} enables the modeller to express \glspl{neighbourhood} in a declarative way, the definition of the \gls{meta-optimization} algorithm is rather unintuitive and difficult to debug, leading to unwieldy code for defining even simple algorithm.
+Although \minisearch{} enables the modeller to express \glspl{neighbourhood} in a declarative way, the definition of the \gls{meta-optimization} algorithm is rather unintuitive and difficult to debug, leading to unwieldy code for defining even simple algorithms.
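To make the description of the \minisearch{} \mzninline{lns} function concrete, its definition could look roughly like the following sketch. This is illustrative only: the identifiers \mzninline{scope}, \mzninline{post}, \mzninline{time_limit}, and \mzninline{commit}, and their signatures, are assumptions based on the description above, not the actual \minisearch{} listing.

\begin{mzn}
function ann: lns(int: iterations, array[int] of var int: x,
                  int: timeout, float: rate) =
  repeat (i in 1..iterations) (
    scope(
      % constraints posted here only affect the current iteration
      post(uniform_neighbourhood(x, rate)) /\
      time_limit(timeout, minimize_bab(obj)) /\
      commit()
    ) /\
    % only accept strictly improving solutions from now on
    post(obj < sol(obj))
  );
\end{mzn}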
@@ -61,8 +61,8 @@ To address these two issues, we propose to keep modelling \glspl{neighbourhood}
 We define a few additional \minizinc{} \glspl{annotation} and functions that

 \begin{itemize}
-\item allow us to express important aspects of the meta-optimization algorithm in a more convenient way,
-\item and enable a simple \gls{rewriting} scheme that requires no additional communication with and only small, simple extensions of the target \solver{}.
+\item allow us to express important aspects of the meta-optimization algorithm in a more convenient way, and
+\item enable a simple \gls{rewriting} scheme that requires no additional communication with, and only small, simple extensions of, the target \solver{}.
 \end{itemize}

 \subsection{Restart Annotation}
@@ -117,7 +117,7 @@ All values are stored in normal \variables{}.
 We then access them using two simple functions that reveal the solver state of the previous \gls{restart}.
 This approach is sufficient for expressing many \gls{meta-optimization} algorithms, and its implementation is much simpler.

-\paragraph{State access and initialisation}
+\paragraph{State access and initialization}

 The state access functions are defined in \cref{lst:inc-state-access}.
 Function \mzninline{status} returns the status of the previous restart, namely:
@@ -139,14 +139,14 @@ Like \mzninline{sol}, it has versions for all basic \variable{} types.
 \caption{\label{lst:inc-state-access} Functions for accessing previous \solver{} states}
 \end{listing}

-In order to be able to initialise the \variables{} used for state access, we interpret \mzninline{on_restart} also before initial search using the same semantics.
+In order to initialize the \variables{} used for state access, we also interpret \mzninline{on_restart} before the initial search, using the same semantics.
 As such, the predicate is also called before the first ``real'' \gls{restart}.
 Any \constraint{} posted by the predicate will be retracted for the next \gls{restart}.

 \paragraph{Parametric neighbourhood selection predicates}

 We define standard \gls{neighbourhood} selection strategies as predicates that are parametric over the \glspl{neighbourhood} they should apply.
-For example, we can define a strategy \mzninline{basic_lns} that applies an \gls{lns} \gls{neighbourhood}.
+For example, we can define a strategy \mzninline{basic_lns} that applies a \gls{lns} \gls{neighbourhood}.
 Since \mzninline{on_restart} now also includes the initial search, we apply the \gls{neighbourhood} only if the current status is not \mzninline{START}, as shown in the following predicate.

 \begin{mzn}
@@ -172,7 +172,7 @@ Therefore, users have to define their overall strategy in a new predicate.
 We can also define round-robin and adaptive strategies using these primitives.
 \Cref{lst:inc-round-robin} defines a round-robin \gls{lns} meta-heuristic, which cycles through a list of \mzninline{N} neighbourhoods \mzninline{nbhs}.
 To do this, it uses the \variable{} \mzninline{select}.
-In the initialisation phase (\mzninline{status()=START}), \mzninline{select} is set to \mzninline{-1}, which means none of the \glspl{neighbourhood} is activated.
+In the initialization phase (\mzninline{status()=START}), \mzninline{select} is set to \mzninline{-1}, which means none of the \glspl{neighbourhood} is activated.
 In any following \gls{restart}, \mzninline{select} is incremented modulo \mzninline{N}, by accessing the last value assigned in a previous \gls{restart}.
 This will activate a different \gls{neighbourhood} for each subsequent \gls{restart} (\lref{line:6:roundrobin:post}).

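As an illustration of the mechanism just described, a round-robin selector might be sketched as follows. This is a rough reconstruction of \cref{lst:inc-round-robin}, not the listing itself; the predicate name and signature are assumptions.

\begin{mzn}
predicate round_robin(array[int] of var bool: nbhs) =
  let {
    int: N = length(nbhs);
    var -1..N-1: select;
  } in (
    if status() = START then
      select = -1  % initialization: no neighbourhood is activated
    else
      % access the last value assigned in a previous restart
      select = (last_val(select) + 1) mod N
    endif
    /\ forall (i in 1..N) ((select = i - 1) -> nbhs[i])
  );
\end{mzn}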
@@ -192,9 +192,9 @@ For adaptive \gls{lns}, a simple strategy is to change the size of the \gls{neig
 \caption{\label{lst:inc-adaptive} A simple adaptive neighbourhood}
 \end{listing}

-\subsection{Optimisation strategies}
+\subsection{Optimization strategies}

-The \gls{meta-optimization} algorithms we have seen so far rely on the default behaviour of \minizinc{} \solvers{} to use \gls{bnb} for optimisation: when a new \gls{sol} is found, the \solver{} adds a \constraint{} to the remainder of the search to only accept better \glspl{sol}, as defined by the \gls{objective} in the \mzninline{minimize} or \mzninline{maximize} clause of the \mzninline{solve} item.
+The \gls{meta-optimization} algorithms we have seen so far rely on the default behaviour of \minizinc{} \solvers{} to use \gls{bnb} for optimization: when a new \gls{sol} is found, the \solver{} adds a \constraint{} to the remainder of the search to only accept better \glspl{sol}, as defined by the \gls{objective} in the \mzninline{minimize} or \mzninline{maximize} clause of the \mzninline{solve} item.
 When combined with \glspl{restart} and \gls{lns}, this is equivalent to a simple hill-climbing meta-heuristic.

 We can use the constructs introduced above to implement alternative meta-heuristics such as simulated annealing.
@@ -203,7 +203,7 @@ It will still use the declared \gls{objective} to decide whether a new solution
 This maintains the convention of \minizinc{} \solvers{} that the last \gls{sol} printed at any point in time is the currently best known one.

 With \mzninline{restart_without_objective}, the \gls{restart} predicate is now responsible for constraining the \gls{objective}.
-Note that a simple hill-climbing (for minimisation) can still be defined easily in this context as follows.
+Note that a simple hill-climbing (for minimization) can still be defined easily in this context as follows.

 \begin{mzn}
 predicate hill_climbing() = status() != START -> _objective < sol(_objective);
@@ -222,7 +222,7 @@ This \gls{meta-optimization} can help improve the qualities of \gls{sol} quickly
 \caption{\label{lst:inc-sim-ann}A predicate implementing a simulated annealing search.}
 \end{listing}

-So far, the algorithms used have been for versions of incomplete search or we have trusted the \solver{} to know when to stop searching.
+So far, the algorithms used have been for versions of incomplete search, or we have trusted the \solver{} to know when to stop searching.
 However, for the following algorithms the \solver{} will (or should) not be able to determine whether the search is complete.
 Instead, we introduce the following function that can be used to signal to the solver that the search is complete.

@@ -233,7 +233,7 @@ Instead, we introduce the following function that can be used to signal to the s
-\noindent{}When the result of this function is said to be \mzninline{true}, then search is complete.
+\noindent{}When the result of this function is \mzninline{true}, then the search is complete.
 If any \gls{sol} was found, it is declared an \gls{opt-sol}.

-Using the same methods it is also possible to describe optimisation strategies with multiple \glspl{objective}.
+Using the same methods, it is also possible to describe optimization strategies with multiple \glspl{objective}.
 An example of such a strategy is lexicographic search.
 Lexicographic search can be employed when there is a strict order between the importance of different \variables{}.
-It required that, once a \gls{sol} is found, each subsequent \gls{sol} must either improve the first \gls{objective}, or have the same value for the first \gls{objective} and improve the second \gls{objective}, or have the same value for the first two \glspl{objective} and improve the third \gls{objective}, and so on.
+It requires that, once a \gls{sol} is found, each subsequent \gls{sol} must either improve the first \gls{objective}, or have the same value for the first \gls{objective} and improve the second \gls{objective}, or have the same value for the first two \glspl{objective} and improve the third \gls{objective}, and so on.
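The lexicographic improvement condition described above can be sketched as a predicate. The name \mzninline{lex_minimize} and the exact formulation here are illustrative, not the chapter's actual listing.

\begin{mzn}
predicate lex_minimize(array[int] of var int: o) =
  status() != START ->
    exists (i in index_set(o)) (
      % improve objective i while keeping all earlier objectives equal
      o[i] < sol(o[i]) /\
      forall (j in index_set(o) where j < i) (o[j] = sol(o[j]))
    );
\end{mzn}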
@@ -258,7 +258,7 @@ The predicate in \cref{lst:inc-pareto} shows a \gls{meta-optimization} for the P

 \begin{listing}
 \mznfile{assets/listing/inc_pareto.mzn}
-\caption{\label{lst:inc-pareto}A predicate implementing pareto optimality search for two \glspl{objective}.}
+\caption{\label{lst:inc-pareto}A predicate implementing Pareto optimality search for two \glspl{objective}.}
 \end{listing}

 In this implementation we keep track of the number of \glspl{sol} found so far using \mzninline{nsol}.
@@ -300,7 +300,7 @@ To compile a \gls{meta-optimization} algorithms to a \gls{slv-mod}, the \gls{rew
 \end{enumerate}

 These transformations will not change the code of many \gls{neighbourhood} definitions, since the functions are often used in positions that accept both \parameters{} and \variables{}.
-For example, the \mzninline{uniform_neighbourhood} predicate from \cref{lst:inc-lns-minisearch-pred} uses \mzninline{uniform(0.0, 1.0)} in an \mzninline{if} expression, and \mzninline{sol(x[i])} in an equality \constraint{}.
+For example, the \mzninline{uniform_neighbourhood} predicate from \cref{lst:inc-lns-minisearch-pred} uses \mzninline{uniform(0.0, 1.0)} in a \gls{conditional} expression, and \mzninline{sol(x[i])} in an equality \constraint{}.
 Both expressions can be rewritten when the functions return a \variable{}.

 \subsection{Rewriting the new functions}
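Based on the description above, \mzninline{uniform_neighbourhood} presumably has a shape similar to the following sketch, with \mzninline{uniform(0.0, 1.0)} in the conditional and \mzninline{sol(x[i])} in the equality \constraint{}; the parameter names are assumptions.

\begin{mzn}
predicate uniform_neighbourhood(array[int] of var int: x, float: rate) =
  forall (i in index_set(x)) (
    % fix a random subset of the variables to their previous solution value
    if uniform(0.0, 1.0) > rate then x[i] = sol(x[i]) else true endif
  );
\end{mzn}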
@@ -320,7 +320,7 @@ It simply replaces the functional form by a predicate \mzninline{status} (declar
 \paragraph{\mzninline{sol} and \mzninline{last_val}}

 The \mzninline{sol} function is overloaded for different types.
-Our \glspl{slv-mod} does not support overloading.
+Our \glspl{slv-mod} do not support overloading.
 Therefore, we produce type-specific \gls{native} \constraints{} for every type of \gls{native} \variable{} (\eg{} \mzninline{int_sol(x, xi)}, and \mzninline{bool_sol(x, xi)}).
 The resolving of the \mzninline{sol} function into these specific \gls{native} \constraints{} is done using an overloaded definition, like the one shown in \cref{lst:inc-int-sol} for integer \variables{}.
 If the value of the \variable{} has become known during \gls{rewriting}, then we use its \gls{fixed} value instead.
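An overloaded definition of the kind described, redirecting \mzninline{sol} to the \mzninline{int_sol} \gls{native} \constraint{} for integer \variables{}, might look like the following sketch. It is a hedged reconstruction of what \cref{lst:inc-int-sol} could contain, not the actual listing.

\begin{mzn}
function var int: sol(var int: x) =
  if is_fixed(x) then
    fix(x)  % value became known during rewriting: use the fixed value
  else
    let {
      var lb(x)..ub(x): xi;
      constraint int_sol(x, xi);  % type-specific native constraint
    } in xi
  endif;
\end{mzn}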
@@ -441,9 +441,9 @@ The \mzninline{sol} \constraints{} will simply not propagate anything in case no
 A universal approach to the incremental usage of \cmls{}, and \gls{meta-optimization}, is to allow incremental changes to an \instance{} of a \cmodel{}.
 To solve these changing \instances{}, they have to be rewritten repeatedly to \glspl{slv-mod}.
 In this section we extend our architecture with an incremental constraint modelling interface that allows the modeller to change an \instance{}.
-For changes made using this interface, the architecture can employ \gls{incremental-rewriting} to minimise the required \gls{rewriting}.
+For changes made using this interface, the architecture can employ \gls{incremental-rewriting} to minimize the required \gls{rewriting}.

-As such, the \microzinc{} \interpreter{} is extended to be able \textbf{add} and \textbf{remove} \nanozinc{} \constraints{} from/to an existing \nanozinc{} program.
+As such, the \microzinc{} \interpreter{} is extended to be able to \textbf{add} and \textbf{remove} \nanozinc{} \constraints{} to/from an existing \nanozinc{} program.
 Adding new \constraints{} is straightforward.
 \nanozinc{} is already processed one \constraint{} at a time, in any order.
 The new \constraints{} can be added to the program, and the \gls{rewriting} can proceed as normal.
@@ -537,10 +537,10 @@ We therefore define the following three interfaces, using which we can apply the

 \begin{itemize}

-\item Using a non-incremental interface, the \solver{} is reinitialised with the updated \nanozinc\ program every time.
+\item Using a non-incremental interface, the \solver{} is reinitialized with the updated \nanozinc\ program every time.
 In this case, we still get a performance benefit from the improved \gls{rewriting} time, but not from incremental solving.

-\item Using a \textit{warm-starting} interface, the \solver{} is reinitialised with the updated \gls{slv-mod} as above, but it is also given a previous \gls{sol} to initialise some internal data structures.
+\item Using a \textit{warm-starting} interface, the \solver{} is reinitialized with the updated \gls{slv-mod} as above, but it is also given a previous \gls{sol} to initialize some internal data structures.
 In particular for mathematical programming \solvers{}, this can result in dramatic performance gains compared to ``cold-starting'' the \solver{} every time.

 \item Using a fully incremental interface, the \solver{} is instructed to apply the changes made by the \interpreter{}.
@@ -553,11 +553,11 @@ We therefore define the following three interfaces, using which we can apply the

 In this section we present two experiments to test the efficiency and potency of the incremental methods introduced in this chapter.

-In our first experiment, we consider the effectiveness \gls{meta-optimization} within during solving.
+Our first experiment considers the effectiveness of \gls{meta-optimization} during solving.
 In particular, we investigate a round-robin \gls{lns} implemented using \gls{rbmo}.
-On three different \minizinc{} models we compare this approach with solving the \instances{} directly using 2 different \solvers{}.
+On three different \minizinc{} models we compare this approach with solving the \instances{} directly using two different \solvers{}.
 For one of the \solvers{}, we also compare with an oracle approach that can directly apply the exact same \gls{neighbourhood} as our \gls{rbmo}, without the need for computation.
-As such, we show that the use of \gls{rbmo} introduces a insignificant computational overhead.
+As such, we show that the use of \gls{rbmo} introduces an insignificant computational overhead.

 Our second experiment compares the performance of using \minizinc{} incrementally.
-We compare our two methods, \gls{rbmo} and the incremental constraint modelling interface, against the baseline of continuously \gls{rewriting} and reinitialising the \solver{}.
+We compare our two methods, \gls{rbmo} and the incremental constraint modelling interface, against the baseline of continuously \gls{rewriting} and reinitializing the \solver{}.
@@ -619,7 +619,7 @@ The main decisions are to assign courses to periods, which is done via the \vari
 \caption{\label{fig:inc-obj-gbac}\gls{gbac}: integral of cumulative objective value of solving 5 \instances{}.}
 \end{figure}

-The result for the \gls{gbac} in \cref{fig:inc-obj-gbac} show that the overhead introduced by \gls{gecode} using \gls{rbmo} with regards to the replaying the \glspl{neighbourhood} is quite low.
+The results for \gls{gbac} in \cref{fig:inc-obj-gbac} show that the overhead introduced by \gls{gecode} using \gls{rbmo}, compared to replaying the \glspl{neighbourhood}, is quite low.
 The lines in the graph do not show any significant differences or delays.
-Both their result are much better than the baseline \gls{gecode} \solver{}.
+Both results are much better than the baseline \gls{gecode} \solver{}.
 Since learning is not very effective for \gls{gbac}, the performance of \gls{chuffed} is similar to \gls{gecode}.
@@ -627,7 +627,7 @@ The use of \gls{lns} again significantly improves over standard \gls{chuffed}.

 \subsubsection{Steel Mill Slab}

-The steel mill slab design problem consists of cutting slabs into smaller ones, so that all orders are fulfilled while minimising the wastage.
+The steel mill slab design problem consists of cutting slabs into smaller ones, so that all orders are fulfilled while minimizing the wastage.
 The steel mill only produces slabs of certain sizes, and orders have both a size and a colour.
 We have to assign orders to slabs, with at most two different colours on each slab.
 The model uses the \variables{} \mzninline{assign} for deciding which order is assigned to which slab.
@@ -655,7 +655,7 @@ These orders can then be freely reassigned to any other slab.

-\Cref{subfig:inc-obj-gecode-steelmillslab} again only show minimal overhead for the \gls{rbmo} compared to replaying the \glspl{neighbourhood}.
+\Cref{subfig:inc-obj-gecode-steelmillslab} again shows only minimal overhead for \gls{rbmo} compared to replaying the \glspl{neighbourhood}.
 For this problem a solution with zero wastage is always optimal.
-As such the \gls{lns} approaches are sometimes able to prove a \gls{sol} is optimal and might finish before the time out.
+As such, the \gls{lns} approaches are sometimes able to prove a \gls{sol} is optimal and might finish before the time-out.
-This is the case for \gls{chuffed} instances, where almost all \instances{} are solved using the \gls{rbmo} method.
+This is the case for \gls{chuffed}, where almost all \instances{} are solved using the \gls{rbmo} method.
 As expected, the \gls{lns} approaches find better solutions quicker for \gls{gecode}.
-However, We do see that, given enough time, baseline \gls{gecode} will eventually find better \glspl{sol}.
+However, we do see that, given enough time, baseline \gls{gecode} will eventually find better \glspl{sol}.
@@ -664,7 +664,7 @@ However, We do see that, given enough time, baseline \gls{gecode} will eventuall
 \subsubsection{RCPSP/wet}

-The \gls{rcpsp} with Weighted Earliness and Tardiness cost, is a classic scheduling problem in which tasks need to be scheduled subject to precedence \constraints{} and cumulative resource restrictions.
+The \gls{rcpsp} with Weighted Earliness and Tardiness cost is a classic scheduling problem in which tasks need to be scheduled subject to precedence \constraints{} and cumulative resource restrictions.
-The objective is to find an optimal schedule that minimises the weighted cost of the earliness and tardiness for tasks that are not completed by their proposed deadline.
+The objective is to find an optimal schedule that minimizes the weighted cost of the earliness and tardiness for tasks that are not completed by their proposed deadline.
 The \variables{} in \gls{array} \mzninline{s} represent the start times of each task in the model.
 \Cref{lst:inc-free-timeslot} shows our structured \gls{neighbourhood} for this model.
 It randomly selects a time interval of one-tenth the length of the planning horizon and frees all tasks starting in that time interval, which allows a reshuffling of these tasks.
@@ -726,7 +726,7 @@ The problem therefore has a lexicographical objective: a solution is better if i

 The results are shown in \cref{subfig:inc-cmp-lex}.
-They show that both incremental methods have a clear advantage over are naive baseline approach.
+They show that both incremental methods have a clear advantage over our naive baseline approach.
-Although some additional time is spend \gls{rewriting} the \gls{rbmo} models compared to the incremental, it is minimal.
+Although some additional time is spent \gls{rewriting} the \gls{rbmo} models compared to the incremental interface, it is minimal.
 The \gls{rewriting} of the \gls{rbmo} \instances{} would likely take less time when using the new prototype architecture.
 Between all the methods, solving time is very similar.
 \gls{rbmo} seems to have a slight advantage.
@ -735,7 +735,7 @@ No benefit can be noticed from the use of the incremental solver \gls{api}.
\subsubsection{GBAC}
We now revisit the model and method from \cref{ssubsec:inc-exp-gbac1} and compare the efficiency of using round-robin \gls{lns}.
Instead of setting a time limit, we limit the number of \glspl{restart} that the \solver{} makes.
As such, we limit the number of \glspl{neighbourhood} that are computed and applied to the \instance{}.
It should be noted that the \gls{rbmo} method is not guaranteed to apply the exact same \glspl{neighbourhood}, due to differences in the random number generators.
@ -751,7 +751,7 @@ The advantage in solve time using \gls{rbmo} is more pronounced here, but it is
The results of our experiments show that there is a clear benefit from the use of incremental methods in \cmls{}.

The \gls{meta-optimization} algorithms that can be applied using \gls{rbmo} show a significant improvement over the \solver{}'s normal search.
It is shown that this method is very efficient and does not underperform even when compared to an unrealistic version of the methods that does not require any computation.

The incremental interface offers a great alternative when \solvers{} are not extended for \gls{rbmo}.
\Gls{incremental-rewriting} saves a significant amount of time compared to repeatedly \gls{rewriting} the full \instance{}.
@ -4,19 +4,19 @@ Examples of these methods are:
\begin{itemize}
	\item Multi-objective search \autocite{jones-2002-multi-objective}.
	Optimizing multiple objectives is often not supported directly in solvers.
	Instead, it can be solved using a \gls{meta-optimization} approach: find a solution to a (single-objective) problem, then add more \constraints{} to the original problem and repeat.
	\item \gls{lns} \autocite{shaw-1998-local-search}.
	This is a very successful \gls{meta-optimization} algorithm to quickly improve solution quality.
	After finding a (sub-optimal) solution to a problem, \constraints{} are added to restrict the search to the \gls{neighbourhood} of that \gls{sol}.
	When a new \gls{sol} is found, the \constraints{} are removed, and \constraints{} for a new \gls{neighbourhood} are added.
	\item Online Optimization \autocite{jaillet-2021-online}.
	These techniques can be employed when the problem rapidly changes.
	An \instance{} is continuously updated with new data, such as newly available jobs to be scheduled or customer requests to be processed.
	\item Diverse Solution Search \autocite{hebrard-2005-diverse}.
	Here we aim to provide a set of solutions that are sufficiently different from each other in order to give human decision makers an overview of the possible \glspl{sol}.
	Diversity can be achieved by repeatedly solving a problem instance with different \glspl{objective}.
	\item Interactive Optimization \autocite{belin-2014-interactive}.
	In some scenarios it can be useful to allow a user to directly provide feedback on \glspl{sol} found by the \solver{}.
	This feedback, in the form of \constraints{}, is added back into the problem, and a new \gls{sol} is generated.
	Users may also retract some of their earlier feedback and explore different aspects of the problem to arrive at the \gls{sol} that best suits their needs.
@ -24,7 +24,7 @@ Examples of these methods are:
All of these examples have in common that an \instance{} is solved, new \constraints{} are added, the resulting \instance{} is solved again, and the \constraints{} may subsequently be removed.

The usage of these algorithms is not new to \cmls{}, and they have proven to be very useful \autocite{schrijvers-2013-combinators, rendl-2015-minisearch, schiendorfer-2018-minibrass, ek-2020-online, ingmar-2020-diverse}.
In its most basic form, a simple scripting language is sufficient to implement these algorithms, by repeatedly \gls{rewriting} and solving the adjusted \instances{}.
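The shared skeleton of these methods, solve, add \constraints{}, and solve again, can be sketched in a few lines. The following is a purely illustrative toy, assuming a hypothetical Instance class with post and solve operations; it is not part of any existing \cml{} interface.

```python
class Instance:
    """Toy stand-in for a constraint model: minimise a value x over a
    finite domain. (Hypothetical; real instances live in a solver.)"""

    def __init__(self, domain):
        self.domain = set(domain)
        self.bounds = []  # posted constraints: x must be below each bound

    def post(self, bound):
        """Add a constraint to the running instance."""
        self.bounds.append(bound)

    def solve(self):
        """Return some value satisfying all posted constraints, or None."""
        feasible = [x for x in self.domain if all(x < b for b in self.bounds)]
        return max(feasible) if feasible else None

def minimize(instance):
    """The shared loop: solve, add a constraint derived from the solution
    (here an objective bound), and solve the adjusted instance again."""
    best = None
    while True:
        sol = instance.solve()
        if sol is None:
            return best
        best = sol
        instance.post(sol)  # next solution must improve on the incumbent

# A branch-and-bound-style run over the domain {1, ..., 5} finds 1.
assert minimize(Instance(range(1, 6))) == 1
```

Methods such as \gls{lns} follow the same skeleton, but would also retract the posted \constraints{} again before applying the next \gls{neighbourhood}.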
While improvements of the \gls{rewriting} process, such as the ones discussed in previous chapters, can increase the performance of these approaches, the overhead of rewriting an almost identical model may still prove prohibitive.
This warrants direct support from the \cml{} architecture.
@ -40,7 +40,7 @@ In this chapter we introduce two methods to provide this support:
	This approach can be used when an incremental method cannot be described using \gls{rbmo} or when the required extensions are not available for the target \solver{}.
\end{itemize}
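As a rough sketch of what such an incremental \constraint{} modelling interface could look like, the toy class below offers stack-like push/pop retraction, similar in spirit to incremental SAT and SMT interfaces. All names here are illustrative assumptions, not the interface introduced in \cref{sec:inc-incremental-compilation}.

```python
class IncrementalModel:
    """Toy incremental constraint interface: constraints can be added and
    later retracted in reverse order, so the system can reuse the work
    done for the unchanged prefix instead of rewriting from scratch."""

    def __init__(self):
        self._constraints = []  # stack of posted constraints
        self._marks = []        # saved stack sizes for pop()

    def add_constraint(self, c):
        self._constraints.append(c)

    def push(self):
        # Remember the current state so a later pop() can restore it.
        self._marks.append(len(self._constraints))

    def pop(self):
        # Retract every constraint added since the matching push().
        size = self._marks.pop()
        del self._constraints[size:]

    def constraints(self):
        return list(self._constraints)

# Usage: explore a neighbourhood, then retract it again.
m = IncrementalModel()
m.add_constraint("base model")
m.push()
m.add_constraint("neighbourhood for restart 1")
m.pop()
assert m.constraints() == ["base model"]
```

The key design point is that retraction is restricted to the most recently added \constraints{}, which keeps the bookkeeping cheap compared to removing arbitrary \constraints{}.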
The rest of the chapter is organized as follows.
\Cref{sec:inc-modelling} discusses the declarative modelling of \gls{rbmo} methods in a \cml{}.
\Cref{sec:inc-solver-extension} introduces the method to rewrite these \gls{meta-optimization} definitions into efficient \glspl{slv-mod} and the minimal extension required from the target \gls{solver}.
\Cref{sec:inc-incremental-compilation} introduces the alternative method that extends our architecture with an incremental \constraint{} modelling interface.