Next introduction pass

This commit is contained in:
Jip J. Dekker 2021-07-07 16:11:33 +10:00
parent 595ca2246a
commit a139905a20
No known key found for this signature in database
GPG Key ID: 517DF4A00618C9C3
4 changed files with 219 additions and 106 deletions


@ -194,5 +194,16 @@
tabsize=2,
]{text}}{\end{minted}}
\newcommand{\amplfile}[1]{\inputminted[autogobble=true,breaklines,breakindent=4em,numbers=left,escapeinside=@@,fontsize=\scriptsize,tabsize=2]{ampl}{#1}}
\newenvironment{ampl}{\VerbatimEnvironment{}\begin{minted}[
autogobble=true,
breaklines,
breakindent=4em,
numbers=none,
escapeinside=@@,
fontsize=\scriptsize,
tabsize=2,
]{ampl}}{\end{minted}}
% Some fixes to be loaded at the end
\usepackage{scrhack}


@ -32,6 +32,8 @@
\newcommand*{\halfreified}{\gls{half-reified}\xspace{}}
\newcommand*{\solvers}{\glspl{solver}\xspace{}}
\newcommand*{\solver}{\gls{solver}\xspace{}}
\newcommand*{\Solvers}{\Glspl{solver}\xspace{}}
\newcommand*{\Solver}{\Gls{solver}\xspace{}}
\newcommand*{\variable}{\gls{variable}\xspace{}}
\newcommand*{\Variable}{\Gls{variable}\xspace{}}
\newcommand*{\variables}{\glspl{variable}\xspace{}}


@ -2,126 +2,224 @@
\chapter{Introduction}\label{ch:introduction}
%************************************************
\noindent{}A \gls{dec-prb} is any problem that requires us to make decisions according to a set of rules.
Many important and difficult real-world problems are \glspl{dec-prb}.
Consider, for instance, deciding on the schedule for a country's train system or on the stand-by locations for the ambulances in a region.
Formally, we define a \gls{dec-prb} as a set of \variables{} subject to a set of logical \constraints{}.
A \gls{sol} to such a problem is the \gls{assignment} of all \variables{} to values that abide by the logic of the \constraints{}.
These problems are also highly computationally complex: even with the fastest computers there is no simple way to find a \gls{sol}.
They get even more complex when we consider \glspl{opt-prb}: if there are multiple \glspl{sol}, some might be preferred over others.
But, although these problems might be hard to solve, finding a (good) solution for these problems is essential in many walks of life.
The field of \gls{or} uses advanced computational methods to help make (better) decisions.
And, although some problems require the creation of specialised algorithms, more often than not problems are solved by reducing them to another problem.
Famous classes of decision and \glspl{opt-prb}, such as \gls{sat} \autocite{biere-2021-sat}, \gls{cp} \autocite{rossi-2006-cp}, and \gls{mip} \autocite{wolsey-1988-mip}, have been studied extensively.
And, over the years, highly specialised \solvers{} have been created to find \glspl{sol} for these classes of problems.
Nowadays, most decision and \glspl{opt-prb} are solved by encoding them into a well-known class of problems.
Encoding one problem in terms of another is, however, a difficult task.
The problem classes can be restrictive: the input to the \solver{}, the \gls{slv-mod}, can only contain \gls{native} \variables{} and \constraints{}.
This means that the types of \variables{} and the relationships expressed by \constraints{} have to be directly supported by the \solver{}.
Furthermore, \constraints{} can only refer to \variables{}; they cannot (directly) depend on other \constraints{}.
For example, \gls{sat} \solvers{} can only reason about Boolean \variables{}, deciding if something is true or false.
Their \constraints{} have to be in the form of disjunctions of Boolean \variables{}, or their negations.

Not only does the encoding have to be correct, it also has to be performant.
There are often many possible correct encodings of a problem, but there can be a significant difference in how quickly the \solver{} will find a \gls{sol} for them.
The preferred encoding can, however, differ between \solvers{}.
Two \solvers{} designed to solve the same problem class might perform very differently.

\Cmls{} have been designed to tackle these issues.
They serve as a level of abstraction between the user and the \solver{}.
Instead of providing a flat list of \variables{} and \constraints{}, the user can create a \cmodel{} using the more natural syntax of the \cml{}.
A \cmodel{} can generally describe a class of problems through the use of \parameters{}, values assigned by external input.
Once given a complete assignment of the \parameters{}, the \cmodel{} forms an \instance{}.
The language will then create a \gls{slv-mod}, through a process called \gls{rewriting}, and interface with the \solver{} when trying to find an appropriate \gls{sol}.

A commonly used \cml{} is \glsxtrshort{ampl} \autocite{fourer-2003-ampl}.
\glsxtrshort{ampl} was originally designed as a common interface between different \gls{mip} \solvers{}.
The language provides a natural way to define numeric \variables{} and express \constraints{} in the form of linear equations, as described by the class of problems.
Crucially, the same \glsxtrshort{ampl} model can be used with different \solvers{}.
\glsxtrshort{ampl} was later extended to include other \solver{} targets, including \gls{cp} and quadratic \solvers{}.
As such, additional types of \constraints{} for these problem classes have been added to the language, removing the restriction that \constraints{} must be linear equations.
Let us introduce the \glsxtrshort{ampl} language by modelling the open shop problem.
In the open shop problem we are tasked with scheduling jobs with multiple tasks.
Each task must be performed by an assigned machine.
A machine can only perform one task at a time.
Similarly, only one task of the same job can be scheduled at any one time.
We assume that each job has a task for every machine.
As an \gls{opt-prb}, our goal is to find a schedule that minimises the finishing time of the last task.
\begin{listing}
\amplfile{assets/misc/intro_open_shop.mod}
\caption{\label{lst:intro-open-shop} An \glsxtrshort{ampl} model of the open shop problem}
\end{listing}
\Cref{lst:intro-open-shop} shows an \glsxtrshort{ampl} model for the open shop problem.
In order of occurrence, \lrefrange{line:intro:param:start}{line:intro:param:end} show the declarations of the parameters.
To create an \instance{} of the problem, the user provides the number of jobs and machines that are considered, and the duration of each task.
\Lrefrange{line:intro:var:start}{line:intro:var:end} show the \variable{} declarations: for each task we decide on its start time.
Additionally, we declare the end time of the last task as a \variable{}, to ease the modelling of the problem.
This \variable{} is made to be our optimisation goal on \lref{line:intro:goal}.
Finally, \lrefrange{line:intro:con:start}{line:intro:con:end} express the \constraints{} of our problem in terms of equations bound by logic.
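The core requirements of the open shop problem are disjunctive: for any two tasks \(i\) and \(j\) that share a machine or a job, with start times \(s_i, s_j\) and durations \(d_i, d_j\), the model must enforce
\begin{equation*}
  s_i + d_i \le s_j \quad\lor\quad s_j + d_j \le s_i,
\end{equation*}
that is, one of the two tasks must finish before the other starts (a mathematical restatement of the problem description above, not a fragment of the listing).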
The \glsxtrshort{ampl} model provides a clear definition of the problem class, but it can be argued that its meaning is hard to decipher.
\glsxtrshort{ampl} does not provide any way to capture common concepts, such as one task preceding another or that two tasks cannot overlap in our example.
Additionally, the process of encoding an \instance{} into a \gls{slv-mod} happens entirely ``behind the scenes''.
There is no way for a \solver{} to specify how, for example, an operator is best rewritten.
As such, \glsxtrshort{ampl} cannot rewrite all models for all its \solvers{}.
For example, since the model in \cref{lst:intro-open-shop} uses an \mzninline{or} operator, it can only be encoded for \solvers{} that support \glsxtrshort{ampl}'s \gls{cp} interface.
Other \cmls{}, such as \gls{essence} \autocite{frisch-2007-essence} and \glsxtrshort{opl} \autocite{van-hentenryck-1999-opl}, support the rewriting of models for different problem classes, but exhibit the same problems.
They do not provide any way to capture common concepts.
And, apart from adapting the rewriting mechanism itself, there is no way for a \solver{} to influence their preferred encoding.
\Gls{clp}, as used in the Prolog language, offers a very different mechanism to create a \cmodel{}.
In these languages, the modeller is encouraged to create high-level concepts and provide the way in which they are rewritten into \gls{native} \constraints{}.
For example, the concepts of one task preceding another and non-overlapping of tasks could be defined in Prolog as:
\begin{minted}[
autogobble=true,
breaklines,
breakindent=4em,
numbers=none,
escapeinside=@@,
fontsize=\scriptsize,
tabsize=2,
]{prolog}
precedes(StartA, DurA, StartB) :-
    StartA + DurA < StartB.
nonoverlap(StartA, DurA, StartB, DurB) :-
    precedes(StartA, DurA, StartB) ; precedes(StartB, DurB, StartA).
\end{minted}
The definition of \mzninline{nonoverlap} requires that either task A precedes task B, or vice versa.
However, unlike the \gls{ampl} model, Prolog would not provide this choice to the \solver{}.
Instead, it will enforce one, test if this works, and otherwise enforce the other.
This is a powerful mechanism where any choices are made in the \gls{rewriting} process, and not in the \solver{}.
The problem with this mechanism is that it requires a highly incremental interface with the \solver{}, one that can incrementally post and retract \constraints{}.
Additionally, \solvers{} might not always be able to verify if a certain set of \constraints{} is consistent.
This makes the behaviour of the \gls{clp} program dependent on the \solver{} that is used.
As such, a \gls{clp} program is not \solver{}-independent.
\minizinc{} \autocite{nethercote-2007-minizinc} is a functional \cml{} that operates on a level in between these two types of languages.
Like \glsxtrshort{ampl}, it is a \solver{}-independent \cml{}.
And like \gls{clp} languages, modellers can define common concepts and control encoding of the \gls{slv-mod}.
The latter is accomplished through the use of function definitions.
For example, a user could create a \minizinc{} function to express the non-overlapping relationship:
\begin{mzn}
predicate nonoverlap(var int: startA, int: durA, var int: startB, int: durB) =
startA + durA < startB \/ startB + durB < startA;
\end{mzn}
Fundamentally, in its \gls{rewriting} process \minizinc{} is a functional language.
It will continuously evaluate the calls in a \minizinc{} \instance{} until it reaches \gls{native} \constraints{}.
Like other functional languages, \minizinc{} allows recursion; it is, in fact, a Turing-complete computational language.
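As a small illustration of this computational power, consider the following sketch (a hypothetical function, not part of the \minizinc{} library) that computes a Fibonacci number entirely during \gls{rewriting}:
\begin{mzn}
% Hypothetical example: a recursive function evaluated during rewriting
function int: fib(int: n) =
  if n <= 1 then n else fib(n - 1) + fib(n - 2) endif;
\end{mzn}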
Using the same mechanism, \minizinc{} defines how an \instance{} is encoded for a \solver{}.
All functionality in the \minizinc{} language is eventually expressed using function calls.
The \solver{}, then, can decide whether this call is a \gls{native} \constraint{}, or how the call should be rewritten.
For example, the logical or-operator, which was only supported for \gls{cp} \solvers{} in \glsxtrshort{ampl}, could be defined for \gls{mip} \solvers{} in \minizinc{} as:
\begin{mzn}
predicate bool_or(var bool: x, var bool: y) =
x + y >= 1;
\end{mzn}
\noindent{}Whereas it could be marked as a \gls{native} \constraint{} for a \gls{cp} \solver{} by not defining the body:
\begin{mzn}
predicate bool_or(var bool: x, var bool: y);
\end{mzn}
Although \minizinc{} is based on this powerful paradigm, its success has surfaced certain problems.
The language was originally designed to target \gls{cp} \solvers{}, where the result contains a small number of highly complex \constraints{}.
Its use has since been extended to rewriting for \gls{mip} and \gls{sat} \solvers{}.
The result is a \gls{slv-mod} with a large number of very simple \constraints{}, generated by a complex library of \minizinc{} functions.
For many applications, \minizinc{} now requires a significant, and sometimes prohibitive, amount of time to rewrite \instances{}.
Unlike \gls{clp} \gls{rewriting}, the \minizinc{} \gls{rewriting} process does not consider incremental changes to its \gls{slv-mod}.
This is another weakness that has become particularly important, since new optimisation methods require the solving of a sequence of closely related \instances{}.
The overhead of \gls{rewriting} all these \instances{} separately can be substantial.
In this thesis we revisit the \gls{rewriting} of functional \cmls{} into \glspl{slv-mod}.
It is our aim to design and evaluate an architecture for \cmls{} that can accommodate the modern uses of these languages.
\section{The Problems of Rewriting MiniZinc}
\minizinc{} is one of the most prominent \cmls{}.
It is an expressive language that, in addition to user-defined functions, gives modellers access to advanced features, such as many types of \variables{}, annotations, and an extensive standard library of \constraints{}.
All of which can be used with all \solver{} targets.
Because of the popularity and maturity of the language, there is a large suite of \minizinc{} models available that can be used as benchmarks.
The language has also been used in multiple studies as a host for meta-optimisation techniques \autocite{ingmar-2020-diverse,ek-2020-online}.
Originally designed as a standard for different \gls{cp} \solvers{}, \minizinc{} exhibits many of the problems described earlier.
The existing process for translating \minizinc{} into a \gls{slv-mod} is a somewhat ad-hoc, (mostly) single-pass, recursive unrolling procedure, and many aspects (such as call overloading) are resolved dynamically.
In its original application, this was acceptable: models (both before and after \gls{rewriting}) were small.
Now, the language is also used to target low-level \solvers{}, such as \gls{sat} and \gls{mip}.
For these \solvers{}, encoding the same \minizinc{} \instance{} results in a much larger \gls{slv-mod}.
Additionally, \minizinc{} is increasingly used as part of various meta-optimisation toolchains.
To a great extent, this is testament to the effectiveness of the language.
However, as they have become more common, these extended uses have revealed weaknesses of the existing \minizinc{} tool chain.
In particular:
\begin{itemize}
\item The \minizinc{} \compiler{} is inefficient.
It does a surprisingly large amount of work for each expression (especially resolving sub-typing and overloading), which may be repeated many times.
And as models generated for low-level \solver{} technologies can be quite large, the resulting \gls{rewriting} procedure can be intolerably slow.
As the model transformations implemented in \minizinc{} become more sophisticated, these performance problems are simply magnified.
\item The generated models often contain unnecessary constraints.
During \gls{rewriting}, functional expressions are replaced with constraints.
But this breaks the functional dependencies: if the original expression later becomes redundant (due to model simplifications), \minizinc{} may fail to detect that the constraint can be removed.
\item The reasoning about \gls{reification} in the \minizinc{} \compiler{} is limited.
An important decision made during \gls{rewriting} is whether a \constraint{} can be enforced directly or whether it depends on other \constraints{}.
In the latter case, a \gls{reification} has to be introduced: an often costly form of the \constraint{} that determines whether the \constraint{} holds, rather than enforcing the \constraint{} itself.
It is possible, however, that further \gls{rewriting} might reveal a \gls{reification} to be unnecessary.
Currently, the \compiler{} cannot reverse any \gls{reification} decisions once they are made.
And, it also does not consider \gls{half-reif}, a cheaper alternative to \gls{reification} that is often applicable.
\item Monolithic \gls{rewriting} is wasteful.
When \minizinc{} is used as part of a meta-optimisation toolchain, there is typically a large base model common to all sub-problems, and a small set of \constraints{} which are added or removed in each iteration.
With the existing \minizinc{} architecture, however, the whole model must be fully rewritten to a \gls{slv-mod} each time.
This repeats all the work done to rewrite the base model, so a large (sometimes dominant) portion of runtime is spent \gls{rewriting} the core model over and over again.
It also prevents the \solver{} from carrying over anything it learnt from one problem to the next, closely related, problem.
\end{itemize}
In addition, Feydy et al.\ once argued for the use of \gls{half-reif} in \gls{cp} \solvers{} and \cmls{} \autocite*{feydy-2011-half-reif}.
Whereas a \gls{reification} is a technique to reason about the truth-value of a \constraint{}, \gls{half-reif} instead reasons only about the logical implication of this value.
The authors even show how, for a subset of \minizinc{}, \gls{half-reif} could be introduced automatically as an alternative to \gls{reification}.
Although \gls{half-reif} was shown to lead to significant improvements, its automatic introduction was never used in the rewriting of \minizinc{} models.
In part, this is because the method presented for the subset of \minizinc{} did not extend to the full language.
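In \minizinc{} terms, the difference between the two can be sketched as follows (hypothetical predicate definitions, for illustration only):
\begin{mzn}
% Full reification: b holds if-and-only-if the constraint holds
predicate int_le_reif(var int: x, var int: y, var bool: b) =
  b <-> (x <= y);
% Half-reification: b only implies the constraint; b may be
% false even when x <= y holds
predicate int_le_imp(var int: x, var int: y, var bool: b) =
  b -> (x <= y);
\end{mzn}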
\section{Research Objective and Contributions}
Research into \cmls{} \gls{rewriting}, as well as other research into model transformation, has shown that it is difficult to achieve both high performance and generality.
We address these issues by reconsidering the \gls{rewriting} process from the ground up: we aim to make the transformation efficient while still accommodating the optimisation techniques needed to produce high-quality \glspl{slv-mod}.
In order to adapt \cmls{} to modern-day requirements, this thesis aims to \textbf{design, implement, and evaluate a modern architecture for functional \cmls{}}.
Crucially, this architecture should allow us to:
\begin{itemize}
\item easily integrate a range of well-known and new \textbf{optimisation and simplification} techniques,
\item effectively manage the \gls{slv-mod} and \textbf{detect and eliminate} parts of the model that have become unused,
\item formally \textbf{reason about \gls{reification}} to minimise its strain, and
\item support \textbf{incremental usage} of the \cml{} architecture.
\end{itemize}
In the design of this architecture, we analyse the \gls{rewriting} process from the ground up.
We first determine the foundation for the system: an execution model for the basic \gls{rewriting} system.
To ensure the quality of the produced \glspl{slv-mod}, we extend the system with many well-known optimisation techniques.
In addition, we experiment with several new approaches to optimise the resulting \glspl{slv-mod}, including the use of \gls{half-reif}.
Crucially, we ensure that the system is fully incremental, and we experiment with several meta-optimisation applications to evaluate the effects of this incrementality.
Overall, this thesis makes the following contributions:
@ -129,49 +227,51 @@ Overall, this thesis makes the following contributions:
\item It presents a formal execution model of the rewriting of the \minizinc{} language and extends this model with well-known optimisation and simplification techniques.
\item It provides a novel method of tracking of \constraints{} created as part of functional dependencies, ensuring the correct removal of dependencies no longer required.
\item It presents an analysis technique to reason about the (reified) form in which a \constraint{} should be considered.
\item It presents a design and implementation of techniques to automatically introduce \gls{half-reif} of \constraints{} in \minizinc{}.
\item It develops a technique to simplify problem specifications by efficiently eliminating implication chains.
\item It proposes two novel methods to reduce the overhead of using \cmls{} in incremental techniques: \emph{restart-based} meta-search and the \emph{incremental} rewriting of changing models.
\end{enumerate}
\section{Organisation of the Thesis}
This thesis is arranged into the following chapters.
Following this introductory chapter, \emph{\cref{ch:background}} gives an overview of the area of \cmls{}.
First, it introduces the reader to \minizinc{}, how its models are formulated, and how they are translated to \solver{} specifications.
Then, we review different solving methods such as \gls{sat}, \gls{mip}, and \gls{cp}.
This is followed by a comparison of \minizinc{} with other \cmls{}.
This chapter also reviews techniques that are closely related to \cmls{}.
We conclude this chapter with a description of the current \minizinc{} compiler and the techniques it uses to simplify the \solver{} specifications it produces.
\emph{\Cref{ch:rewriting}} presents a formal execution model for \minizinc{} and the core of our new architecture.
We construct a set of formal rewriting rules for a subset of \minizinc{} called \microzinc{}.
We show how any \minizinc{} model can be reduced to a \microzinc{} model and, as such, provide rewriting rules for the \minizinc{} language.
Applying the rewriting produces \nanozinc{}, an abstract \solver{} specification language.
Crucially, we show how \nanozinc{} tracks \constraints{} that define a \variable{}, and can therefore correctly remove functional definitions.
This chapter also integrates well-known techniques used to simplify \glspl{slv-mod} into the architecture.
We compare the performance of an implementation of the presented architecture against the existing \minizinc{} infrastructure.
\emph{\Cref{ch:half-reif}} continues on the path of improving \glspl{slv-mod}.
In this chapter, we present a formal analysis technique to reason about \gls{reification}.
This analysis can help us decide whether a \constraint{} has to be reified.
In addition, the analysis allows us to determine whether we can use \gls{half-reif}.
We thus present the first implementation of automatic \gls{half-reif}.
We conclude this chapter by analysing the impact of the usage of \gls{half-reif} for different types of \solvers{}.
\emph{\Cref{ch:incremental}} focuses on the development of incremental methods for \cmls{}.
We introduce two new techniques that allow for the incremental usage of \cmls{}.
We first present a novel technique that eliminates the need for incremental \gls{rewriting}.
Instead, it integrates a meta-optimisation specification, written in the \minizinc{} language, into the \gls{slv-mod}.
Then, we describe a technique to optimise the \gls{rewriting} process for incremental changes to an \instance{}.
This method ensures that no work is done for parts of the \instance{} that remain unchanged.
We conclude this chapter by testing the performance and computational overhead of these two techniques.
Finally, \emph{\Cref{ch:conclusions}} is the concluding chapter of the thesis.
It reiterates the discoveries and contributions of this research to theory and practice, comments on the scope and limitations of the presented architecture, and presents further avenues for research in this area.


@ -858,7 +858,7 @@ Different types of \solvers{} can also have access to different types of \constr
If we consider the well-known travelling salesman problem, then we might model this problem in \gls{ampl} as follows:
\begin{ampl}
set Cities ordered;
set Paths := {i in Cities, j in Cities: ord(i) < ord(j)};
param cost {Paths} >= 0;
@ -876,7 +876,7 @@ Different types of \solvers{} can also have access to different types of \constr
subj to SubtourElimination {k in SubSet diff {0,2**n-1}}:
sum {i in PowerSet[k], j in Cities diff PowerSet[k]: (i,j) in Paths} X[i,j] +
sum {i in PowerSet[k], j in Cities diff PowerSet[k]: (j,i) in Paths} X[j,i] >= 2;
\end{ampl}
This model shows that the \gls{ampl} syntax has many features similar to \minizinc{}.
Like \minizinc{}, \gls{ampl} has an extensive expression language, which includes \gls{generator} expressions and a vast collection of \glspl{operator}.