%************************************************
\chapter{Introduction}\label{ch:introduction}
%************************************************
\noindent{}High-level \cmls{}, like \minizinc{}, were originally designed as convenient input languages for \gls{cp} solvers \autocite{marriott-1998-clp}.
A user would write a model consisting of a few loops or comprehensions; given a data file for the parameters, this would be rewritten into a relatively small set of constraints which would be fed whole into the solver.
The existing process for translating \minizinc\ into solver-specific constraints is a somewhat ad-hoc, (mostly) single-pass, recursive unrolling procedure, and many aspects (such as call overloading) are resolved dynamically.
In its original application, this was acceptable: models (both before and after translation) were small, translation was fast, and the expected results of translation were obvious.
The current architecture is illustrated in \Cref{sfig:4-oldarch}.
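For instance, a small model of this kind (a purely illustrative sketch rather than an example drawn from any particular application) consists of little more than a comprehension over the parameters; once \texttt{n} and \texttt{duration} are fixed by a data file, the single \texttt{forall} below unrolls into a set of pairwise disjunctions that can be passed to a \gls{cp} solver essentially as-is.
\begin{verbatim}
int: n;                            % number of tasks, read from a data file
array[1..n] of int: duration;      % task durations, read from a data file
array[1..n] of var 0..1000: start; % decision variables

% no two tasks may overlap (a naive pairwise decomposition)
constraint forall (i, j in 1..n where i < j) (
    start[i] + duration[i] <= start[j] \/
    start[j] + duration[j] <= start[i]
);

solve minimize max (i in 1..n) (start[i] + duration[i]);
\end{verbatim}
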
But over time, the uses of high-level \cmls{} have expanded greatly from this original vision.
These languages are now used to generate input for wildly different solver technologies: not just \gls{cp}, but also \gls{mip} \autocite{wolsey-1988-mip}, \gls{sat} \autocite{davis-1962-dpll} and \gls{cbls} \autocite{bjordal-2015-fzn2oscarcbls} solvers.
Crucially, the same constraint model can be used with any of these solvers; this is achieved through the use of solver-specific libraries of constraint definitions.
In \minizinc{}, these solver libraries are themselves written in \minizinc{}.
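Each solver library defines the constraints of the language in terms of primitives that its particular solver understands.
The sketch below is only indicative (the definitions shipped with actual solver libraries are more involved): a library for a solver without native support for \texttt{all\_different} might decompose it into pairwise disequalities, while the library for a solver with a dedicated propagator would instead map the same call directly onto that propagator.
\begin{verbatim}
% possible library definition for a solver without native support:
% decompose the global constraint into pairwise disequalities
predicate all_different(array[int] of var int: x) =
    forall (i, j in index_set(x) where i < j) (
        x[i] != x[j]
    );
\end{verbatim}
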
As a result, \minizinc\ turned out to be much more expressive than expected, so more and more preprocessing and model transformation has been added both to the core \minizinc\ library and to users' models.
In addition to one-shot solving, \minizinc\ is also frequently used as the basis of a larger meta-optimisation tool chain: programmatically generating a model, solving it, and then using the results to generate another, slightly different, model.
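One common instance of this pattern is an optimisation loop that repeatedly tightens a bound on the objective.
The fragment below is an illustrative sketch only: the file name \texttt{base\_model.mzn}, the objective variable \texttt{cost} (assumed to be declared in that file), and the parameter \texttt{previous\_cost} are all hypothetical.
Only the final constraint differs from one iteration to the next.
\begin{verbatim}
% the base model, identical in every iteration; assumed to
% declare the objective variable `cost'
include "base_model.mzn";

% objective value found in the previous iteration, passed in as data
int: previous_cost;

% the only change between iterations: demand a strictly
% better solution than the one already found
constraint cost < previous_cost;
\end{verbatim}
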
To a great extent, this is a testament to the effectiveness of the language.
However, as they have become more common, these extended uses have revealed weaknesses of the existing \minizinc\ tool chain.
In particular:
\begin{itemize}
\item The \minizinc\ compiler is inefficient.
It does a surprisingly large amount of work for each expression (especially resolving sub-typing and overloading), which may be repeated many times --- for example, inside the body of a comprehension.
Since models generated for other solver technologies (particularly \gls{mip}) can be quite large, the resulting flattening procedure can be intolerably slow.
As the model transformations implemented in \minizinc\ become more sophisticated, these performance problems are simply magnified.
\item The generated models often contain unnecessary constraints.
During the transformation, functional expressions are replaced by introduced variables together with constraints that define them.
But this breaks the functional dependencies: if the original expression later becomes redundant (due to model simplifications), \minizinc\ may fail to detect that the defining constraint can be removed (see the sketch following this list).
\item Monolithic flattening is wasteful.
When \minizinc\ is used for multi-shot solving, there is typically a large base model common to all sub-problems, and a small set of constraints which are added or removed in each iteration.
But with the existing \minizinc\ architecture, the whole model must be fully re-flattened each time.
Not only does this repeat all the work done to flatten the base model (a large, sometimes dominant, portion of the runtime is spent flattening the same core model over and over again), but it also prevents \emph{the solver} from carrying over anything it learnt from one problem to the next, closely related, problem.
\end{itemize}
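To illustrate the second point above: flattening replaces a functional expression such as \texttt{abs(x)} by a new variable, say \texttt{z}, together with a constraint defining \texttt{z} as the absolute value of \texttt{x}.
In the purely illustrative fragment below, the call occurs only inside a disjunction; if \texttt{b} is later fixed to \texttt{true}, the disjunction is trivially satisfied and the comparison on \texttt{z} disappears, but the constraint defining \texttt{z} may remain in the flattened model even though \texttt{z} is no longer used anywhere.
\begin{verbatim}
var -10..10: x;
var bool: b;

% the absolute value is only needed if b turns out to be false
constraint b \/ (abs(x) > 5);
\end{verbatim}
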
In this thesis, we revisit the rewriting of high-level \cmls\ into solver-level constraint models and describe an architecture that allows us to:
\begin{itemize}
\item easily integrate a range of \textbf{optimisation and simplification} techniques,
\item effectively \textbf{detect and eliminate dead code} introduced by functional definitions, and
\item support \textbf{incremental flattening and solving}, and better integrate with solvers that provide native incremental features.
\end{itemize}