Add separate dekker_preamble.pdf target that combines all guiding text

Jip J. Dekker 2021-06-07 18:03:05 +10:00
parent 3559dea29b
commit 835f425d48
No known key found for this signature in database
GPG Key ID: 517DF4A00618C9C3
11 changed files with 203 additions and 146 deletions

.gitignore (vendored) · 1 addition

@@ -23,6 +23,7 @@
 ## Generated if empty string is given at "Please type another file name for output:"
 dekker_thesis.pdf
+dekker_preamble.pdf
 ## Bibliography auxiliary files (bibtex/biblatex/biber):
 *.bbl

Makefile

@@ -1,12 +1,15 @@
-TARGETS = dekker_thesis.pdf
+TARGETS = dekker_thesis.pdf dekker_preamble.pdf
 MAKE=make
 LATEX=xelatex
 BIBTEX=biber
+MAKEGLOS=makeglossaries
 BUILDDIR=build
 .PHONY: FORCE
+main: dekker_thesis.pdf
 all: $(TARGETS)
 $(BUILDDIR)/.sync:
@@ -14,7 +17,7 @@ $(BUILDDIR)/.sync:
 %.pdf: %.tex FORCE
 	$(info compiling $@)
-	@pipenv run ./vendor/latexrun --latex-cmd=$(LATEX) --latex-args="-shell-escape --8bit" --bibtex-cmd=$(BIBTEX) --makeglossaries-cmd="makeglossaries" -O $(BUILDDIR) $<
+	@pipenv run ./vendor/latexrun --latex-cmd=$(LATEX) --latex-args="-shell-escape --8bit" --bibtex-cmd=$(BIBTEX) --makeglossaries-cmd=$(MAKEGLOS) -O $(BUILDDIR) $<
 update:
 	pipenv lock
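The practical effect of the new `main` target is worth noting: since `main` is now the first real target in the file, a bare `make` builds only the thesis, while `make all` also builds the standalone preamble PDF. A sketch of the resulting target graph (recipe body simplified; see the diff above for the real latexrun invocation):

```makefile
TARGETS = dekker_thesis.pdf dekker_preamble.pdf

main: dekker_thesis.pdf      # first target => default goal: `make` builds the thesis only

all: $(TARGETS)              # `make all` additionally builds dekker_preamble.pdf

%.pdf: %.tex FORCE           # both PDFs share this pattern rule
	@echo "would compile $@ from $<"

.PHONY: FORCE
FORCE:
```

Special targets such as `.PHONY` do not become the default goal, so declaring `.PHONY: FORCE` before `main` does not change this behaviour.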

chapters/3_rewriting.tex

@@ -2,66 +2,7 @@
 \chapter{Rewriting Constraint Modelling Languages}\label{ch:rewriting}
 %************************************************
-\noindent{}Rewriting a high-level constraint model down into an equivalent
-solver-level constraint model might seem like a simple term rewriting process.
-In reality, however, naive rewriting of the model will often result in a
-sub-optimal solver-level model, which can mean exponentially more work for the
-solver. To combat this problem, many techniques have been developed to create
-more efficient solver-level models, such as: continuously updating variable
-domains according to the constraints, correctly resolving constraint sub-typing
-when variables become fixed, removing any variables and constraints that have
-become unused, detecting duplicate constraints, and reusing duplicate
-functional definitions.
-
-Applying all these optimisations can, however, be time-intensive during the
-rewriting process. Although this was not a problem when high-level \cmls\ were
-targeting \gls{cp} solvers, where the solver-level constraint model stays
-relatively small, it poses a big problem for \gls{mip} and \gls{sat} solvers,
-whose solver-level constraint models are significantly larger.
-
-In this chapter, we revisit the rewriting of high-level \cmls\ into
-solver-level constraint models. We describe a new \textbf{systematic view of
-the execution of \minizinc{}} and build on this to propose a new tool chain.
-We show how this tool chain allows us to:
-\begin{itemize}
-	\item efficiently rewrite high-level constraint models with \textbf{minimal
-	      overhead},
-	\item easily integrate a range of \textbf{optimisation and simplification}
-	      techniques,
-	\item and effectively \textbf{detect and eliminate dead code} introduced by
-	      functional definitions.
-\end{itemize}
-
-The new architecture is shown in \Cref{fig:rew-comp}. A constraint model is
-first compiled into a smaller constraint language called \microzinc{},
-independent of the data. After the \microzinc{} is transformed into byte code,
-it is interpreted together with the data to produce \nanozinc\ code, an
-extension of the existing \flatzinc\ format. The interpreter can even be made
-incremental: in \cref{ch:incremental} we discuss how meta-optimisation can be
-performed without recompilation.
-
-We have developed a prototype of this tool chain and present experimental
-validation of these advantages. The prototype is still highly experimental,
-but preliminary results suggest that the new tool chain can perform flattening
-much faster, and produce better models, than the current \minizinc\ compiler.
-
-This chapter is organised as follows. \Cref{sec:4-micronano} introduces the
-\microzinc\ and \nanozinc\ languages, the new intermediate representation we
-propose to enable more efficient flattening. \Cref{sec:4-simplification}
-describes how we can perform various processing and simplification steps on
-this representation, and in \cref{sec:4-experiments} we report on the
-experimental results of the prototype implementation. Finally,
-\Cref{sec:4-conclusion} presents our conclusions.
-
-\begin{figure}
-	\centering
-	\includegraphics[width=\linewidth]{assets/img/rew_compilation_structure}
-	\caption{\label{fig:rew-comp} The proposed process for the compilation of
-		\minizinc\ instances.}
-\end{figure}
+\input{chapters/3_rewriting_preamble}
 
 \section{\glsentrytext{microzinc} and
 \glsentrytext{nanozinc}}\label{sec:4-micronano}

chapters/3_rewriting_preamble.tex Normal file

@@ -0,0 +1,60 @@
\noindent{}Rewriting a high-level constraint model down into an equivalent
solver-level constraint model might seem like a simple term rewriting process.
In reality, however, naive rewriting of the model will often result in a
sub-optimal solver-level model, which can mean exponentially more work for the
solver. To combat this problem, many techniques have been developed to create
more efficient solver-level models, such as: continuously updating variable
domains according to the constraints, correctly resolving constraint sub-typing
when variables become fixed, removing any variables and constraints that have
become unused, detecting duplicate constraints, and reusing duplicate
functional definitions.

Applying all these optimisations can, however, be time-intensive during the
rewriting process. Although this was not a problem when high-level \cmls\ were
targeting \gls{cp} solvers, where the solver-level constraint model stays
relatively small, it poses a big problem for \gls{mip} and \gls{sat} solvers,
whose solver-level constraint models are significantly larger.

In this chapter, we revisit the rewriting of high-level \cmls\ into
solver-level constraint models. We describe a new \textbf{systematic view of
the execution of \minizinc{}} and build on this to propose a new tool chain.
We show how this tool chain allows us to:
\begin{itemize}
	\item efficiently rewrite high-level constraint models with \textbf{minimal
	      overhead},
	\item easily integrate a range of \textbf{optimisation and simplification}
	      techniques,
	\item and effectively \textbf{detect and eliminate dead code} introduced by
	      functional definitions.
\end{itemize}

The new architecture is shown in \Cref{fig:rew-comp}. A constraint model is
first compiled into a smaller constraint language called \microzinc{},
independent of the data. After the \microzinc{} is transformed into byte code,
it is interpreted together with the data to produce \nanozinc\ code, an
extension of the existing \flatzinc\ format. The interpreter can even be made
incremental: in \cref{ch:incremental} we discuss how meta-optimisation can be
performed without recompilation.

We have developed a prototype of this tool chain and present experimental
validation of these advantages. The prototype is still highly experimental,
but preliminary results suggest that the new tool chain can perform flattening
much faster, and produce better models, than the current \minizinc\ compiler.

This chapter is organised as follows. \Cref{sec:4-micronano} introduces the
\microzinc\ and \nanozinc\ languages, the new intermediate representation we
propose to enable more efficient flattening. \Cref{sec:4-simplification}
describes how we can perform various processing and simplification steps on
this representation, and in \cref{sec:4-experiments} we report on the
experimental results of the prototype implementation. Finally,
\Cref{sec:4-conclusion} presents our conclusions.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{assets/img/rew_compilation_structure}
\caption{\label{fig:rew-comp} The proposed process for the compilation of
\minizinc\ instances.}
\end{figure}
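The compile-once, interpret-per-data split described in this preamble can be illustrated with a toy sketch. This is not the MicroZinc/NanoZinc implementation; the "byte code" here is just a list of constraint templates, and all names (`compile_model`, `interpret`, `int_le`) are illustrative stand-ins:

```python
def compile_model(model_src):
    """Stand-in for the data-independent MiniZinc -> MicroZinc compilation.

    The 'byte code' is a list of (constraint, args) templates in which
    parameter names like "n" are still unresolved placeholders.
    """
    # Hard-coded result for the toy model "var 0..n: x;": 0 <= x <= n.
    return [("int_le", ["x", "n"]), ("int_le", [0, "x"])]

def interpret(bytecode, data):
    """Stand-in for interpreting byte code with data to flat constraints."""
    flat = []
    for name, args in bytecode:
        # Substitute data values for parameter names; leave variables as-is.
        flat.append((name, [data.get(a, a) for a in args]))
    return flat

bytecode = compile_model("var 0..n: x;")   # compile once, independent of data
inst1 = interpret(bytecode, {"n": 10})     # interpret per data file:
inst2 = interpret(bytecode, {"n": 20})     # no recompilation needed
```

The point of the split is visible in the last two lines: changing the data only re-runs the (cheap) interpreter, never the compiler.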

chapters/4_half_reif.tex

@@ -2,11 +2,7 @@
 \chapter{Half Reification}\label{ch:half-reif}
 %************************************************
-\noindent{}In this chapter we investigate the notion of \gls{half-reif} as introduced by Feydy et al.\ \autocite*{feydy-2011-half-reif}.
-We show that modern \gls{cp} solvers still benefit from the use of half-reified propagators.
-We also discuss the advantages of using \gls{half-reif} when writing decompositions, and introduce a new version of the linearisation library that enjoys these advantages.
-We introduce methods to automatically detect when an expression in a \minizinc\ model can be half-reified, enabling modellers to enjoy the advantages of half-reification without having to introduce it manually.
-Finally, we discuss the effect of half-reification on the flattening methods discussed earlier.
+\input{chapters/4_half_reif_preamble}
 
 \section{Introduction to Half Reification}

chapters/4_half_reif_preamble.tex Normal file

@@ -0,0 +1,6 @@
\noindent{}In this chapter we investigate the notion of \gls{half-reif} as introduced by Feydy et al.\ \autocite*{feydy-2011-half-reif}.
We show that modern \gls{cp} solvers still benefit from the use of half-reified propagators.
We also discuss the advantages of using \gls{half-reif} when writing decompositions, and introduce a new version of the linearisation library that enjoys these advantages.
We introduce methods to automatically detect when an expression in a \minizinc\ model can be half-reified, enabling modellers to enjoy the advantages of half-reification without having to introduce it manually.
Finally, we discuss the effect of half-reification on the flattening methods discussed earlier.
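The semantic difference between full and half reification can be checked by brute force on a toy domain. This sketch (illustrative only, not the thesis code) takes the constraint \(c(x): x \le 5\): full reification enforces \(b \leftrightarrow c(x)\), while half-reification only enforces the implication \(b \rightarrow c(x)\):

```python
def full_reif(b, x):
    """Full reification: b holds if and only if the constraint x <= 5 holds."""
    return b == (x <= 5)

def half_reif(b, x):
    """Half reification: b merely implies the constraint x <= 5."""
    return (not b) or (x <= 5)

# Enumerate all (b, x) assignments over a toy domain.
full = {(b, x) for b in (False, True) for x in range(10) if full_reif(b, x)}
half = {(b, x) for b in (False, True) for x in range(10) if half_reif(b, x)}

# Every fully-reified solution is also half-reified...
assert full <= half
# ...and when b is true, both variants force the constraint to hold.
assert {s for s in half if s[0]} == {s for s in full if s[0]}
# The extra half-reified solutions have b false while the constraint holds,
# e.g. (False, 3): half reification never forces b to become true.
assert (False, 3) in half and (False, 3) not in full
```

The weaker implication is what makes half-reified propagators cheaper: they never need to prove the constraint in order to fix \(b\) to true.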

chapters/5_incremental.tex

@@ -1,85 +1,7 @@
 \chapter{Incremental Processing}\label{ch:incremental}
 %************************************************
-\noindent{}In previous chapters we explored the compilation of constraint
-models for a \gls{solver} as a single, linear process, but to solve real-world
-problems \gls{meta-search} algorithms are often used. These methods usually
-require solving almost the same combinatorial problem repeatedly, with only
-slight modifications, thousands of times. Examples of these methods are:
-\begin{itemize}
-	\item Multi-objective search \autocite{jones-2002-multi-objective}.
-	      Optimising multiple objectives is often not supported directly by
-	      solvers. Instead, such problems can be solved using a
-	      \gls{meta-search} approach: find a solution to a (single-objective)
-	      problem, then add further constraints to the original problem and
-	      repeat.
-	\item \gls{lns} \autocite{shaw-1998-local-search}. This is a very successful
-	      \gls{meta-search} algorithm for quickly improving solution quality.
-	      After finding a (sub-optimal) solution to a problem, constraints are
-	      added to restrict the search to the \gls{neighbourhood} of that
-	      solution. When a new solution is found, the constraints are removed,
-	      and constraints for a new \gls{neighbourhood} are added.
-	\item Online Optimisation \autocite{jaillet-2021-online}. These techniques
-	      can be employed when the problem changes rapidly. A problem instance
-	      is continuously updated with new data, such as newly available jobs to
-	      be scheduled or customer requests to be processed.
-	\item Diverse Solution Search \autocite{hebrard-2005-diverse}. Here we aim
-	      to provide a set of solutions that are sufficiently different from
-	      each other, in order to give human decision makers an overview of the
-	      solution space. Diversity can be achieved by repeatedly solving a
-	      problem instance with different objectives.
-	\item Interactive Optimisation \autocite{belin-2014-interactive}. In some
-	      scenarios it might be useful to allow a user to provide direct
-	      feedback on solutions found by the solver. The feedback, in the form
-	      of constraints, is added back into the problem, and a new solution is
-	      generated. Users may also retract some earlier feedback and explore
-	      different aspects of the problem to arrive at the solution that best
-	      suits their needs.
-\end{itemize}
-
-All of these examples have in common that a problem instance is solved, new
-constraints are added, the resulting instance is solved again, and constraints
-may later be removed again.
-
-The usage of these methods is not new to \gls{constraint-modelling}, and they
-have proven to be very useful \autocite{schrijvers-2013-combinators,
-rendl-2015-minisearch, schiendorfer-2018-minibrass, ek-2020-online,
-ingmar-2020-diverse}. In its most basic form, a simple scripting language is
-sufficient to implement these methods, repeatedly calling on the
-\gls{constraint-modelling} infrastructure to compile and solve the adjusted
-constraint models. While improvements to the compilation of constraint models,
-such as the ones discussed in previous chapters, can increase the performance
-of these approaches, the overhead of re-compiling an almost identical model may
-still prove prohibitive, warranting direct support from the
-\gls{constraint-modelling} infrastructure. In this chapter we introduce two
-methods to provide this support:
-\begin{itemize}
-	\item We introduce the notion of restart-based \gls{meta-search} algorithms.
-	      Using a minimal extension to a \cml\ and its target solver, we can
-	      model some \gls{meta-search} algorithms and compile them into
-	      efficient solver-level specifications based on solver restarts,
-	      avoiding re-compilation altogether.
-	\item Alternatively, we can add an incremental interface for adding and
-	      removing constraints to the infrastructure of the \cml{}. Although
-	      this does not avoid the need for re-compilation, it can reduce the
-	      work to only the part of the constraint model that has changed. This
-	      approach can be used when an algorithm cannot be described using
-	      restart-based \gls{meta-search}, or when the required extension is not
-	      available for the solver.
-\end{itemize}
-
-The rest of the chapter is organised as follows. \Cref{sec:6-modelling}
-discusses how restart-based \gls{meta-search} algorithms can be modelled
-declaratively, directly in a \cml{}. \Cref{sec:6-solver-extension} introduces a
-method to compile these \gls{meta-search} specifications into efficient
-solver-level specifications that require only a small extension of existing
-\glspl{solver}. \Cref{sec:6-incremental-compilation} introduces the alternative
-method that extends the \gls{constraint-modelling} infrastructure with an
-interface to add and remove constraints from an existing model while avoiding
-recompilation. \Cref{sec:6-experiments} reports on the experimental results of
-both approaches. Finally, \Cref{sec:6-conclusion} presents our conclusions.
+\input{chapters/5_incremental_preamble}
 
 \section{Modelling of Restart-Based Meta-Search}\label{sec:6-modelling}

chapters/5_incremental_preamble.tex Normal file

@@ -0,0 +1,79 @@
\noindent{}In previous chapters we explored the compilation of constraint
models for a \gls{solver} as a single, linear process, but to solve real-world
problems \gls{meta-search} algorithms are often used. These methods usually
require solving almost the same combinatorial problem repeatedly, with only
slight modifications, thousands of times. Examples of these methods are:
\begin{itemize}
	\item Multi-objective search \autocite{jones-2002-multi-objective}.
	      Optimising multiple objectives is often not supported directly by
	      solvers. Instead, such problems can be solved using a
	      \gls{meta-search} approach: find a solution to a (single-objective)
	      problem, then add further constraints to the original problem and
	      repeat.
	\item \gls{lns} \autocite{shaw-1998-local-search}. This is a very successful
	      \gls{meta-search} algorithm for quickly improving solution quality.
	      After finding a (sub-optimal) solution to a problem, constraints are
	      added to restrict the search to the \gls{neighbourhood} of that
	      solution. When a new solution is found, the constraints are removed,
	      and constraints for a new \gls{neighbourhood} are added.
	\item Online Optimisation \autocite{jaillet-2021-online}. These techniques
	      can be employed when the problem changes rapidly. A problem instance
	      is continuously updated with new data, such as newly available jobs to
	      be scheduled or customer requests to be processed.
	\item Diverse Solution Search \autocite{hebrard-2005-diverse}. Here we aim
	      to provide a set of solutions that are sufficiently different from
	      each other, in order to give human decision makers an overview of the
	      solution space. Diversity can be achieved by repeatedly solving a
	      problem instance with different objectives.
	\item Interactive Optimisation \autocite{belin-2014-interactive}. In some
	      scenarios it might be useful to allow a user to provide direct
	      feedback on solutions found by the solver. The feedback, in the form
	      of constraints, is added back into the problem, and a new solution is
	      generated. Users may also retract some earlier feedback and explore
	      different aspects of the problem to arrive at the solution that best
	      suits their needs.
\end{itemize}

All of these examples have in common that a problem instance is solved, new
constraints are added, the resulting instance is solved again, and constraints
may later be removed again.

The usage of these methods is not new to \gls{constraint-modelling}, and they
have proven to be very useful \autocite{schrijvers-2013-combinators,
rendl-2015-minisearch, schiendorfer-2018-minibrass, ek-2020-online,
ingmar-2020-diverse}. In its most basic form, a simple scripting language is
sufficient to implement these methods, repeatedly calling on the
\gls{constraint-modelling} infrastructure to compile and solve the adjusted
constraint models. While improvements to the compilation of constraint models,
such as the ones discussed in previous chapters, can increase the performance
of these approaches, the overhead of re-compiling an almost identical model may
still prove prohibitive, warranting direct support from the
\gls{constraint-modelling} infrastructure. In this chapter we introduce two
methods to provide this support:
\begin{itemize}
	\item We introduce the notion of restart-based \gls{meta-search} algorithms.
	      Using a minimal extension to a \cml\ and its target solver, we can
	      model some \gls{meta-search} algorithms and compile them into
	      efficient solver-level specifications based on solver restarts,
	      avoiding re-compilation altogether.
	\item Alternatively, we can add an incremental interface for adding and
	      removing constraints to the infrastructure of the \cml{}. Although
	      this does not avoid the need for re-compilation, it can reduce the
	      work to only the part of the constraint model that has changed. This
	      approach can be used when an algorithm cannot be described using
	      restart-based \gls{meta-search}, or when the required extension is not
	      available for the solver.
\end{itemize}

The rest of the chapter is organised as follows. \Cref{sec:6-modelling}
discusses how restart-based \gls{meta-search} algorithms can be modelled
declaratively, directly in a \cml{}. \Cref{sec:6-solver-extension} introduces a
method to compile these \gls{meta-search} specifications into efficient
solver-level specifications that require only a small extension of existing
\glspl{solver}. \Cref{sec:6-incremental-compilation} introduces the alternative
method that extends the \gls{constraint-modelling} infrastructure with an
interface to add and remove constraints from an existing model while avoiding
recompilation. \Cref{sec:6-experiments} reports on the experimental results of
both approaches. Finally, \Cref{sec:6-conclusion} presents our conclusions.
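The solve/constrain/re-solve loop common to these meta-search methods can be sketched with a toy large neighbourhood search. Everything here is a hedged stand-in: `solve` is a fake solver that assigns arbitrary values to free variables (a real solver would optimise), and the "constraints" are simply a dictionary of fixed variables:

```python
import random

def solve(fixed, n, rng):
    """Stand-in solver: fixed variables keep their value, free ones get 0..9."""
    return [fixed.get(i, rng.randint(0, 9)) for i in range(n)]

def lns(iterations, n=8, seed=0):
    """Toy LNS loop: maximise the sum of n variables over 0..9."""
    rng = random.Random(seed)
    best = solve({}, n, rng)                 # initial (sub-optimal) solution
    for _ in range(iterations):
        # "Add constraints": fix roughly half the variables to their current
        # values; the free remainder is the neighbourhood to re-optimise.
        fixed = {i: best[i] for i in range(n) if rng.random() < 0.5}
        cand = solve(fixed, n, rng)          # re-solve the restricted instance
        if sum(cand) >= sum(best):           # keep any improvement, then
            best = cand                      # "remove" the constraints and repeat
    return best
```

Each iteration changes only the `fixed` dictionary, which is exactly the motivation given above: re-compiling the whole model for every such small change would dominate the runtime, so the infrastructure should support adding and removing constraints incrementally.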

chapters/6_conclusions.tex Normal file

@@ -0,0 +1,4 @@
%************************************************
\chapter{Conclusions}\label{ch:conclusions}
%************************************************

dekker_preamble.tex (new file) · 44 additions

@@ -0,0 +1,44 @@
\documentclass[
% TODO: book format
% paper=210mm:148mm,
% DIV=calc,
a4paper,
listof=totoc,
toc=bib,
]{scrbook}
\title{Preamble of A Modern Architecture for High-Level Constraint Modelling Languages}
\author{Jip J. Dekker}
\input{assets/packages}
\input{assets/layout}
\input{assets/shorthands}
% Bibliography preamble
\addbibresource{assets/bibliography/references.bib}
\addbibresource[label=ownpubs]{assets/bibliography/dekker_publications.bib}
% Glossary / Acronym preamble
\input{assets/glossary}
\input{assets/acronyms}
\begin{document}
\frontmatter{}
\include{chapters/0_abstract}
\mainmatter{}
\include{chapters/1_introduction}
\chapter{Rewriting Constraint Models}
\input{chapters/3_rewriting_preamble}
\chapter{Half Reification}
\input{chapters/4_half_reif_preamble}
\chapter{Incremental Processing}
\input{chapters/5_incremental_preamble}
\include{chapters/6_conclusions}
\end{document}

dekker_thesis.tex

@@ -103,6 +103,7 @@ following publication:
 \include{chapters/3_rewriting}
 \include{chapters/4_half_reif}
 \include{chapters/5_incremental}
+\include{chapters/6_conclusions}
 \appendix{}
 \include{chapters/A1_minizinc_grammar}