%************************************************
\chapter{Rewriting Constraint Modelling Languages}\label{ch:rewriting}
%************************************************
\section{Experiments}
We have created a prototype implementation of the architecture presented in the
preceding sections. It consists of a compiler from \minizinc\ to \microzinc, and
an incremental \microzinc\ interpreter producing \nanozinc. The system supports
a significant subset of the full \minizinc\ language; the notable missing
features are support for set and float variables, option types, and the
compilation of model output expressions and annotations. We will release our
implementation under an open-source license.

The implementation has not yet been optimised for performance; rather, it was
created as a faithful implementation of the developed concepts, in order to
evaluate their suitability and to provide a solid baseline for future
improvements. In the following, we present experimental results on basic
flattening performance, as well as on incremental flattening and solving, which
demonstrate the efficiency gains made possible by the new architecture.
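To make the task performed by this tool chain concrete, the following small
example (constructed for illustration, and not part of the benchmark set used
below) shows the kind of rewriting involved: a \minizinc\ constraint over a
nested expression is flattened into solver-level \flatzinc\ primitives. The
introduced variable names and bounds are schematic.

\begin{verbatim}
% MiniZinc input
var -10..10: x;
var -10..10: y;
constraint abs(x - y) >= 3;

% FlatZinc-style output (introduced names are illustrative)
var -20..20: t1;   % t1 = x - y
var   0..20: t2;   % t2 = |t1|
constraint int_lin_eq([1, -1, -1], [x, y, t1], 0);
constraint int_abs(t1, t2);
constraint int_le(3, t2);
\end{verbatim}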
\subsection{Basic Flattening}
As a first experiment, we selected 20 models from the annual \minizinc\
Challenge and compiled 5 instances of each model to \flatzinc, using the current
\minizinc\ release, version 2.4.3, and the new prototype system. In both cases
we used the standard \minizinc\ library of global constraints (i.e., global
constraints are decomposed rather than mapped to solver built-ins, in order to
stress-test the flattening). We measured pure flattening time, i.e., excluding
the time required for parsing and type checking in version 2.4.3, and excluding
the time required for the compilation to \microzinc\ in the new system (this
compilation is usually very fast). Times are averages of 10
runs.\footnote{All models obtained from
\url{https://github.com/minizinc/minizinc-benchmarks}:
\texttt{\justify{}accap, amaze, city-position, community-detection,
depot-placement, freepizza, groupsplitter, kidney-exchange, median-string,
multi-knapsack, nonogram, nside, problem, rcpsp-wet, road-cons, roster,
stack-cuttingstock, steelmillslab, train, triangular, zephyrus}.}
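To illustrate what this stress test entails: a global constraint such as
\texttt{all\_different} is not passed to the solver as a built-in, but is
unfolded into its standard-library decomposition during flattening. The
following schematic example (ours, with array accesses simplified to
individual variables) shows the effect.

\begin{verbatim}
% MiniZinc input using a global constraint
include "all_different.mzn";
array[1..3] of var 1..3: q;
constraint all_different(q);

% FlatZinc resulting from the library decomposition:
% one disequality per pair of variables
var 1..3: q1;
var 1..3: q2;
var 1..3: q3;
constraint int_ne(q1, q2);
constraint int_ne(q1, q3);
constraint int_ne(q2, q3);
\end{verbatim}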
\Cref{sfig:compareruntime} compares the flattening time for each of the 100
instances. Points below the line indicate that the new system is faster. On
average, the new system achieves a speed-up of $2.3$, with only very few
instances achieving no speed-up at all. In terms of memory performance
(\Cref{sfig:comparemem}), version 2.4.3 can sometimes still outperform the new
prototype. We have identified the main memory bottlenecks to be our currently
unoptimised implementations of the lookup tables used for common subexpression
elimination (CSE) and of argument vectors.
These are very encouraging results, given that we are comparing a largely
unoptimised prototype to a mature piece of software.
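The CSE lookup tables mentioned above serve the following purpose: whenever a
subexpression is flattened, the interpreter first checks whether an equivalent
expression has been flattened before, and if so reuses the auxiliary variable
created at that point instead of introducing a duplicate. A schematic example
(introduced names are illustrative):

\begin{verbatim}
% MiniZinc input: x * x occurs twice
var 1..10: x;
var 1..10: y;
constraint x * x + y >= 10;
constraint x * x - y <= 5;

% FlatZinc-style output: the product is introduced once and reused
var 1..100: t;                                 % t = x * x
constraint int_times(x, x, t);
constraint int_lin_le([-1, -1], [t, y], -10);  % t + y >= 10
constraint int_lin_le([ 1, -1], [t, y],   5);  % t - y <= 5
\end{verbatim}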
\begin{figure}[t]
\centering
\begin{subfigure}[b]{.48\columnwidth}
\centering
\includegraphics[width=\columnwidth]{assets/img/4_compareruntime}
\caption{flattening run time (ms)}
\label{sfig:compareruntime}
\end{subfigure}%
\hspace{0.04\columnwidth}%
\begin{subfigure}[b]{.48\columnwidth}
\centering
\includegraphics[width=\columnwidth]{assets/img/4_comparememory}
\caption{flattening memory (MB)}
\label{sfig:comparemem}
\end{subfigure}
\caption{Performance on flattening 100 \minizinc\ Challenge instances.
\minizinc\ 2.4.3 (x-axis) versus the new architecture (y-axis), log-log plot.
Dots below the line indicate that the new system performs better.}
\label{fig:runtime}
\end{figure}