Initial drafts of the summary sections

This commit is contained in:
Jip J. Dekker 2021-07-27 17:57:05 +10:00
parent edf81a9b77
commit dfa29c41d0
4 changed files with 53 additions and 34 deletions


@@ -1620,3 +1620,15 @@ For the standard \flatzinc{} \constraints{}, it employs \gls{propagation} method
In the current implementation, the \minizinc{} \compiler{} propagates mostly Boolean \constraints{} in this phase.
It tries to reduce the number of Boolean \variables{} and the number of literals in clauses and conjunctions.
\section{Summary}
This chapter summarised the key knowledge required to understand the \gls{rewriting} of \minizinc{} \instances{} into \glspl{slv-mod}.
It introduced the practice of \constraint{} modelling and the syntax of the \minizinc{} language.
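For illustration, a small \minizinc{} model in the style discussed in this chapter could look as follows; the particular \variables{} and \constraints{} are chosen purely as an example.

\begin{verbatim}
include "alldifferent.mzn";

int: n = 4;                           % parameter, fixed before solving
array [1..n] of var 0..10: start;     % decision variables
constraint alldifferent(start);       % a global constraint
constraint start[1] + 2 <= start[2];  % a simple precedence constraint
solve minimize max(start);            % objective to be minimised
\end{verbatim}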
We also compared \minizinc{} to other \cmls{} and found many similarities.
This indicates that the research presented in this thesis could be applied to these languages as well.
Using \cmls{}, a modeller can easily express a problem to be solved by a variety of \solver{} programs.
We gave a brief overview of the main methods used by these \solvers{} and their problem formats, to which a \cmodel{} must be rewritten.
Finally, we discussed the \gls{rewriting} process central to \cmls{} in more detail, focusing on the \gls{rewriting} conducted by the current implementation of \minizinc{}.
The next chapter presents a more efficient architecture to perform the \gls{rewriting} from \minizinc{} \instances{} to \glspl{slv-mod}, based on a set of formalised \gls{rewriting} rules that allow better reasoning about functional \constraints{}.


@@ -848,27 +848,18 @@ We are convinced, however, that we can get closer to its performance given the r
on recursive functions: Takeuchi, Fibonacci, and Ackermann.}
\end{figure}
\section{Summary}%
\label{sec:rew-summary}
This chapter presented a new architecture for the \gls{rewriting} of \minizinc{} \instances{} to \glspl{slv-mod}.
We introduced the intermediate languages \microzinc{} and \nanozinc{}, central to the \gls{rewriting} architecture.
\minizinc{} is transformed into a set of \microzinc{} definitions and a \nanozinc{} program.
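As a small sketch (shown in \minizinc{} syntax only; the corresponding \microzinc{} and \nanozinc{} forms are defined in this chapter), a user-defined function such as the following is compiled into a \microzinc{} definition, while the call to it in the \instance{} becomes part of the \nanozinc{} program.

\begin{verbatim}
% The function body is compiled once, into a MicroZinc definition.
function var int: manhattan(var int: x1, var int: y1,
                            var int: x2, var int: y2) =
  abs(x1 - x2) + abs(y1 - y2);

array [1..4] of var 0..10: p;
% The call is rewritten on demand during the partial evaluation
% of the NanoZinc program.
constraint manhattan(p[1], p[2], p[3], p[4]) <= 5;
solve satisfy;
\end{verbatim}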
Then, the \nanozinc{} program is partially evaluated using the \microzinc{} definitions.
We presented the formal \gls{rewriting} rules applied during this partial evaluation.
Continuously applying these rules would result in a correct but inefficient \gls{slv-mod}.
Therefore, we extended the partial evaluation with well-known techniques to improve the \gls{slv-mod}: \gls{propagation} is used throughout the \gls{rewriting} process (possibly to \gls{fixpoint}), calls can be delayed until more information is known, \gls{cse} is used both when evaluating calls and during the \minizinc{} to \microzinc{} transformation, and \constraints{} can be aggregated when beneficial for the \solver{}.
Crucially, the system can correctly remove all unused and irrelevant \variables{} and \constraints{}, through \nanozinc{}'s mechanism of attaching \constraints{} to the \variable{} they define.
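The following \minizinc{} fragment sketches two of these situations; the exact \gls{slv-mod} that results depends on the target \solver{} library, so the comments only describe the intended effect.

\begin{verbatim}
var -10..10: x;
var 0..5: y;
var 0..5: z;

% CSE: abs(x) occurs twice, so a single new variable is introduced,
% defined by one functional constraint that is attached to it.
constraint abs(x) + abs(x) <= 8 \/ y > z;

% Aggregation: the linear expression is collected into a single
% solver-level linear constraint, avoiding intermediate variables.
constraint 2*y + 3*z <= 10;

% If propagation later removes the left disjunct above, the variable
% introduced for abs(x) loses its last use and is deleted together
% with its defining constraint.
solve satisfy;
\end{verbatim}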
The prototype of this architecture is shown to be significantly faster than the current implementation of \minizinc{}.
The next chapter further extends the architecture to better reason about \gls{reif}, to fully avoid it or use \gls{half-reif} where possible.


@@ -1191,5 +1191,20 @@ Through the use of \gls{half-reif}, \gls{openwbo} is able to find and prove the
It negatively impacts one \instance{}, for which a \gls{sol} is no longer found.
It is surprising that the effect is so positive, since its \gls{rewriting} statistics for \gls{maxsat} showed the least amount of change.
\section{Summary}
\label{sec:half-summary}
The use of \gls{reified} \constraints{} can be detrimental to \solver{} performance, and the \gls{rewriting} process should therefore use non-\gls{reified} or \gls{half-reified} \constraints{} whenever possible.
This chapter presented a framework to reason about \gls{reif} during the \gls{rewriting} process.
It provided a \microzinc{} analysis that associates each expression with a certain context.
Depending on the context, we then know whether a \constraint{} can remain non-\gls{reified}, can be \gls{half-reified}, or must be \gls{reified}.
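For example, in the following fragment the first occurrence of the linear \constraint{} is in a positive context and can be \gls{half-reified}, whereas the occurrence under \texttt{xor} is in a mixed context and must be fully \gls{reified}; the control variables mentioned in the comments are introduced during \gls{rewriting} and named here only for illustration.

\begin{verbatim}
var 0..10: x;
var 0..10: y;
var bool: b;

% Positive context: it suffices to introduce a control variable b2 with
% the half-reified constraint b2 -> (x + y <= 10) and post b \/ b2.
constraint b \/ x + y <= 10;

% Mixed context: the truth value is needed in both directions, so a
% fully reified constraint b3 <-> (x + y <= 10) is required.
constraint (x + y <= 10) xor (y >= 3);

solve satisfy;
\end{verbatim}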
Notably, the best context cannot always be determined without a complete \gls{parameter-assignment} and \gls{propagation}.
We showed how this can be corrected through the definitions of the \gls{reif} predicate.
Finally, we adapted the simplification techniques used during \gls{rewriting} to the use of \gls{half-reif}: we corrected \gls{cse} to take the context of an expression into account, and we introduced a new simplification technique, \gls{chain-compression}, to eliminate \glspl{implication-chain} introduced by \gls{half-reif}.
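Schematically, for an introduced control \variable{} $b_2$ that occurs in no other \constraint{}, the compression performs the replacement
\[ (b_1 \rightarrow b_2) \land (b_2 \rightarrow c) \quad\mbox{becomes}\quad b_1 \rightarrow c, \]
after which $b_2$ and both original implications can be removed.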
To make full use of \gls{half-reif}, we extended \gls{chuffed} with two \glspl{propagator} for \gls{half-reified} \constraints{}, which were shown to be effective for certain models.
We also extended the \minizinc{} \gls{linearization} and \gls{booleanization} libraries to make use of \gls{half-reif} in their \glspl{decomp}.
Although the results for the \gls{mip} \solvers{} were mixed, the results for the \gls{maxsat} \solver{} were very encouraging.
Another significant aim of this thesis is to improve the incremental use of \constraint{} modelling.
The next chapter will present both a method to model \gls{meta-optimization} algorithms directly in \minizinc{}, and a method to extend the \gls{rewriting} architecture with an incremental interface.


@@ -813,16 +813,17 @@ The advantage in solve time using \gls{rbmo} is more pronounced here, but it is
\section{Summary}
This chapter presented two novel methods that strive to eliminate the overhead of using \cmls{} for \gls{meta-optimization}.
Primarily, we introduced \gls{rbmo}, a method that allows modellers to describe \gls{meta-optimization} algorithms in \minizinc{} as a function that is applied on every restart.
This method is intuitive and can be used to describe many well-known \gls{meta-optimization} techniques.
Crucially, the \gls{rbmo} definition created by the modeller can be rewritten into the \gls{slv-mod}.
Although new \solver{} functionality is required, this method eliminates the need for repeated \gls{rewriting}.
We have shown that this method is very efficient, even when compared to an unrealistic ``oracle'' approach that does not require any computations.
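As an illustration, a simple random \gls{neighbourhood} for \gls{rbmo} could be written as follows; the \texttt{status}, \texttt{sol}, and \texttt{uniform} builtins and the \texttt{on\_restart} annotation follow the interface proposed in this chapter, but their exact names and signatures are shown here only as an assumption.

\begin{verbatim}
% (status, sol, uniform, START, and on_restart as proposed in this
%  chapter; exact names and signatures may differ.)
array [1..10] of var 0..100: x;
var int: obj = sum(x);

% On every restart after the first, fix roughly 80% of the variables
% to their value in the last solution and leave the rest free.
predicate random_neighbourhood() =
  if status() != START then
    forall (i in 1..10) (
      if uniform(0.0, 1.0) < 0.8 then x[i] = sol(x[i]) else true endif
    )
  else true endif;

solve :: on_restart("random_neighbourhood") minimize obj;
\end{verbatim}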
When required, however, the architecture presented in this thesis can efficiently perform \gls{incremental-rewriting}.
The architecture provides an incremental interface that allows the modeller to add and later retract \variables{} and \constraints{} in chronological order.
Internally, the \gls{rewriting} process is extended to maintain a \gls{trail}, which makes it possible to retract \constraints{} and minimise the portion of the \instance{} that has to be rewritten again.
\Gls{incremental-rewriting} is shown to save a significant amount of time, compared to repeatedly \gls{rewriting} the full \instance{}.
We also discussed how incremental changes to the \gls{slv-mod} can be communicated to the \solver{} at different levels, but could not show a clear benefit of this incremental communication.
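Conceptually, the incremental interface is used as sketched below, where the comments indicate the chronological steps; the fragment only illustrates the protocol, as the interface itself is part of the \gls{rewriting} architecture rather than of the \minizinc{} language.

\begin{verbatim}
% Step 1: the base instance, rewritten once.
array [1..5] of var 0..20: x;
constraint sum(x) >= 30;
solve satisfy;

% Step 2 (added later): a new variable and constraint are appended and
% rewritten incrementally, reusing all the work done for step 1.
var 0..20: y;
constraint x[1] + y <= 15;

% Step 3 (retracted): the additions of step 2 are undone in reverse
% chronological order by restoring the trail; nothing from step 1 is
% rewritten again.
\end{verbatim}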
The next and final chapter presents the conclusions of this thesis, which reiterate the discoveries and contributions of this research to theory and practice, comment on the scope and limitations of the presented architecture, and present further avenues for research in this area.