Fix background information about bounds consistency

This commit is contained in:
Jip J. Dekker 2021-07-26 14:05:49 +10:00
parent 39faee6c40
commit ac4dfb75d7
No known key found for this signature in database
GPG Key ID: 517DF4A00618C9C3
5 changed files with 26 additions and 14 deletions

View File

@ -63,7 +63,7 @@
\newglossaryentry{binding}{
name={binding},
description={A \gls{variable} is said to have a binding \gls{domain} when it is tighter than the bounds that can be computed from its defining expression. A binding \gls{domain} is a \gls{constraint} of the overall \gls{model}.},
description={A \gls{variable} is said to have a binding \gls{domain} when it is tighter than the \gls{domain} that can be computed from its defining expression. A binding \gls{domain} is a \gls{constraint} of the overall \gls{model}.},
}
\newglossaryentry{bnb}{
@ -81,10 +81,19 @@
description={The process of \gls{rewriting} a \gls{model} to a \gls{sat} or \gls{maxsat} problem.},
}
\newglossaryentry{bounds}{
name={bounds},
description={The minimum and maximum values in the \domain{} of a \variable{}, \ie{} the boundaries of the \domain{}.},
}
\newglossaryentry{bounds-con}{
name={bounds consistent},
description={A \gls{propagator} is bounds consistent when it reduces the minimum and maximum values of \domains{} that do not occur in a \gls{sol}.},
\newglossaryentry{boundsr-con}{
name={bounds($\mathbb{R}$) consistent},
description={A \gls{propagator} is bounds($\mathbb{R}$) consistent when it tightens the \gls{bounds} such that they could satisfy the \gls{constraint} if rational arithmetic were used.},
}
\newglossaryentry{boundsz-con}{
name={bounds($\mathbb{Z}$) consistent},
description={A \gls{propagator} is bounds($\mathbb{Z}$) consistent when it tightens the \gls{bounds} such that they can satisfy the \gls{constraint}.},
}
\newglossaryentry{gls-cbc}{

View File

@ -682,9 +682,12 @@ The algorithm can require high computational complexity.
Instead, it is sometimes better to use a propagator with a lower level of consistency.
Although it does not eliminate all possible values of the domain, searching the values that are not eliminated may take less time than achieving domain consistency.
This is, for example, the case for integer linear \constraints{}: \[ \sum_{i} c_{i} x_{i} = d\] where \(c_{i}\) and \(d\) are integer \parameters{} and \(x_{i}\) are integer \variable{}.
This is, for example, the case for integer linear \constraints{} \[ \sum_{i} c_{i} x_{i} = d\] where \(c_{i}\) and \(d\) are integer \parameters{} and \(x_{i}\) are integer \variables{}.
For these \constraints{}, a realistic \gls{domain-con} \gls{propagator} cannot exist because the problem is \gls{np}-hard \autocite{choi-2006-fin-cons}.
Instead, \solvers{} generally use a \gls{bounds-con} \gls{propagator}, which guarantees only that the minimum and maximum values in the \glspl{domain} of the \variables{} are used in at least one possible \gls{assignment} that satisfies the \constraint{}.
A more tractable problem is to find the minimum and maximum values, or \gls{bounds}, that the \variables{} could take if they were rational numbers.
A \gls{boundsr-con} \gls{propagator} then ensures that the values in the \domain{} of the integer \variables{} are between their rational \gls{bounds}.
Note that this is a relaxation: computing the exact integer \gls{bounds}, as required for a \gls{boundsz-con} \gls{propagator}, is still \gls{np}-hard.
We will see the same relaxation in mathematical programming, discussed in the next section.
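The difference between the rational relaxation and the exact integer bounds can be made concrete with a small sketch. The following Python fragment (the function name and structure are ours, purely for illustration) performs one round of bounds(\(\mathbb{R}\)) propagation for a linear equality \(\sum_{i} c_{i} x_{i} = d\): each variable's rational bounds follow from interval arithmetic over the remaining terms, after which the integer bounds are rounded inwards.

```python
import math
from fractions import Fraction

def propagate_linear_eq(coeffs, domains, d):
    """One round of bounds(R) propagation for sum(c_i * x_i) == d.

    coeffs:  non-zero integer coefficients c_i
    domains: (lo, hi) integer bounds for each x_i
    d:       integer right-hand side
    """
    new_domains = []
    for j, c_j in enumerate(coeffs):
        # Interval of the remaining sum: sum_{i != j} c_i * x_i
        rest_max = sum(c * (hi if c > 0 else lo)
                       for i, (c, (lo, hi)) in enumerate(zip(coeffs, domains))
                       if i != j)
        rest_min = sum(c * (lo if c > 0 else hi)
                       for i, (c, (lo, hi)) in enumerate(zip(coeffs, domains))
                       if i != j)
        # Rational bounds on x_j = (d - rest) / c_j
        a = Fraction(d - rest_max, c_j)
        b = Fraction(d - rest_min, c_j)
        if a > b:
            a, b = b, a  # dividing by a negative c_j flips the interval
        lo_j, hi_j = domains[j]
        # Round inwards: integer values must lie within the rational bounds.
        new_domains.append((max(lo_j, math.ceil(a)), min(hi_j, math.floor(b))))
    return new_domains
```

For \(2x + 3y = 10\) with \(x, y \in 0..10\) this tightens the domains to \(x \in 0..5\) and \(y \in 0..3\), without attempting the \gls{np}-hard computation of exact integer bounds.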
Thus far, we have only considered finding \glspl{sol} for \glspl{dec-prb}.
\gls{cp} solving can, however, also be used to solve \glspl{opt-prb} using a method called \gls{bnb}.
@ -717,7 +720,7 @@ In general, a linear program can be expressed in the following form.
In this definition \(V\) and \(C\) represent the number of \variables{} and number of \constraints{} respectively.
The vector \(c\) holds the coefficients of the objective function and the matrix \(a\) holds the coefficients for the \constraints{}.
The vectors \(l\) and \(u\) respectively contain the lower and upper bounds of the \constraints{}.
The vectors \(l\) and \(u\) respectively contain the lower and upper \gls{bounds} of the \constraints{}.
Finally, the \variables{} of the linear program are held in the \(x\) vector.
For problems that are in the form of a linear program, there are proven methods to find an \gls{opt-sol}.
@ -739,7 +742,7 @@ For this \variable{} we create two versions of the linear program: a version whe
Both versions are solved to find the best \gls{sol}.
The process is repeated recursively until an integer \gls{sol} is found.
Much of the power of this solving method comes from bounds that are inferred during the process.
Much of the power of this solving method comes from \gls{bounds} that are inferred during the process.
The \gls{sol} to the linear program provides an upper bound for the solution in the current step of the solving process.
Similarly, any integer \gls{sol} found in an earlier branch of the search process provides a lower bound.
When the upper bound given by the linear program is lower than the lower bound from an earlier solution, we know that any integer \gls{sol} following from the linear program is strictly worse than the incumbent.
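This pruning rule can be sketched with a toy maximisation problem. The fragment below (our own illustrative code, not the thesis implementation) solves a 0-1 knapsack by branch and bound, where the fractional relaxation plays the role of the linear program: it yields an upper bound for each branch, while the incumbent integer solution yields a lower bound, and a branch whose upper bound cannot beat the incumbent is discarded.

```python
def knapsack_bnb(values, weights, capacity):
    """Tiny branch and bound for 0-1 knapsack, illustrating LP-style bounding."""
    # Sort items by value density; needed for the fractional bound.
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    best = [0]  # incumbent: best integer solution value found so far

    def fractional_bound(k, value, room):
        # Relaxation bound: fill remaining capacity greedily, allowing a
        # fractional amount of the last item (the "rational" analogue).
        for v, w in items[k:]:
            if w <= room:
                value += v
                room -= w
            else:
                return value + v * room / w
        return value

    def branch(k, value, room):
        if value > best[0]:
            best[0] = value  # new incumbent: a better lower bound
        if k == len(items):
            return
        if fractional_bound(k, value, room) <= best[0]:
            return  # prune: this branch cannot improve on the incumbent
        v, w = items[k]
        if w <= room:
            branch(k + 1, value + v, room - w)  # version with item k taken
        branch(k + 1, value, room)              # version with item k skipped

    branch(0, 0, capacity)
    return best[0]
```

The two recursive calls correspond to the two versions of the program created at each branching step.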
@ -1575,9 +1578,9 @@ Adding \gls{propagation} during \gls{rewriting} means that the system becomes no
If \mzninline{b} takes the value 1, \mzninline{ub(x)*(1-b)} is equal to 0, enforcing the \constraint{} \mzninline{x <= 0}.
\end{example}
For \gls{mip} solvers, it is quite important to enforce tight bounds in order to improve efficiency and sometimes even numerical stability.
It would therefore be useful to rewrite the \mzninline{lq_zero_if_b} predicate only after the \domains{} of the involved \variables{} have been reduced as much as possible, in order to take advantage of the tightest possible bounds.
On the other hand, evaluating a predicate may also impose new bounds on \variables{}, so it is not always clear which order of evaluation is best.
For \gls{mip} solvers, it is quite important to enforce tight \gls{bounds} in order to improve efficiency and sometimes even numerical stability.
It would therefore be useful to rewrite the \mzninline{lq_zero_if_b} predicate only after the \domains{} of the involved \variables{} have been reduced as much as possible, in order to take advantage of the tightest possible \gls{bounds}.
On the other hand, evaluating a predicate may also impose new \gls{bounds} on \variables{}, so it is not always clear which order of evaluation is best.
The same problem occurs with \glspl{reif} that are produced during \gls{rewriting}.
Other \constraints{} could fix the \domain{} of the reified \variable{} and make the \gls{reif} unnecessary.
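The effect of the bound on the linearisation in the example can be checked directly. The following sketch (illustrative Python, mirroring the \mzninline{lq_zero_if_b} example from the text) evaluates the linearised form \mzninline{x <= ub(x)*(1-b)}: with \mzninline{b} fixed to 1 the right-hand side collapses to 0, while a tighter \mzninline{ub(x)} yields a smaller coefficient, which is what benefits \gls{mip} \solvers{}.

```python
def lq_zero_if_b(x, b, ub_x):
    """Linearised half-reified constraint (b = 1 -> x <= 0):
        x <= ub(x) * (1 - b)
    With b = 1 the right-hand side is 0, enforcing x <= 0; with b = 0 the
    right-hand side is ub(x), which never constrains x."""
    return x <= ub_x * (1 - b)
```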

View File

@ -602,7 +602,7 @@ Since a \nanozinc{} program is in fact quite similar to the internal representat
When using \gls{propagation} for \nanozinc{} simplification, we have to carefully consider its effects.
For instance, given the \constraint{} \mzninline{x > y}, with initial \domains{} \mzninline{x in 1..10, y in 1..10}, \gls{propagation} would result in the \domains{} being tightened to \mzninline{x in 2..10, y in 1..9}.
Note, however, that this may now prevent us from removing \mzninline{x} or \mzninline{y}: even if they later become unused, the tightened \domains{} may impose a \constraint{} on their \variables{}.
When the \domain{} of a \variable{} is tighter than the bounds given by its defining expression, the \domain{} are said to be \gls{binding}.
When the \domain{} of a \variable{} is tighter than the \gls{bounds} given by its defining expression, the \domain{} is said to be \gls{binding}.
For instance, if \mzninline{x} is defined as \mzninline{abs(z)}, then any restriction on the \domain{} of \mzninline{x} constrains the possible values of \mzninline{z} through the \mzninline{abs} function.
We therefore need to track whether the \domain{} of a \variable{} is the result of external \constraints{}, or is the consequence of its own definition.
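The tightening in the \mzninline{x > y} example corresponds to a very small bounds \gls{propagator}, sketched here in Python for illustration (the function name is ours):

```python
def propagate_gt(x, y):
    """Bounds propagation for the constraint x > y.

    x and y are (lo, hi) pairs. Every value of x must exceed the smallest
    value of y, and every value of y must stay below the largest value of x."""
    (x_lo, x_hi), (y_lo, y_hi) = x, y
    return (max(x_lo, y_lo + 1), x_hi), (y_lo, min(y_hi, x_hi - 1))
```

Starting from \mzninline{x in 1..10, y in 1..10} this yields exactly the tightened domains \mzninline{x in 2..10, y in 1..9} from the example, and it is these tightened domains that may later act as \gls{binding} \constraints{}.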

View File

The results are grouped based on the size of the instance.
For each group we show the number of instances solved by the configuration and the average time used for this process.
In our first configuration the half-reified \mzninline{all_different} \constraint{} is enforced using a \gls{propagator}.
This \gls{propagator} is an adjusted version from the existing \gls{bounds-con} \mzninline{all_different} \gls{propagator} in \gls{chuffed}.
This \gls{propagator} is an adjusted version of the existing \gls{boundsz-con} \mzninline{all_different} \gls{propagator} in \gls{chuffed}.
The implementation of the \gls{propagator} was already split into parts that check the violation of the \constraint{} and parts that prune the \glspl{domain} of the \variables{}.
Therefore, the transformation described in \cref{sec:half-propagation} can be directly applied.
Since \gls{chuffed} is a \gls{lcg} \solver{}, the explanations created by the \gls{propagator} have to be adjusted as well.

View File

@ -183,7 +183,7 @@ This will activate a different \gls{neighbourhood} for each subsequent \gls{rest
%\paragraph{Adaptive \gls{lns}}
For adaptive \gls{lns}, a simple strategy is to change the size of the \gls{neighbourhood} depending on whether the previous size was successful or not.
\Cref{lst:inc-adaptive} shows an adaptive version of the \mzninline{uniform_neighbourhood} that increases the number of free \variables{} when the previous \gls{restart} failed, and decreases it when it succeeded, within the bounds \([0.6,0.95]\).
\Cref{lst:inc-adaptive} shows an adaptive version of the \mzninline{uniform_neighbourhood} that increases the number of free \variables{} when the previous \gls{restart} failed, and decreases it when it succeeded, within the range \([0.6,0.95]\).
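The adaptation strategy can be summarised by a small update rule, sketched here in Python. The step size of 0.05 and the function name are illustrative assumptions; the actual MiniZinc listing may use a different update, but the clamping to the \([0.6, 0.95]\) range follows the text.

```python
def adapt_rate(rate, prev_restart_failed, step=0.05, low=0.6, high=0.95):
    """Adaptive neighbourhood sizing: move the rate up after a failed
    restart, down after a successful one, clamped to [low, high].
    The step size is an illustrative assumption, not from the source."""
    if prev_restart_failed:
        return min(high, rate + step)
    return max(low, rate - step)
```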
\begin{listing}
\mznfile{assets/listing/inc_adaptive.mzn}