DisChoco offers a library of generators for distributed constraint satisfaction/optimization problems, e.g., random binary DisCSPs using model B, random binary DisCSPs with complex local [r]


More particularly, the purpose of this paper is to propose techniques for improving tree search. The method we privilege here is discrepancy search, an alternative to depth-first search (the principles and references are given in the next section on the scientific background). We then propose to analyze the causes of failures in the search tree and to derive variable weightings for ordering heuristics. In the first part of the paper, we use these techniques for constraint satisfaction problems, in particular randomly generated CSPs and car-sequencing instances. A variant of the seminal limited discrepancy search (LDS) method, named YIELDS, serves as a support for developing the search tree. In the second part, the techniques are adapted for handling combinatorial optimization problems. A climbing discrepancy search (CDS) variant with weighted factors is proposed for jobshop scheduling with time-lags. We selected this particular scheduling problem for its intrinsic genericity, as well as its practical relevance in the process industry.
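As a point of reference for the discussion above, the basic LDS idea can be sketched as follows. This is a minimal illustrative version (not the paper's YIELDS variant): children are assumed ordered best-first by the heuristic, and choosing any child other than the first costs one discrepancy. This simple form re-explores parts of the tree at each budget; improved variants avoid that.

```python
def lds(root, children, is_goal, max_discrepancies):
    # Probe the tree with at most k discrepancies; a discrepancy is
    # taken whenever we pick any child other than the first (best) one.
    def probe(node, k):
        if is_goal(node):
            return node
        for i, child in enumerate(children(node)):
            cost = 0 if i == 0 else 1
            if cost <= k:
                found = probe(child, k - cost)
                if found is not None:
                    return found
        return None

    # Iterate over increasing discrepancy budgets.
    for k in range(max_discrepancies + 1):
        result = probe(root, k)
        if result is not None:
            return result
    return None


# Toy ordered tree: the first child is the heuristic's preferred branch.
tree = {"root": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}
goal = lds("root", lambda n: tree[n], lambda n: n == "d", 2)
```

Here the goal `"d"` is reached only by going against the heuristic once, so it is found at discrepancy budget 1, after the budget-0 probe fails.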


In GEM-MP, we first re-parameterized the factor graph in such a way that the inference task (which could be performed by LBP inference on the original factor graph) is equivalent to a variational EM procedure. We then take advantage of the fact that LBP and variational EM can be viewed in terms of different types of free energy minimization equations. We formulate our message-passing structure as the E and M steps of a variational EM procedure (Beal and Ghahramani, 2003; Neal and Hinton, 1999). This variational formulation leads to the synthesis of new rules that update marginals by maximizing a lower bound of the model evidence, such that we never overshoot the model evidence (answering research question RQ1). In addition, in the corresponding Expectation step of GEM-MP, the constructed expected log marginal-likelihood is defined according to the posterior distribution over local entries of the logical clauses that define factors. This enables us to exploit their logical structure by applying a generalized arc-consistency concept (Rossi et al., 2006), and to use it to perform a variational mean-field approximation when updating the marginals. This significantly improves the smoothing of the marginals, allowing them to converge correctly to a stable fixed point in the presence of determinism (answering research question RQ2). Our experiments on real-world problems demonstrate the increased accuracy and convergence of GEM-MP compared to existing state-of-the-art inference algorithms such as MC-SAT, LBP, and Gibbs sampling, and convergent message-passing algorithms such as the Concave-Convex Procedure (CCCP), Residual BP, and the L2-Convex method.
• Preference Relaxation (PR), a new two-stage strategy that uses the determinism (i.e., hard constraints) present in the underlying model to improve the scalability of relational inference.


AC2001 & AC3.1
Both AC4 and AC6 are fine-grained algorithms. Their disadvantage is that the value-oriented propagation queue is expensive to maintain. Therefore, Bessière and Régin [2001] proposed a coarse-grained algorithm, AC2001, which keeps the optimal time complexity of AC6 by memorizing only the current support for each value in order to avoid redundant constraint checks. The same idea is used in [Zhang, 2001] in an algorithm named AC3.1. AC2001 uses a pointer to store the first support for every value on each constraint. On the one hand, this data structure is easier to implement and maintain than the lists of supported values used in AC6. On the other hand, like those lists, it allows AC2001 to stop the search for supports as soon as possible. The search for a new support for a value on a constraint does not re-check values before the current support, which were previously proved incompatible with the considered value. Although AC2001 has the same asymptotic time and space complexity as AC6, it can provide speed-ups in practical experiments because of its simplicity.
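The pointer idea can be illustrated with a small sketch of AC2001's revise step. This is a simplified version: a faithful implementation resumes the scan strictly after the lost support in a fixed value ordering to preserve the optimal complexity bound, whereas this sketch rescans the current domain.

```python
def revise2001(x, y, domains, allowed, last):
    # Prune values of x with no support on constraint (x, y), reusing
    # the stored support pointer last[(x, a, y)] when it is still valid.
    removed = False
    for a in sorted(domains[x]):
        b = last.get((x, a, y))
        if b is not None and b in domains[y]:
            continue  # stored support still valid: no constraint check
        for b in sorted(domains[y]):
            if (a, b) in allowed:
                last[(x, a, y)] = b  # remember the new support
                break
        else:
            domains[x].discard(a)  # no support left: a is inconsistent
            removed = True
    return removed


domains = {"x": {0, 1}, "y": {1}}
allowed = {(0, 1)}  # the only compatible pair on constraint (x, y)
last = {}
changed = revise2001("x", "y", domains, allowed, last)
```

After the call, value 1 has been pruned from x (it has no support in y), while value 0 keeps its stored support.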


Some problems cannot be solved by any algorithm. This is the case, for instance, for the Halting Problem (Turing, 1936). Other problems can be solved by some algorithm, but the time required to do so would be so long (millions of years for the current best computers) that it is often not worth the effort. One such example is the problem of deciding whether or not two different regular expressions represent the same language (Meyer & Stockmeyer, 1972). Fortunately, a lot of problems, including most problems which arise in everyday life, can be solved in a relatively affordable time. They are in NP, the set of problems solvable in polynomial time by a non-deterministic Turing machine. Most work in computer science, or at least most work with the intent of solving problems, focuses on problems from NP. If P ≠ NP, then NP-Complete problems, the hardest problems in NP, are not solvable in polynomial time, and may require algorithms with an exponential running time to be solved completely. However, by focusing on a single NP-Complete problem, one can still manage to find polynomial-time algorithms which give interesting results. This can be done by focusing only on a subset of all possible instances of the problem; if successful, this approach leads to a tractable class. It can also be done by approximating, that is, finding a solution to the problem which is non-optimal but close enough to be useful; one such work is (Raghavendra, 2008). Additionally, one can use randomized algorithms. Such algorithms can give an optimal solution in polynomial time for any instance, but only with probability 1 − ε, with ε very small (Motwani & Raghavan, 1995).
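A classic concrete example of such a randomized algorithm (chosen here for illustration; it is not discussed in the text above) is Freivalds' matrix-product verification: each trial errs with probability at most 1/2, so the failure probability ε shrinks to 2^-trials.

```python
import random

def freivalds(A, B, C, trials=20, seed=0):
    # Monte Carlo check that A * B == C. A correct C is always accepted;
    # an incorrect C survives each trial with probability at most 1/2,
    # so the overall error probability is at most 2**-trials.
    rng = random.Random(seed)
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [rng.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # a witness vector proves C is wrong
    return True


A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]      # the true product A * B
wrong = [[19, 22], [43, 51]]  # off by one in the last entry
```

Each trial costs only O(n^2) multiplications, versus O(n^3) for naively recomputing the product.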


characterizing each CSP, and the selection algorithms used for deciding the solver(s) to run on a given CSP.
A. Dataset, Solvers and Features
In order to build and test a good portfolio approach it is fundamental to gather an adequate dataset of CSPs. The data sample should capture a significant variety of problems encoded in the same language. Although the CP community has not yet agreed on a standard modelling language, MiniZinc [33] is probably the most used and supported language for modelling CP problems. However, the biggest existing dataset of CSPs we are aware of is the one used in the 2008 International Constraint Solver Competition (ICSC) [49]. These instances are encoded in the XML-based language XCSP [41]. In [2] an empirical evaluation on such a dataset was conducted. Here we take a step forward by exploiting the xcsp2mzn [3] compiler we developed for converting XCSP to MiniZinc. This allowed us to use a bigger benchmark of 8600 CSPs: 6944 instances of ICSC converted by xcsp2mzn, and 1656 native MiniZinc instances coming from the MiniZinc 1.6 benchmarks and the MiniZinc Challenge 2012.

original goal of explaining the performance of CSP and SAT solvers, and have attracted significant attention in the following years [125, 56, 72].
When approaching strong backdoors, one must make a clear distinction between the solving phase, where the backdoor is known, and the computation of the backdoor. If a nontrivial backdoor B is already known, then regardless of its size it is useful information for a constraint solver, which can invoke a dedicated algorithm whenever the backtracking search procedure has assigned every variable in B (we omit small additional technical requirements here; variables outside B may have been assigned as well, and the instance may have been reduced by auxiliary inference, such as consistency algorithms). In fact, if the backdoor size k is quite large, this approach is likely to be more efficient in practice than a straight decomposition of the instance into |D|^k subproblems. For example, if a CSP instance has 150 variables, a backdoor of size 40 provides an uncompetitive decomposition but can still allow a general backtracking algorithm to prune a large number of branches, even if the backdoor is not used to guide the branching heuristic. As a direct consequence, improving the worst-case complexity of the solving phase is important but not critical for the usefulness of the framework.
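The decomposition view of the solving phase can be sketched as follows. This is a minimal illustration under assumptions of our own (the subsolver below is a brute-force stand-in for the dedicated, ideally polynomial-time, algorithm invoked once the backdoor is assigned).

```python
from itertools import product

def solve_with_backdoor(variables, domains, constraints, backdoor, subsolver):
    # Branch only on the backdoor variables; once they are all set,
    # hand the simplified instance to the subsolver. Worst case:
    # |D|**k subsolver calls, where k = len(backdoor).
    rest = [v for v in variables if v not in backdoor]
    for values in product(*(domains[v] for v in backdoor)):
        partial = dict(zip(backdoor, values))
        solution = subsolver(partial, rest, domains, constraints)
        if solution is not None:
            return solution
    return None


def brute_subsolver(partial, rest, domains, constraints):
    # Stand-in for a dedicated algorithm on the residual instance.
    for values in product(*(domains[v] for v in rest)):
        full = {**partial, **dict(zip(rest, values))}
        if all(c(full) for c in constraints):
            return full
    return None


# Triangle colouring: pairwise-different over {0, 1, 2}, backdoor {x}.
constraints = [lambda a: a["x"] != a["y"],
               lambda a: a["y"] != a["z"],
               lambda a: a["x"] != a["z"]]
domains = {"x": [0, 1, 2], "y": [0, 1, 2], "z": [0, 1, 2]}
sol = solve_with_backdoor(["x", "y", "z"], domains, constraints,
                          ["x"], brute_subsolver)
```

Only |D|^k = 3 outer branches are explored here, rather than the full 27 assignments.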


a constraint graph in the form of a tree provides no more information than we would obtain by applying arc consistency to the original instance I.
Many tractable classes of CSP are automatically solved in polynomial time by any algorithm which maintains (generalised) arc consistency during search: we can notably cite the class of instances with max-closed constraints [113], the class of instances whose constraints are max-closed after independent permutations of each domain [93] and the class of binary instances satisfying the broken-triangle property [58]. Similarly, Valued CSPs with submodular constraints are automatically solved by establishing OSAC (Optimal Soft Arc Consistency) [57]. Present-day solvers do not explicitly look for tractable classes, but by analysis of the algorithms they use it is sometimes possible to show that they automatically solve certain tractable classes. For instance, translating CSP instances with max-closed constraints [113] or CSP instances with connected row-convex constraints [75] into SAT instances using the order encoding produces instances that fall into known tractable classes of SAT which are solved efficiently by modern clause-learning SAT-solvers [145, 109]. Tractable classes that are automatically solved by standard algorithms are nevertheless useful, since proving that the solver will always execute in polynomial time in a given application provides a potentially important guarantee of efficiency.
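For concreteness, the max-closedness condition on a relation (given here as a direct check over an extensional relation, an illustration of the definition rather than anything a solver would run) can be stated as:

```python
def is_max_closed(relation):
    # A relation is max-closed when, for every pair of allowed tuples,
    # their pointwise maximum is also an allowed tuple.
    tuples = set(relation)
    return all(
        tuple(max(a, b) for a, b in zip(t1, t2)) in tuples
        for t1 in tuples for t2 in tuples
    )


closed = is_max_closed({(0, 0), (0, 1), (1, 1)})
not_closed = is_max_closed({(0, 1), (1, 0)})  # max would be (1, 1)
```

The second relation fails because the pointwise maximum of (0, 1) and (1, 0) is (1, 1), which is not allowed.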

All three algorithms that we present in this work follow the same broad outline, while the details are different in each case. To produce an assignment that beats a random assignment, the idea is to partition the variables into two sets (F, G), with F standing for ‘Fixed’ and G standing for ‘Greedy’ (in Section 4, these correspond to [n] \ U and U respectively). The variables in F are assigned independent and uniform random bits, and the variables in G are assigned values greedily based on the values already assigned to F. We will refer to constraints with exactly one variable from G as active constraints. The design of the greedy assignments and their analysis is driven by two key objectives.
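The broad outline above can be sketched generically. This is an illustrative scheme of our own, not the paper's three specific algorithms: F-variables get uniform random bits, then each G-variable takes the bit satisfying the most constraints among those already fully assigned.

```python
import random

def count_satisfied(clauses, assignment):
    # Count constraints whose variables are all assigned and which hold.
    return sum(pred(*(assignment[v] for v in scope))
               for scope, pred in clauses
               if all(v in assignment for v in scope))

def fixed_greedy(n, clauses, G, seed=0):
    rng = random.Random(seed)
    G = sorted(G)
    # F-variables: independent uniform random bits.
    assignment = {v: rng.randint(0, 1) for v in range(n) if v not in set(G)}
    # G-variables: greedily pick the bit satisfying the most constraints
    # among those fully assigned so far.
    for v in G:
        assignment[v] = max(
            (0, 1),
            key=lambda b: count_satisfied(clauses, {**assignment, v: b}))
    return assignment


# Toy instance: XOR/EQ constraints over 4 Boolean variables, G = {2, 3}.
clauses = [((0, 2), lambda a, b: a != b),
           ((1, 3), lambda a, b: a != b),
           ((2, 3), lambda a, b: a == b)]
result = fixed_greedy(4, clauses, G={2, 3}, seed=1)
```

In this instance the greedy pass is guaranteed at least 2 of the 3 constraints regardless of the random bits: variable 2 can always satisfy its active constraint, and variable 3 can always satisfy at least one of its two.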

7.2 Limits and Constraints
It would be strange to say that our improvements of counting algorithms in Section 3 have limitations. We proposed better algorithms in both theory and practice; they have better asymptotic complexities and are faster in our benchmarks on hard instances. One could argue that our approach for avoiding systematic recomputation has the limitation of lower precision, but we see it more as a trade-off between speed and precision. In my opinion, limitations arise when considering practical matters, which stem from the choice of a paradigm — counting-based search — and its actual implementation in Gecode. While generic and powerful, using counting-based search limits the constraints we can use for modelling in practice. In other words, the problems we can solve are limited by the constraints we support (using CBS with some uninstrumented constraints is possible, but it is less effective because we miss solution densities). Furthermore, supporting a new constraint is difficult: we have to design a new counting algorithm and provide an efficient implementation. As CP solvers support many different constraints, designing counting algorithms is a long-term project. This is the price we must pay for a family of generic branching heuristics that performs well and adapts to the problem formulation.
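To make the notion of solution density concrete, here is a naive sketch where exhaustive enumeration stands in for the dedicated per-constraint counting algorithms the text describes (which is precisely what makes each new constraint costly to support):

```python
from itertools import product

def solution_densities(domains, constraint):
    # sigma(x, v): the fraction of the constraint's solutions in which
    # variable x takes value v. Counting-based search branches on the
    # (x, v) pair with the highest density.
    variables = list(domains)
    solutions = []
    for combo in product(*(domains[v] for v in variables)):
        if constraint(dict(zip(variables, combo))):
            solutions.append(combo)
    total = len(solutions)
    return {(x, v): sum(1 for s in solutions if s[i] == v) / total
            for i, x in enumerate(variables) for v in domains[x]}


# alldifferent over three variables with domain {0, 1, 2}: 6 solutions,
# each (variable, value) pair appearing in exactly 2 of them.
sigma = solution_densities({"x": [0, 1, 2], "y": [0, 1, 2], "z": [0, 1, 2]},
                           lambda a: len(set(a.values())) == 3)
```

For the symmetric alldifferent instance every density is 1/3; on asymmetric constraints the densities differ and guide branching.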

One important class of valued constraint languages are the submodular languages [45]. It is known that VCSP instances where all constraints are submodular can be solved in polynomial time, although the algorithms that have been proposed to achieve this in the general case are rather intricate and difficult to implement [27, 44]. In the special case of binary submodular constraints, a much simpler algorithm can be used to find a minimising assignment of values, based on a standard max-flow algorithm [15]. Our results in this paper show that this simpler algorithm can be used to obtain exact solutions to arbitrary VCSP instances with submodular constraints (from a finite language) in polynomial time.
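For reference, the defining inequality for a binary submodular cost function on a totally ordered domain can be checked directly (an illustration of the definition, not part of the solving algorithm):

```python
def is_submodular(cost, domain):
    # cost(pointwise min) + cost(pointwise max) <= cost(t1) + cost(t2)
    # must hold for every pair of tuples t1 = (u1, v1), t2 = (u2, v2).
    return all(
        cost(min(u1, u2), min(v1, v2)) + cost(max(u1, u2), max(v1, v2))
        <= cost(u1, v1) + cost(u2, v2)
        for u1 in domain for v1 in domain
        for u2 in domain for v2 in domain
    )


D = range(4)
convex_diff = is_submodular(lambda u, v: abs(u - v), D)  # submodular
neg_diff = is_submodular(lambda u, v: -abs(u - v), D)    # not submodular
```

Convex functions of u − v, such as |u − v|, are submodular, while their negations are not; the check confirms both.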

7 Conclusion
We have analysed the problems that arise in applications that require the interactive resolution of a constraint problem by a human user. The central notion is global inverse consistency of the network, because it ensures that the person who interactively solves the problem is not given the choice to select values that do not lead to solutions. We have shown that deciding, computing, or restoring global inverse consistency, and other related problems, are all NP-hard. We have proposed several algorithms for enforcing global inverse consistency and we have shown that the best version is efficient enough to be used in an interactive setting on several configuration and design problems. This is a great advantage compared to existing techniques usually used in configurators. As opposed to techniques maintaining arc consistency, our algorithms give an exact picture of the values remaining feasible. As opposed to compiling offline the problem as a multi-valued decision diagram, our algorithms can deal with constraint networks that change over time (e.g., an extra non-unary constraint posted by a customer who does not want to buy a car with more than 100,000 miles except if it is a Volvo). We have finally extended our contribution to the inverse consistency of tuples, which is useful at the modelling phase of configuration problems.
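The semantics of global inverse consistency can be stated as a brute-force sketch: keep a value only if it participates in some full solution. This naive enumeration conveys the definition; the paper's algorithms are, of course, far more refined.

```python
from itertools import product

def enforce_gic(domains, constraints):
    # Keep value v for variable x only if some complete solution
    # assigns x = v. Exponential brute force, for illustration only.
    variables = list(domains)
    solutions = []
    for combo in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, combo))
        if all(c(assignment) for c in constraints):
            solutions.append(assignment)
    return {x: sorted({s[x] for s in solutions}) for x in variables}


# x, y in {0, 1, 2} with x < y: value 2 for x and 0 for y extend to
# no solution, so GIC removes them.
gic = enforce_gic({"x": [0, 1, 2], "y": [0, 1, 2]},
                  [lambda a: a["x"] < a["y"]])
```

After enforcement, every remaining value is guaranteed to extend to a full solution, which is exactly the property an interactive configurator needs.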

Univ de Toulouse, LAAS, F-31400 Toulouse, France
Abstract
In the car-sequencing problem, a number of cars have to be sequenced on an assembly line respecting several constraints. This problem has been addressed by both the Operations Research (OR) and Constraint Programming (CP) communities, either as a decision problem or as an optimization problem. In this paper, we consider the decision variant of the car-sequencing problem and we propose a systematic way to classify heuristics for solving it. This classification is based on a set of four criteria, and we consider all relevant combinations of these criteria. Some combinations correspond to common heuristics used in the past, whereas many others are novel. Not surprisingly, our empirical evaluation confirms earlier findings that specific heuristics are very important for efficiently solving the car-sequencing problem (see for instance [17]); in fact, they are often as important as or more important than the propagation method. Moreover, through a criteria analysis, we are able to get several new insights into what makes a good heuristic for this problem. In particular, we show that the criterion used to select the most constrained option is critical, and the best choice is fairly reliably the “load” of an option. Similarly, branching on the type of vehicle is more efficient than branching on the use of an option. Overall, we can therefore indicate with relatively high confidence which is the most robust strategy, or at least outline a small set of potentially best strategies.
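One common definition of an option's "load" (its usage rate; an assumption here, the paper may normalize differently) compares the slots the option's demand would occupy at full capacity usage with the total number of slots:

```python
def option_loads(demand, capacity, n_slots):
    # Option j with capacity p_j/q_j may appear on at most p_j cars in
    # any q_j consecutive slots. A usage-rate style load:
    #   load_j = d_j * q_j / (p_j * n)
    # A load near 1 means the option's capacity is nearly saturated.
    return {j: demand[j] * q / (p * n_slots)
            for j, (p, q) in capacity.items()}


# 12-slot line; option "sunroof" on 6 cars, at most 1 in any 2 slots.
loads = option_loads({"sunroof": 6}, {"sunroof": (1, 2)}, 12)
```

Here the sunroof option is fully loaded (load 1.0): its 6 cars must be spread one-in-two across all 12 slots, leaving no slack, which is why load is such an informative constrainedness measure.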

2 Solution quality evaluation
When satisfaction problems are considered, the definition and the evaluation of a portfolio solver is straightforward. Indeed, the outcome of a solver run for a given time on a given instance can be either 'solved' (i.e., a solution is found or unsatisfiability is proven) or 'not solved' (i.e., the solver does not say anything about the problem). Building and evaluating a CSP solver is then conceptually easy: the goal is to maximize the number of solved instances, solving them as fast as possible. Unfortunately, in the COP world the solved/not solved dichotomy is no longer suitable. A COP solver can in fact provide sub-optimal solutions, or even give the optimal one without being able to prove its optimality. Moreover, in order to speed up the search, COP solvers can be executed in a non-independent way. Indeed, knowledge of a sub-optimal solution can be used by a solver to further prune its search space, and therefore to speed up the search process. Thus, the independent (even parallel) execution of a sequence of solvers may differ from a “cooperative” execution where the best solution found by a given solver is used as a lower bound by the solvers that are launched afterwards.
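The cooperative schedule can be sketched as follows. The solver interface below is hypothetical (a function taking the instance, a time budget, and the incumbent bound, returning a value and an optimality flag); real portfolio frameworks wrap actual COP solvers this way.

```python
def run_portfolio(solvers, instance, budget_each):
    # Sequential "cooperative" execution for minimization: each solver
    # receives the best objective value found so far and can use it to
    # prune its search. Stops early once optimality is proved.
    best = None
    for solver in solvers:
        value, proved_optimal = solver(instance, budget_each, best)
        if value is not None and (best is None or value < best):
            best = value
        if proved_optimal:
            return best, True
    return best, False


# Dummy solvers standing in for real COP solvers.
s1 = lambda inst, t, bound: (10, False)          # finds a first solution
s2 = lambda inst, t, bound: (bound - 2, False)   # improves on the bound
s3 = lambda inst, t, bound: (bound, True)        # proves optimality
result = run_portfolio([s1, s2, s3], None, 60)
```

The key design point is exactly the one made above: because s2 and s3 see the incumbent bound, the outcome differs from running the three solvers independently.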

Other approaches try to use qualitative reasoning systems instead of the low-level detailed representation to capture the functional behavior of design [For88]. Such [r]


4. CSP Resolution Theories
Before we try to capture CSP Resolution Theories in a logical formalism, we must establish a clear distinction between a logical theory of the CSP itself (as it has been formulated in chapter 3, with no reference to candidates) and theories related to the resolution methods (which we consider from now on as being based on the progressive elimination of candidates). These two kinds of theories correspond to two options: are we just interested in formulating a set of axioms describing the constraints a solution of a given CSP instance (if it has any) must satisfy, or do we want a theory that somehow applies to intermediate states in the resolution process? To maintain this distinction as clearly as possible, we shall consistently use the expression “CSP Theory” for the first type and “CSP Resolution Theory” for the second type. Section 4.1 elaborates on this distinction. Since it has been shown in chapter 3 that formulating the first theory is straightforward, theories of the second kind will remain our main topic of interest in the present book. Nevertheless, it will be necessary to clarify the relationship between the two types of theories and between their respective basic notions (“value” and “candidate”).


For nonbipartite matching, we obtain a simple, purely algebraic algorithm with running time O(n^ω), where n is the number of vertices and ω is the matrix multiplication exponent. This reso[r]