
In the document Springer-Verlag Berlin Heidelberg GmbH (Pages 148-152)

16 Maximum Satisfiability


where $OPT_f$ is the value of an optimal solution to LP (16.2). Clearly, $OPT_f \geq OPT$. This algorithm can also be derandomized using the method of conditional expectation (Exercise 16.3). Hence, for MAX-SAT instances with clause sizes at most k, it is a

$$1 - \left(1 - \frac{1}{k}\right)^k$$

factor approximation algorithm. Since

$$\forall k \in \mathbb{Z}^+: \quad 1 - \left(1 - \frac{1}{k}\right)^k \geq 1 - \frac{1}{e},$$

this is a 1 - 1/e factor algorithm for MAX-SAT.
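A quick numeric check (illustrative, not from the text) of this bound: the guarantee $1 - (1 - 1/k)^k$ decreases with the clause size k but never drops below $1 - 1/e$.

```python
import math

def beta(k: int) -> float:
    """Guarantee 1 - (1 - 1/k)^k for clauses of size k."""
    return 1.0 - (1.0 - 1.0 / k) ** k

for k in range(1, 11):
    # every value stays at or above the limit 1 - 1/e
    assert beta(k) >= 1.0 - 1.0 / math.e
    print(f"k={k:2d}  1-(1-1/k)^k = {beta(k):.4f}")

print(f"limit 1-1/e = {1 - 1/math.e:.4f}")
```

For k = 1 the value is 1, for k = 2 it is 3/4, and it decreases toward $1 - 1/e \approx 0.632$ from above.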

16.4 A 3/4 factor algorithm

We will combine the two algorithms as follows. Let b be the flip of a fair coin. If b = 0, run the first randomized algorithm, and if b = 1, run the second randomized algorithm.

Remark 16.6 Notice that we are effectively setting $x_i$ to True with probability $\frac{1}{4} + \frac{1}{2} y_i^*$; however, the $x_i$'s are not set independently!
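As an illustrative sketch (not the book's code), assuming a hypothetical encoding in which a clause is a list of signed integers (+i for $x_i$, -i for its negation) and the optimal LP values $y^*$ are supplied by an external LP solver, the combined randomized algorithm is:

```python
import random

def combined_random_assignment(n, y_star):
    """One run of the combined algorithm: flip a fair coin, then
    run either the factor 1/2 or the LP-rounding algorithm."""
    b = random.randint(0, 1)          # flip of a fair coin
    if b == 0:
        # first algorithm: each x_i True independently with probability 1/2
        return [random.random() < 0.5 for _ in range(n)]
    # second algorithm: each x_i True independently with probability y_i*
    return [random.random() < y_star[i] for i in range(n)]

def satisfied_weight(clauses, weights, assignment):
    """Total weight of clauses satisfied by a truth assignment."""
    total = 0.0
    for clause, w in zip(clauses, weights):
        if any((lit > 0) == assignment[abs(lit) - 1] for lit in clause):
            total += w
    return total
```

Overall each $x_i$ ends up True with probability $\frac14 + \frac12 y_i^*$, but, as Remark 16.6 notes, the variables are not set independently across the coin flip.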

Let (y*, z*) be an optimal solution to LP (16.2) on the given instance.

Lemma 16.7 $E[W_c] \geq \frac{3}{4} w_c z_c^*$.

Proof: Let size(c) = k. By Lemma 16.2,

$$E[W_c \mid b = 0] = \alpha_k w_c \geq \alpha_k w_c z_c^*,$$

where we have used the fact that $z_c^* \leq 1$. By Lemma 16.5,

$$E[W_c \mid b = 1] \geq \beta_k w_c z_c^*.$$

Combining we get

$$E[W_c] = \frac{1}{2}\left(E[W_c \mid b = 0] + E[W_c \mid b = 1]\right) \geq \frac{1}{2} w_c z_c^* \left(\alpha_k + \beta_k\right).$$

Now, $\alpha_1 + \beta_1 = \alpha_2 + \beta_2 = 3/2$, and for $k \geq 3$,

$$\alpha_k + \beta_k \geq \frac{7}{8} + \left(1 - \frac{1}{e}\right) \geq \frac{3}{2}.$$

The lemma follows. □

By linearity of expectation,

$$E[W] = \sum_c E[W_c] \geq \frac{3}{4} \sum_c w_c z_c^* = \frac{3}{4}\, OPT_f, \qquad (16.3)$$

where $OPT_f$ is the optimal value of LP (16.2). Finally, consider the following deterministic algorithm.


Algorithm 16.8 (MAX-SAT - factor 3/4)

1. Use the derandomized factor 1/2 algorithm to get a truth assignment, $\tau_1$.
2. Use the derandomized factor 1 - 1/e algorithm to get a truth assignment, $\tau_2$.
3. Output the better of the two assignments.
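A sketch of Algorithm 16.8 (assumptions: clauses encoded as signed-integer lists, +i for $x_i$ and -i for its negation, and the optimal LP values $y^*$ supplied externally, since solving LP (16.2) is omitted here). The function `derandomize` implements the method of conditional expectation, which works because the expected satisfied weight under independent biases has an exact closed form.

```python
def expected_weight(clauses, weights, probs, fixed):
    """E[satisfied weight] when each unset x_i is True w.p. probs[i],
    given the partial assignment in `fixed` (None = unset)."""
    total = 0.0
    for clause, w in zip(clauses, weights):
        p_all_false = 1.0
        satisfied = False
        for lit in clause:
            i = abs(lit) - 1
            want = lit > 0                       # value making the literal true
            if fixed[i] is not None:
                if fixed[i] == want:
                    satisfied = True
                    break
                # literal definitely false: contributes factor 1
            else:
                p_true = probs[i] if want else 1.0 - probs[i]
                p_all_false *= 1.0 - p_true
        total += w if satisfied else w * (1.0 - p_all_false)
    return total

def derandomize(clauses, weights, probs, n):
    """Method of conditional expectation: fix each variable to the
    value that does not decrease the conditional expectation."""
    fixed = [None] * n
    for i in range(n):
        choices = []
        for v in (True, False):
            fixed[i] = v
            choices.append((expected_weight(clauses, weights, probs, fixed), v))
        fixed[i] = max(choices)[1]
    return fixed

def max_sat_three_quarters(clauses, weights, n, y_star):
    tau1 = derandomize(clauses, weights, [0.5] * n, n)   # factor 1/2 algorithm
    tau2 = derandomize(clauses, weights, y_star, n)      # factor 1-1/e algorithm
    def weight(tau):
        return sum(w for c, w in zip(clauses, weights)
                   if any((l > 0) == tau[abs(l) - 1] for l in c))
    return max(tau1, tau2, key=weight)                   # better of the two
```

By Theorem 16.9, the returned assignment satisfies clauses of total weight at least $\frac34 OPT$.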

Theorem 16.9 Algorithm 16.8 is a deterministic factor 3/4 approximation algorithm for MAX-SAT.

Proof: By Lemma 16.7, the average of the weights of satisfied clauses under $\tau_1$ and $\tau_2$ is $\geq \frac{3}{4} OPT$. Hence the better of these two assignments also does at least as well. □

By (16.3), $E[W] \geq \frac{3}{4} OPT_f$. The weight of the integral solution produced by Algorithm 16.8 is at least $E[W]$. Therefore, the integrality gap of LP (16.2) is $\geq 3/4$. Below we show that this bound is tight.

Example 16.10 Consider the SAT formula

$$f = (x_1 \vee x_2) \wedge (x_1 \vee \overline{x}_2) \wedge (\overline{x}_1 \vee x_2) \wedge (\overline{x}_1 \vee \overline{x}_2),$$

where each clause is of unit weight. It is easy to see that setting $y_i = 1/2$ and $z_c = 1$ for all i and c is an optimal solution to LP (16.2) for any instance having size 2 clauses. Therefore $OPT_f = 4$. On the other hand, OPT = 3, and thus for this instance LP (16.2) has an integrality gap of 3/4. □
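The claim that OPT = 3 can be checked by brute force over the four truth assignments:

```python
from itertools import product

# Brute-force check of Example 16.10: no assignment satisfies all four
# clauses, and the best satisfies exactly three.
clauses = [(1, 2), (1, -2), (-1, 2), (-1, -2)]   # +i = x_i, -i = its negation

best = 0
for x1, x2 in product([True, False], repeat=2):
    assign = {1: x1, 2: x2}
    sat = sum(1 for c in clauses
              if any(assign[abs(l)] == (l > 0) for l in c))
    best = max(best, sat)

print(best)   # 3
```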

Example 16.11 Let us provide a tight example to Algorithm 16.8. Let

$$f = (x \vee y) \wedge (x \vee \overline{y}) \wedge (\overline{x} \vee z),$$

and let the weights of these three clauses be 1, 1, and $2 + \varepsilon$, respectively. By the remark made in Example 16.10, on this instance the factor 1 - 1/e algorithm will set each variable to True with probability 1/2, and so will be the same as the factor 1/2 algorithm. During derandomization, suppose variable x is set first. The conditional expectations are $E[W \mid x = \text{True}] = 3 + \varepsilon/2$ and $E[W \mid x = \text{False}] = 3 + \varepsilon$. Thus, x will be set to False. But this leads to a total weight of $3 + \varepsilon$, whereas by setting x to True we can get a weight of $4 + \varepsilon$. Clearly, we can get an infinite family of such examples by replicating these 3 clauses with new variables. □
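The two conditional expectations can be verified by direct calculation, here with a concrete value $\varepsilon = 0.01$ chosen for illustration:

```python
# Each unset variable is True with probability 1/2.
eps = 0.01
w1, w2, w3 = 1.0, 1.0, 2.0 + eps   # weights of (x|y), (x|~y), (~x|z)

# x = True: the first two clauses are satisfied outright;
# the third is satisfied iff z is True, which has probability 1/2.
e_true = w1 + w2 + w3 * 0.5

# x = False: the third clause is satisfied outright;
# exactly one of the first two is satisfied, depending on y.
e_false = w1 * 0.5 + w2 * 0.5 + w3

print(e_true, e_false)   # 3 + eps/2  vs  3 + eps, so x is set to False
```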

16.5 Exercises

16.1 The algorithm of Section 16.1 achieves an approximation guarantee of $\alpha_k$ if all clauses in the given instance have size at least k. Give a tight example of factor $\alpha_k$ for this algorithm.

16.2 Show that the following is a factor 1/2 algorithm for MAX-SAT. Let $\tau$ be an arbitrary truth assignment and $\tau'$ be its complement, i.e., a variable is True in $\tau$ iff it is False in $\tau'$. Compute the weight of clauses satisfied by $\tau$ and $\tau'$, then output the better assignment.
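A small sanity check of the fact behind this exercise: every clause is satisfied by $\tau$ or by its complement, so the two satisfied weights sum to at least the total weight, and the better assignment gets at least half. The signed-literal encoding and the instance below are arbitrary choices for illustration.

```python
def sat_weight(clauses, weights, tau):
    """Weight of clauses satisfied by assignment tau (signed-literal encoding)."""
    return sum(w for c, w in zip(clauses, weights)
               if any((l > 0) == tau[abs(l) - 1] for l in c))

clauses = [[1, -2], [2, 3], [-1, -3], [-2]]
weights = [2.0, 1.0, 3.0, 1.5]

tau = [True, True, False]                 # an arbitrary assignment
tau_c = [not v for v in tau]              # its complement

total = sum(weights)
# each clause has some literal true under tau or under tau_c
assert sat_weight(clauses, weights, tau) + sat_weight(clauses, weights, tau_c) >= total
best = max(sat_weight(clauses, weights, tau), sat_weight(clauses, weights, tau_c))
assert best >= total / 2
```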

16.3 Use the method of conditional expectation to derandomize the 1-1/e factor algorithm for MAX-SAT.

16.4 Observe that the randomization used in the 3/4 factor algorithm does not set Boolean variables independently of each other. As remarked in Section 16.2, the algorithm can still, in principle, be derandomized using the method of conditional expectation. Devise a way of doing so. Observe that the algorithm obtained is different from Algorithm 16.8.

16.5 (Goemans and Williamson [110]) Instead of using the solution to LP (16.2), $y_i^*$, as probability of setting $x_i$ to True, consider the more general scheme of using $g(y_i^*)$, for a suitable function g. Can this lead to an improvement over the factor 1 - 1/e algorithm?

16.6 Consider the following randomized algorithm for the maximum cut problem, defined in Exercise 2.1. After the initialization step of Algorithm 2.13, each of the remaining vertices is equally likely to go in sets A or B.

Show that the expected size of the cut found is at least OPT /2. Show that the derandomization of this algorithm via the method of conditional expectation is precisely Algorithm 2.13.
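An empirical illustration of the first claim (this is not Algorithm 2.13 itself; the initialization step is omitted): under a uniformly random partition each edge crosses the cut with probability 1/2, so the average cut size over many trials concentrates near $|E|/2 \geq OPT/2$.

```python
import random

def random_cut(n, edges, rng):
    """Size of the cut when each vertex joins A or B uniformly at random."""
    side = [rng.randint(0, 1) for _ in range(n)]
    return sum(1 for u, v in edges if side[u] != side[v])

rng = random.Random(0)                     # fixed seed for reproducibility
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
samples = 20000
avg = sum(random_cut(4, edges, rng) for _ in range(samples)) / samples
print(avg)   # concentrates near len(edges) / 2 = 2.5
```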

16.7 Consider the following generalization of the maximum cut problem.

Problem 16.12 (Linear equations over GF[2]) Given m equations over n GF[2] variables, find an assignment for the variables that maximizes the number of satisfied equations.

1. Show that if $m \leq n$, this problem is polynomial time solvable.

2. In general, the problem is NP-hard. Give a factor 1/2 randomized algorithm for it, and derandomize using the method of conditional expectation.
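A sketch of the derandomization asked for in part 2, under two assumptions made for illustration: each equation involves at least one variable (so a uniformly random assignment satisfies it with probability exactly 1/2), and an equation is encoded as a pair (index set, right-hand side) meaning that the sum of those variables equals the right-hand side mod 2.

```python
def expected_satisfied(equations, fixed):
    """E[# satisfied equations] with unset variables uniform over GF[2]."""
    exp = 0.0
    for vars_, rhs in equations:
        if any(fixed[i] is None for i in vars_):
            exp += 0.5          # one uniform bit makes the parity unbiased
        else:
            exp += 1.0 if sum(fixed[i] for i in vars_) % 2 == rhs else 0.0
    return exp

def derandomized_half(equations, n):
    """Method of conditional expectation: fix variables one by one,
    never letting the conditional expectation drop below m/2."""
    fixed = [None] * n
    for i in range(n):
        fixed[i] = 0
        e0 = expected_satisfied(equations, fixed)
        fixed[i] = 1
        e1 = expected_satisfied(equations, fixed)
        fixed[i] = 0 if e0 >= e1 else 1
    return fixed
```

Since the initial expectation is exactly m/2 and each step never decreases it, the final assignment satisfies at least half the equations.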

16.8 Consider the obvious randomized algorithm for the MAX k-CUT problem, Problem 2.14 in Exercise 2.3, which assigns each vertex randomly to one of the sets $S_1, \ldots, S_k$. Show that the expected number of edges running between these sets is at least OPT/2. Show that the derandomization of this algorithm, via the method of conditional expectation, gives the greedy algorithm sought in Exercise 2.3.

16.9 Repeat Exercise 16.8 for the maximum directed cut problem, Problem 2.15 in Exercise 2.4, i.e., give a factor 1/4 randomized algorithm, and show that its derandomization gives a greedy algorithm.


16.6 Notes

The factor 1/2 algorithm, which was also the first approximation algorithm for MAX-SAT, is due to Johnson [157]. The first factor 3/4 algorithm was due to Yannakakis [270]. The (simpler) algorithm given here is due to Goemans and Williamson [110]. The method of conditional expectation is implicit in Erdős and Selfridge [80]. Its use for obtaining polynomial time algorithms was pointed out by Spencer [251] (see Raghavan [233] and Alon and Spencer [7] for enhancements to this technique).
