
(1)Math. Program., Ser. A (2014) 146:525–554 DOI 10.1007/s10107-013-0703-7 FULL LENGTH PAPER. New approaches to multi-objective optimization Fabrizio Grandoni · R. Ravi · Mohit Singh · Rico Zenklusen. Received: 30 March 2012 / Accepted: 20 July 2013 / Published online: 8 August 2013 © Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society 2013. Abstract A natural way to deal with multiple, partially conflicting objectives is turning all the objectives but one into budget constraints. Many classical optimization problems, such as maximum spanning tree and forest, shortest path, maximum weight (perfect) matching, maximum weight independent set (basis) in a matroid or in the intersection of two matroids, become NP-hard even with one budget constraint. Still, for most of these problems efficient deterministic and randomized approximation schemes are known. Not much is known however about the case of two or more budgets: filling this gap, at least partially, is the main goal of this paper. In more detail, we obtain the following main results: Using iterative rounding for the first time in multi-objective optimization, we obtain multi-criteria PTASs (which slightly violate the budget constraints) for spanning tree, matroid basis, and bipartite matching with k = O(1) budget constraints. We present a simple mechanism to transform multicriteria approximation schemes into pure approximation schemes for problems whose. A preliminary version of many results in this paper appeared in ESA’09 [17] and ESA’10 [18]. F. Grandoni IDSIA, University of Italian Switzerland, Manno, Switzerland e-mail: fabrizio@idsia.ch R. Ravi Tepper School of Business, Carnegie Mellon University, Pittsburgh, USA e-mail: ravi@cmu.edu M. Singh School of Computer Science, McGill University, Montreal, Canada e-mail: mohit@cs.mcgill.edu R. Zenklusen (B) Institute for Operations Research (IFOR), Department of Mathematics, ETH, Zurich, Switzerland e-mail: rico.zenklusen@ifor.math.ethz.ch. 123.

feasible solutions define an independence system. This gives improved algorithms for several problems. In particular, this mechanism can be applied to the above bipartite matching algorithm, hence obtaining a pure PTAS. We show that points in low-dimensional faces of any matroid polytope are almost integral, an interesting result on its own. This gives a deterministic approximation scheme for k-budgeted matroid independent set. We present a deterministic approximation scheme for k-budgeted matching (in general graphs), where k = O(1). Interestingly, to show that our procedure works, we rely on a non-constructive result by Stromquist and Woodall, which is based on the Ham Sandwich Theorem.

Keywords: Multi-objective optimization · Multi-budgeted optimization · Approximation algorithms · Combinatorial optimization

Mathematics Subject Classification: 90C29 · 90C27

1 Introduction

In many applications, one has to compromise between several, partially conflicting goals. Multi-Objective Optimization is a broad area of study in Operations Research, Economics and Computer Science (see [17,36] and references therein). A variety of approaches have been employed to formulate such problems. Here we adopt the Multi-Budgeted Optimization approach [36]: we cast one of the goals as the objective function, and the others as budget constraints. More precisely, we are given a (finite) set F of solutions for the problem, where each solution is a subset S of elements from a given universe E (e.g., the edges of a graph). We are also given a weight function w : E → Q+ and a set of k = O(1) length functions ℓ^i : E → Q+, 1 ≤ i ≤ k, that assign a weight w(S) := Σ_{e∈S} w(e) and an ith-length ℓ^i(S) := Σ_{e∈S} ℓ^i(e), 1 ≤ i ≤ k, to every candidate solution S. (The assumption that k is a constant is crucial in this paper, since many of the presented algorithms have a running time that is exponential in k, but polynomial for constant k.) For each length function ℓ^i, there is a budget L_i ∈ Q+. The k-budgeted optimization problem can then be formulated as follows:

  minimize/maximize w(S)  subject to  S ∈ F,  ℓ^i(S) ≤ L_i for 1 ≤ i ≤ k.

We use OPT to denote an optimum solution. A multi-criteria (α_0, α_1, . . . , α_k)-approximation algorithm, α_i ≥ 1, is a polynomial-time algorithm which produces an α_0-approximate solution S such that ℓ^i(S) ≤ α_i L_i for all 1 ≤ i ≤ k. In particular, w(S) ≥ w(OPT)/α_0 for a maximization problem, and w(S) ≤ α_0 w(OPT) for a minimization one. In a polynomial-time approximation scheme (PTAS), α_0 = 1 + ε for any given constant ε > 0, and all the other α_i's are 1. In a multi-criteria PTAS, all the α_i's are at most 1 + ε. Hence, a multi-criteria PTAS might return slightly infeasible solutions. We sometimes call a standard PTAS pure, in order to stress its difference from a multi-criteria PTAS. Following the literature on the topic, we will focus on the set of problems below:

(3) New approaches to multi-objective optimization. 527. • k-budgeted (perfect) matching: F is given by the (perfect) matchings of an undirected graph G = (V, E). • k-budgeted spanning tree (forest): F is given by the spanning trees (forests) of G. • k-budgeted shortest path: F is given by the paths connecting two given nodes s and t in G. • k-budgeted matroid independent set (basis): F is given by the independent sets (bases) of a matroid M = (E, I).2 • k-budgeted matroid intersection independent set (basis): F is given by the independent sets (bases) in the intersection of two matroids M1 = (E, I1 ) and M2 = (E, I2 ). We will consider the minimization version of k-budgeted shortest path. For all the other problems, the minimization version is either trivial or equivalent to its maximization counterpart. Therefore, we will focus only on the maximization version of those problems. All the above problems are polynomial-time solvable (see, e.g., [23]) in their unbudgeted version (k = 0), but become NP-hard [1,6] even for a single budget constraint (k = 1). For the case of one budget (k = 1), PTASs are known for spanning tree [35] (see also [21]), shortest path [42] (see also [20,27]), and matching [6]. The approach in [35] easily generalizes to the case of matroid basis. A PTAS is also known for matroid intersection independent set [6]. In the case 2 ≤ k = O(1), one can use a very general construction by Papadimitriou and Yannakakis [32]. Their technique is based on the construction of ε-approximate Pareto curves, and it can be applied to all the problems whose exact version admits a pseudo-polynomial-time (PPT) deterministic (resp., Monte-Carlo) algorithm. We recall that the exact version of a given optimization problem asks for a feasible solution of exactly a given target weight. This leads to multi-criteria deterministic (resp., randomized) approximation schemes (with αi = 1 + ε for all i). In particular, one can achieve the mentioned approximation for k-budgeted spanning tree, k-budgeted shortest path, and k-budgeted (perfect) matching. We note that, if one requires feasible solutions, several of the mentioned problems are inapproximable already for two budget constraints (see also Sect. 2). More precisely, the corresponding feasibility problem is NP-complete. In particular, this holds for k-budgeted shortest path, k-budgeted perfect matching and kbudgeted spanning tree (and hence also for k-budgeted matroid basis and k-budgeted matroid intersection basis). Furthermore, for these problems we can exchange the role of the objective function with any one of the budget constraints. We can conclude that in any (polynomial-time) (α0 , α1 , . . . , αk )-approximation algorithm for these problems, at most one αi can be 1. 2 We recall that E is a finite ground set and I ⊆ 2 E is a nonempty family of subsets of E (independent sets) which have to satisfy the following two conditions: (i) I ∈ I, J ⊆ I ⇒ J ∈ I and (ii) I, J ∈ I, |I | > |J | ⇒ ∃z ∈ I \J : J ∪ {z} ∈ I. A basis is a maximal independent set. For all matroids used in this paper we make the usual assumption that independence of a set can be checked in polynomial time. For additional information on matroids, see e.g. [38, Volume B].. 123.
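For concreteness, the general formulation above can be written down directly; the following minimal sketch (with made-up element names, weights, lengths, and budgets, not taken from the paper) evaluates w(S) and checks a candidate solution against the k budget constraints:

    # Toy k-budgeted instance (k = 2): a candidate solution S is a set of elements,
    # its weight is w(S) and it is feasible iff every i-th length respects budget L_i.
    def weight(S, w):
        return sum(w[e] for e in S)

    def is_feasible(S, lengths, budgets):
        return all(sum(l[e] for e in S) <= L for l, L in zip(lengths, budgets))

    w = {'e1': 5, 'e2': 3, 'e3': 4}
    lengths = [{'e1': 2, 'e2': 1, 'e3': 3},   # length function 1
               {'e1': 1, 'e2': 4, 'e3': 1}]   # length function 2
    budgets = [4, 5]                           # L_1, L_2
    S = {'e1', 'e2'}
    print(weight(S, w), is_feasible(S, lengths, budgets))   # 8 True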

(4) 528. F. Grandoni et al.. We also remark that, for all the other problems, the set of solutions F forms an independence system. In other terms, for S ∈ F and S ⊆ S, we have S ∈ F. Our Results. We obtain the following main results: (1) Using the iterative rounding framework, we obtain simple determinisitc (1, 1 + ε, . . . , 1 + ε)-approximation algorithm for k-Budgeted Spanning Tree and k-Budgeted Matroid Basis. This improves on the (1 + ε, 1 + ε, . . . , 1 + ε)approximation algorithms for the same problems in [32], and it is best possible approximation-wise from the above discussion. Furthermore, we obtain a (more involved) deterministic (1 + ε, 1 + ε, . . . , 1 + ε)-approximation algorithm for kBudgeted Bipartite Matching. In contrast, the approach in [32] achieves the same approximation for general graphs, but the algorithm is Monte-Carlo. The algorithm for k-Budgeted Spanning Tree is rather simple; a vertex solution for the natural LP relaxation of the problem is already sparse: it has about k edges more than a spanning tree in its support due to the well-known laminarity of an independent set of tight spanning tree constraints (see, e.g., [14]). We remove all edges corresponding to variables of value zero, relax (remove) all the budget constraints, and solve optimally the residual problem (which is a standard spanning tree problem). A preliminary guessing phase ensures that the k edges not used in the tree do not add much to the approximation bound for any of the budgets. This approach also gives a very simple proof of the earlier result for the case k = 1 [35]. An identical approach works also for the more general k-Budgeted Matroid Basis problem. Our algorithm for k-Budgeted Bipartite Matching is more involved: after an initial preprocessing phase, where the algorithm removes all edges with large weight and large length, there is a decomposition phase. In that phase, we run an iterative relaxation algorithm which uses the optimal solution of the natural LP formulation to obtain a modified LP solution. The iterative algorithm ensures that the support of the modified solution is a collection of h ≤ k vertex disjoint paths. Moreover, each of these paths has small weight and length. In the final combination phase, we combine the solutions on these paths to return one feasible matching. Each path can be decomposed in two matchings. The algorithm picks one matching from each of the paths. While the algorithm is a brute force enumeration over all choices (which are 2h ≤ 2k many), a probabilistic argument is used to show that there exists a choice of a matching from each path which provides a solution with the desired guarantee. Perhaps even more importantly than these specific results, our main contribution here is to demonstrate that the general framework of iterative rounding can be used to obtain approximation algorithms for various multi-objective optimization problems. (2) We present a simple but powerful mechanism to transform a multi-criteria PTAS into a pure PTAS for problems whose feasible solutions define an independence system. Similarly, a multi-criteria polynomial randomized time approximation scheme (PRAS) can be transformed into a pure PRAS. The basic idea is as follows. We show that a good solution exists even if we scale down the budgets by a small factor. This is done by applying a greedy discarding strategy similar to the greedy algorithm for knapsack. 
Applying a multi-criteria PTAS (given as a black box) to the scaled problem gives a feasible solution for the original one, of weight close to the optimal weight.
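The budget-scaling idea can be sketched as follows; the full mechanism and its analysis appear in Sect. 4.1, and multicriteria_scheme below is a placeholder for the black-box multi-criteria scheme, not an actual routine from the paper:

    # Shrink each budget by (1 - delta); the multi-criteria scheme may then exceed
    # the shrunk budgets by a factor (1 + delta), and (1 + delta)(1 - delta) <= 1
    # keeps the returned solution feasible for the original budgets.
    def feasibilize(multicriteria_scheme, instance, budgets, delta):
        scaled = [(1.0 - delta) * L for L in budgets]
        S = multicriteria_scheme(instance, scaled, delta)   # slightly infeasible w.r.t. `scaled`
        return S                                            # feasible w.r.t. the original `budgets`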

(5) New approaches to multi-objective optimization. 529. To the best of our knowledge, this simple result was not observed before. Indeed, it implies improved approximation algorithms for a number of problems. In particular, we can combine our mechanism with the construction in [32]. For example, using the PPT-algorithm for exact forest in [5], one obtains a PTAS for k-budgeted forest. Similarly, the Monte-Carlo PPT-algorithm for exact matching in [30] gives a PRAS for k-budgeted matching. The Monte-Carlo PPT-algorithms for exact matroid intersection independent set in [9], which works in the special case of representable matroids,3 implies a PRAS for the corresponding k-budgeted problem. Of course, one can also exploit multi-criteria approximation schemes obtained with different techniques. For example, exploiting the multi-criteria PTAS for k-budgeted bipartite matching that we present in this paper, one obtains a PTAS for the same problem. Very recently [10], a multi-criteria PRAS for k-budgeted matroid independent set, based on dependent randomized rounding, has been presented. This implies a PRAS for k-budgeted matroid independent set. (3) Based on a different, more direct approach, we obtain a PTAS (rather than a PRAS) for k-budgeted matroid independent set. The main insight here is a structural property of faces of the matroid polytope4 which might be of independent interest. Essentially, we show that points in low-dimensional faces of any matroid polytope are almost integral (i.e., they contain few fractional components). More precisely, if the face has dimension d, then at most 2d components are fractional. A PTAS can then easily be derived as follows. We first guess the most expensive elements in the optimum solution, and reduce the problem consequently. Then we compute an optimal (basic) fractional solution: since the relaxation consists of the matroid polytope with k additional linear constraints, the obtained fractional solution lies on a face of the matroid polytope which is at most k-dimensional. Consequently, it has at most 2k fractional components. By rounding down such fractional components, we obtain a feasible integral solution with the desired approximation guarantee. (4) Finally, we present a PTAS for k-budgeted matching (in arbitrary graphs). Our PTAS works as follows. Let us confuse a matching M with the associated incidence vector x M . We initially compute an optimal fractional matching x ∗ to the natural LP relaxation k+1 of k-budgeted matching, and express it as a convex combination x∗ = j=1 α j x j of k + 1 (or less) matchings x 1 , . . . , x k+1 . Then we exploit a merging procedure which, given two matchings x and x. with a parameter α ∈ [0, 1], computes a matching y which is not longer than z := αx +(1−α)x. with respect to all k lengths, and has comparable weight. This procedure is applied successively: first on the matchings x1 and x2 with parameter α = α1 /(α1 + α2 ), hence getting a matching y . Then, on the two matchings y and x3 with parameter α = (α1 +α2 )/(α1 +α2 +α3 ), and so on. The resulting matching is feasible and almost optimal, when performing a preliminary guessing step before applying the patching procedure.. 3 A matroid M = (E, I) is representable if its ground set E can be mapped in a bijective way to the columns of a matrix over some field, and I ⊆ E is independent in M iff the corresponding columns are linearly independent. 4 For some given matroid M = (E, I), the corresponding matroid polytope P is the convex hull of the I. 
incidence vectors of all independent sets.
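As a toy illustration of footnote 3 (illustrative only, with an arbitrary example matrix), independence in a representable matroid reduces to a rank computation on the corresponding columns:

    # Representable matroid: ground set = columns of a matrix; a set is independent
    # iff its columns are linearly independent (checked here via the matrix rank).
    import numpy as np

    A = np.array([[1., 0., 1.],
                  [0., 1., 1.]])          # columns stand for elements e1, e2, e3

    def independent(cols):
        return np.linalg.matrix_rank(A[:, cols]) == len(cols)

    print(independent([0, 1]), independent([0, 1, 2]))   # True False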

(6) 530. F. Grandoni et al.. An interesting aspect of our procedure is that it relies on a non-constructive theorem of Stromquist and Woodall, which in turn relies on the Ham Sandwich Theorem. The theorem of Stromquist and Woodall implies that some structure exists, which guarantees that the merging procedure we present works. Related work. There are a few general tools for designing approximation algorithms for budgeted problems. One basic approach is combining dynamic programming (which solves the problem for polynomial weights and lengths) with rounding and scaling techniques (to reduce the problem to the case of polynomial quantities). This leads for example to the FPTAS for 1- budgeted shortest path [20,27,42]. Another fundamental technique is the Lagrangian relaxation method. The basic idea is relaxing the budget constraints, and lifting them into the objective function, where they are weighted by Lagrangian multipliers. Solving the relaxed problem, one obtains two or more solutions with optimal Lagrangian weight, which can—if needed—be patched together to get a good solution for the original problem. Demonstrating this method, Goemans and Ravi [35] gave a PTAS for 1- budgeted spanning tree, which also extends to 1- budgeted matroid basis. Inspired by this approach, Correa and Levin [12] presented algorithms for special classes of polynomial-time covering problems with an additional covering constraint. Using the same approach as Goemans and Ravi, with an involved patching step, Berger, Bonifaci, Grandoni, and Schäfer [6] obtained a PTAS for 1- budgeted matching and 1- budgeted matroid intersection independent set. Their approach does not seem to generalize to the case of multiple budget constraints. The techniques above apply to the case of one budget. Not much is known for problems with two or more budgets. However, often multi-criteria approximation schemes are known, which provide a (1+ε)-approximate solution violating the budgets by a factor (1 + ε). First of all, there is a very general technique by Papadimitriou and Yannakakis [32], based on the construction of ε-approximate Pareto curves. Given an optimization problem with multiple objectives, the Pareto curve consists of the set of solutions S such that there is no solution S which is strictly better than S (in a vectorial sense). Papadimitriou and Yannakakis show that, for any constant ε > 0, there always exists a polynomial-size ε-approximate Pareto curve A, i.e., a set of solutions such that every solution in the Pareto curve is within a factor of (1 + ε) from some solution in A on each objective. Furthermore, this approximate curve can be constructed in polynomial time in the size of the input and 1/ε whenever there exists a PPT algorithm for the associated exact problem. This implies multi-criteria FPTASs for k-budgeted spanning tree and k-budgeted shortest path. Furthermore, it implies a multi-criteria FPRAS for k-budgeted (perfect) matching. The latter result exploits the Monte-Carlo PPT algorithm for exact matching in [30]. The iterative rounding technique was introduced by Jain [22] for approximating survivable network design problems. The basic idea in iterative rounding for covering problems is as follows: Consider an optimal (fractional) vertex (or extreme point or basic feasible) solution to a linear programming relaxation to the problem, and show that there is a variable with high fractional value (e.g. at least 0.5) which can be rounded up to an integer without losing too much (e.g. 2) in the approximation. 
The method includes this rounded variable in the integral solution and iterates on a reduced prob-. 123.

(7) New approaches to multi-objective optimization. 531. lem where the integral variables are fixed. This method can be enhanced by adding a relaxation step, where one relaxes a constraint that can be ignored without losing too much in the feasibility. The iterative relaxation method has been very successful for approximating degree-constrained network design problems [24,25,39,43] and directed network design problems [4]. Recently, using an iterative randomized rounding approach, Byrka et al. [8]developed an improved approximation algorithm for the Steiner tree problem which was further developed in [16]. In the context of these methods, our paper shows that iterative rounding is a powerful and flexible tool also for approximating multi-objective optimization problems All mentioned problems are easy in the unbudgeted version. Given an NP-hard unbudgeted problem which admits a ρ-approximation, the parametric search technique in [28] provides a multi-criteria kρ-approximation algorithm violating each budget by a factor kρ for the corresponding problem with k budgets. This only gives a much weaker k-approximation for each objective for the problems considered here. Other techniques lead to logarithmic approximation factors (see, e.g., [7,33,34]). Subsequent work. After the conference versions of this paper, relevant progress has been made on some of the problems that we consider here. In [11], a randomized rounding approach was suggested which leads to a PRAS for k-budgeted matroid intersection. Furthermore, also in [11], a PTAS for k-budgeted matching was obtained. This algorithm is based on the derandomization of a PRAS which is obtained by applying Chernoff bounds to a randomized rounding procedure which iteratively merges pairs of matchings along similar lines as we do here. To obtain sufficient concentration, the symmetric difference of two matchings to merge is cut into (k log k/ 2 ) pieces, and within each of theses pieces the edges of one of the two matchings are kept, which 2 is decided randomly. When derandomizing the procedure, all possible 2(k log k/ ) random outcomes have to be checked in each merge iteration. An advantage of the algorithm for k-budgeted matching that we present here, is that the running time of our algorithm does not depend on , apart of the initial guessing step. This is particularly of interest for instances where the weight and lengths of each edge is sufficiently small such that the guessing step can be simplified or even skipped. Organization. The rest of this paper is organized as follows. In Sect. 2 we discuss the approximability of part of the mentioned problems. In Sect. 3 we present our multicriteria approximation schemes for k-Budgeted Spanning Tree, k-Budgeted Matroid Basis and k-Budgeted Bipartite Matching. Section 4 contains our pure approximation schemes. In particular, we describe our feasibilization mechanism, give a PTAS for k-Budgeted Matroid Independent Set, and a PTAS for k-Budgeted Matching.. 2 A simple hardness result As a warm-up for the reader, we start by observing a few simple facts about the complexity of the mentioned problems. The following simple theorem might be considered as part of folklore.. 123.

(8) 532. F. Grandoni et al.. Theorem 2.1 For k ≥ 2, it is N P-complete to decide whether there is a feasible solution for k-budgeted shortest path, k-budgeted perfect matching and k-budgeted spanning tree (and hence also for k-budgeted matroid basis and k-budgeted matroid intersection basis). Proof It is sufficient to prove the claim for k = 2. Consider first 2- budgeted spanning tree: the claim for k-budgeted matroid basis and, consequently, for kbudgeted matroid intersection basis trivially follows. Let P + denote our (feasibility) problem, and P ± its variant with arbitrary (i.e., positive and/or negative) lengths. Of course, P ± includes P + as a special case. To see the opposite reduction, observe that a spanning tree contains exactly n − 1 edges. Hence, by adding a sufficiently large value M to all the lengths, and adding (n − 1)M to the budgets, one obtains an equivalent problem with non-negative lengths. It is easy to see that P ± includes as a special case the problem P = of determining, for a given length function  (·) and target L , whether there exists a spanning tree T of length  (S) = L : a reduction is obtained by setting 1 (·) = −2 (·) =  (·) and L1 = −L2 = L . Hence it is sufficient to show that P = is NP-complete. We do that via the following reduction from partition: given α1 , α2 , . . . , αq ∈ Q and a target A ∈ Q, determine whether there exists a subset of αi ’s of total value A. Consider graph G q , consisting of q cycles C1 , C2 , . . . , Cq , with Ci = (ai , bi , ci , di ) and ci = ai+1 for i = 1, 2, . . . , q − 1. Let  (ai bi ) = αi , i = 1, 2, . . . , k, and set to zero all the other lengths. The target is L = A. Trivially, for each spanning tree T and each cycle Ci , the length of T ∩ Ci is either 0 or αi . Hence, the answer to the input partition problem is yes if and only if the same holds for the associated instance of P = . Consider now 2- budgeted perfect matching. Since each perfect matching contains exactly n/2 edges, with the same argument and notation as above it is sufficient to prove the N P-completeness of the problem P = of determining, for a given length function  (·) and target L , whether there exists a perfect matching M of length  (M) = L . We use a similar reduction from partition as above. The graph is again given by the cycles C1 , . . . , Cq . However, this time each cycle forms a distinct connected component. We use the same lengths  as above and we again set L = A. It is easy to see that, for each perfect matching M and each cycle Ci , the length of M ∩ Ci is either 0 or αi . The claim follows. Of course, an even simpler reduction is obtained when working with multigraphs, where the used graph can be reduced to distinct connected components each consisting of two parallel edges. Eventually consider 2- budgeted shortest path. We restrict our attention to the graph G q as used for the spanning tree reduction, and let (s, t) = (a1 , cq ). Since any s-t path in this graph uses exactly 2q edges, we have by the usual argument that it is sufficient to show the N P-completeness of the problem P = of determining, for a given length function  (·) and target L , whether there exists an s-t path P of length  (P) = L . The claim follows by essentially the same reduction as in the spanning tree case. . Corollary 2.2 Unless P = NP, there is no (α0 , α1 , . . . , αk )-approximation algorithm with two or more αi ’s equal to 1 for the problems in the claim of Theorem 2.1.. 123.
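The spanning-tree part of the reduction in the proof of Theorem 2.1 can be instantiated as in the following sketch; the identifiers and the sample numbers are made up for illustration:

    # Reduction from PARTITION: for each alpha_i build a 4-cycle (a_i, b_i, c_i, d_i)
    # with c_i identified with a_{i+1}; only edge a_i b_i has nonzero length alpha_i.
    def build_reduction_instance(alphas, target):
        edges, length = [], {}
        for i, alpha in enumerate(alphas):
            a, b, c, d = f"a{i}", f"b{i}", f"a{i+1}", f"d{i}"
            cycle = [(a, b), (b, c), (c, d), (d, a)]
            edges += cycle
            for e in cycle:
                length[e] = 0
            length[(a, b)] = alpha     # a spanning tree keeps a_i b_i or drops it
        return edges, length, target  # trees of total length == target <=> PARTITION solutions

    edges, length, L = build_reduction_instance([3, 5, 2], target=7)
    print(len(edges), sum(length.values()), L)   # 12 10 7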

(9) New approaches to multi-objective optimization. 533. Proof Observe that one can exchange the roles of the objective function with any one of the budget constraints for the mentioned problems. The claim follows from Theorem 2.1. . 3 Multi-criteria approximation schemes In this section we present our multi-criteria approximation schemes, which slightly violate budget constraints. All these algorithms are based on iterative randomized rounding. We start with the matroid basis case and conclude the section with the much more involved algorithm for bipartite matching. 3.1 k-Budgeted matroid basis Consider the following linear programming relaxation (LP-MB) for the problem. There is a variable xe for each element e ∈ E. For any subset S ⊆ E, we denote x(S) =  e∈S x e . Here r denotes the rank function of the matroid M. (LP-MB). maximize. . w(e) xe. e∈E. subject to x(E) = r (E), x(S) ≤ r (S),  i (e)xe ≤ Li ,. ∀S ⊆ E ∀1 ≤ i ≤ k. e∈E. xe ≥ 0,. ∀ e ∈ E.. The polynomial time solvability of the linear program (LP-MB) follows from the polynomial time separation of the rank constraints [13]. The following characterization follows from a standard uncrossing argument. A proof is presented for completeness. We recall that a chain is a family F of sets such that for any F1 , F2 ∈ F, we have either F1 ⊆ F2 or F2 ⊆ F1 . Furthermore, for any S ⊆ E, we denote by χ (S) ∈ {0, 1} E the incidence vector of S, i.e., (χ (S))e = 1 for e ∈ S and (χ (S))e = 0 for e ∈ E\S. Lemma 3.1 Let x be a vertex solution of the linear program (LP-MB) such that xe > 0 for each e ∈ E and let T = {S ⊆ E | x(S) = r (S)} be the set of all tight subset constraints. Then there exists a chain C ⊆ T and a subset J ⊆ {1 ≤ j ≤ k |  i e∈E  (e)x e = Li } of tight length constraints such that 1. The vectors {χ (S) | S ∈ C} ∪ {i | i ∈ J } are linearly independent, 2. span({χ (S) | S ∈ C}) = span({χ (S) | S ∈ T }), 3. |C| + |J | = |E|. Proof Since (LP-MB) is a polytope with |E| variables, any vertex of (LP-MB) can be described as the intersection of |E| constraints of (LP-MB) that are linearly independent and tight with respect to x. Since xe > 0 for e ∈ E, the only constraints of (LPMB) that can be tight with respect to x are rank constraints, i.e., x(S) = r (S) for some. 123.

(10) 534. F. Grandoni et al.. Fig. 1 Algorithm for k-Budgeted Matroid Basis. set S ⊆ E, and length constraints. In general, there may be many choices of |E| linearly independent and tight constraints that define x. Actually, among all tight constraints with respect to x one can choose any maximal subset of linearly independent constraints to define x. We first choose a maximal number of linearly independent and tight rank constraints, which can be represented by a family of tight sets C ⊆ T . Notice that by choosing a maximal family we have span({χ (S) | S ∈ C}) = span({χ (S) | S ∈ T }). The chosen tight rank constraints define a face of the matroid polytope. By [38, p. 778], any face of the matroid polytope can be defined by a family of tight rank constraints corresponding to sets C that form a chain (together with possibly tight nonnegativity constraints, which does not apply to our case since x > 0). Hence, we can choose C to be a chain. We complete the chosen rank constraints with a maximal number of tight length constraints that are linearly independent among each other and with respect to the chosen rank constraints. We represent those  constraints by the indices of the chosen length constraints J ⊆ {1 ≤ j ≤ k | e∈E i (e)xe = L i }. Hence, {χ (S) | S ∈ F} ∪ {i | i ∈ J } are linearly independent vectors by constructions. Since they uniquely define x, we have |C| + |J | = |E|, thus completing the proof.  Consider the algorithm for k-Budgeted Matroid Basis in Fig. 1. We first perform a pruning step to guess all elements in the optimal solution with ith-length at least Lk i for any 1 ≤ i ≤ k. Then we solve the linear program (LP-MB) for the residual problem and remove all elements which the linear program sets to zero. We then select the maximum weight basis under weight function w ignoring the rest of the length functions. Observe that the last step is equivalent to relaxing all the k length constraints and solving the integral linear program for the matroid basis problem. Theorem 3.2 For any  > 0, there exists an algorithm for k-Budgeted Matroid Basis, k = O(1), which returns a basis B with i (B) ≤ (1 + )Li for each 1 ≤ i ≤ k, and w(B) ≥ w(O P T ), where O P T is a maximum-weight basis that satisfies all 2 length constraints. The running time of the algorithm is O(m O(k /) ). Proof Consider the algorithm described in Fig. 1, whose running time trivially satisfies the claim. First observe that the support of a vertex solution to (LP-MB) on a matroid with r (E) = n has at most n + k elements. In fact, from Lemma 3.1, we have |E| = |C| + |J |. But |C| ≤ r (E) since C is a chain and x(C) equals a distinct integer between 1 and r (E) for each C ∈ C. Also |J | ≤ k proving the claim. Let L i be the ith budget of the residual problem solved in step 2 of the algorithm. Observe that the weight of the basis returned is at least the weight of the LP-solution and hence is at. 123.

(11) New approaches to multi-objective optimization. 535. least w(O P T ). Now, we show that the ith-length is at most Li + Li . Observe that any basis must contain r (E) elements out of the r (E) + k elements in the support. Hence, the longest ith-length basis differs from the minimum ith-length basis by at most k · k Li = Li . But the minimum ith-length basis has ith-length at most the length . of the fractional basis which is at most Li . The claim follows. 3.2 k-Budgeted bipartite matching In this section we present a multi-criteria PTAS for k-Budgeted Bipartite Matching. We formulate the following linear programming relaxation (LP-BM) for the problem. We use δ(v) to denote the set of edges incident to v ∈ V . (LP-BM). maximize. . w(e) xe. e∈E. subject to. . xe ≤ 1,. ∀v ∈ V. e∈δ(v). . i (e)xe ≤ Li ,. ∀1 ≤ i ≤ k. e∈E. xe ≥ 0,. ∀ e ∈ E.. Consider the algorithm for k-Budgeted Bipartite Matching in Fig. 2. Our algorithm works in three phases. In the Preprocessing Phase, the algorithm guesses all the edges in O P T of weight at least δ w(O P T ) or ith-length at least δLi for some i. Here δ is a proper function of  and k. This guessing can be performed in time polynomial in n (but exponential in δ). The algorithm then includes all the guessed edges in the solution, and deletes the remaining heavy edges and all edges incident to vertices which have already been matched by guessed edges. It also reduces the Li ’s accordingly. After this phase w(e) ≤ δ w(O P T ) and i (e) ≤ δLi for each edge e. In the Decomposition Phase our algorithm computes over a series of pruning and iterative steps, a solution to the k-budgeted matching problem on a reduced graph that is eventually a collection of paths. In Step (c), we discard nodes of degree 0 or of degree 3 or higher so as to leave only paths and cycles; Finally, one edge from each cycle is removed in this step. In Step (e), we further break each path into subpaths of bounded total weight and length. This pruning is useful in the later Combination Phase when we choose one of the two matchings in each path: the bounded difference ensures that one such combination is near optimal. The use of vertex solutions in all the residual problems ensures that the total number of edges thrown away in all the above stages is roughly of the order of the √ extra budget constraints in the problem which is O(k/γ ) for a parameter γ  O(/ k). Finally, we output a feasible fractional vertex solution x g to the LP with the following properties: (1) The support of x g is a collection of vertex disjoint paths S1 , . . . , Sh where h ≤ k. (2) x g is a (1 + /4)-approximate solution.. 123.

(12) 536. F. Grandoni et al.. Fig. 2 Algorithm for k-Budgeted Bipartite Matching. (3) For each Si , the degree constraints of the vertices of Si are tight except for its endpoints. i g (4) For each Si , w · x g (Si ) ≤ γ w(O  √P T ) and  · x (S j ) ≤ γ Li for each 1 ≤ i ≤ k and 1 ≤ j ≤ h where γ = / 2 2k ln(k + 2) . In the final Combination Phase, the paths S1 , . . . , Sh are used to compute an approximate feasible (integral) solution. The algorithm enumerates over all the 2h matchings which are obtained by taking, for each Si , one of the two matchings which partition Si . This enumeration takes polynomial time since h ≤ k = O(1). A probabilistic argument is used to show that one of these matchings satisfies the claimed approximation guarantee of the algorithm. Analysis. We now analyze the three phases of the algorithm, bounding the corresponding approximation guarantee and running time. Consider first the Preprocessing. 123.
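The Combination Phase can be summarized by the following sketch; the inputs (the per-path matching pairs, weight and length tables, budgets, and slack) are placeholders, and the paper's algorithm additionally adds back the edges guessed in the Preprocessing Phase:

    # Enumerate the 2^h ways of picking one of the two matchings per path S_j and
    # keep the heaviest pick whose lengths stay within (1 + slack) of each budget.
    from itertools import product

    def combine(path_matchings, w, lengths, budgets, slack):
        best, best_w = None, float("-inf")
        for choice in product((0, 1), repeat=len(path_matchings)):
            M = [e for c, pair in zip(choice, path_matchings) for e in pair[c]]
            if all(sum(l[e] for e in M) <= (1 + slack) * L
                   for l, L in zip(lengths, budgets)):
                wM = sum(w[e] for e in M)
                if wM > best_w:
                    best, best_w = M, wM
        return best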

(13) New approaches to multi-objective optimization. 537. Phase. In order to implement Step (a), we have to consider all the possible choices, and run the algorithm for each choice. Observe that there are at most (k + 1)/δ such heavy edges in the √ optimal solution, and hence the number of possibilities is 2 2 O(m (k+1)/δ ) = O(m O(k k log k/ ) ). The algorithm generates a different subproblem for each possible guess of the edges. In the following we will focus on the run of the algorithm where the guessed edges correspond to an optimal solution. Consider now the Decomposition Phase. We prove that the output of this phase satisfies the four properties stated above. Observe that by construction the algorithm returns a collection of edge disjoint paths whose interior vertices have tight degree constraints. Properties (3) and (4) follow by construction. We now argue that the number of paths is bounded by k, proving Property (1). Lemma 3.3 The number h of subpaths in Step (g) is upper bounded by k. q Proof Consider the solution x f . The number of variables |E| = i=1 |Pi | is upper bounded by the number of tight constraints. Let q be the number of internal nodes whose matching constraint is not tight in x f . Note that the matching constraints at the endpoints of each path are not tight. Hence the number of tight constraints is at q most i=1 (|Pi | − 1) − q + k = |E| − q − q + k ≥ |E|, from which q + q ≤ k. Observe that, by definition, the number h of subpaths is exactly q + q (we start with q subpaths, and create a new subpath for each internal node whose matching constraint is not tight). The claim follows. . Clearly, solution x g satisfies all the constraints. We next argue that the weight of is nearly optimal. In Steps (c), (e) and (g) we remove a subset of edges whose optimal fractional value is larger than zero in the step considered. In the following lemma we bound the number of edges removed. Due to the Preprocessing Phase, the weight of these edges is negligible, which implies that the consequent worsening of the approximation factor is sufficiently small. This proves Property (2). xg. Lemma 3.4 The algorithm removes at most 1. 7k edges in Step (c); 2. (k + 1)/γ edges in Step (e); 3. 2k edges in Step (g). Proof (1) In the beginning of Step (c), all variables are strictly fractional. Thus, every vertex in V1 , the set of vertices with tight degree constraints, has degree at least two. Let E be the residual edges. Note that |E| ≤ |V1 | + k since the number of tight constraints is at most |V1 | + k. Let H be the set of nodes of degree at least 3. Observe that   deg(v) ≥ 2 2(|V1 | + k) ≥ 2|E| ≥ +.  v∈H. v∈V1. deg(v) ⇒. v∈V1 \H. . deg(v) ≤ 2|H | + 2k.. v∈H. Since deg(v) ≥ 3 for each v ∈ H we have |H | ≤ 2k. Thus. . v∈H. deg(v) ≤ 6k.. 123.

(14) 538. F. Grandoni et al.. After removing nodes of degree 0 and at least 3, the graph consists of a set of paths and cycles. Let C1 , C2 , . . . , Cq be the set of cycles, and P1 , P2 , . . . , Pr be the set of paths. We next show that q ≤ k with an analogous counting argument, and hence at most k more edges are removed. Since G is bipartite, each cycle Ci must be even and therefore the corresponding matching constraints must be dependent. Moreover, the endpoints of each path cannot correspond to tight matching Thus the total number of edges over all such even cycles is qconstraints. |E| = i=1 |Ci | + rj=1 |P j | while the number of tight and independent degree  q constraints and budget constraints is at most i=1 (|Ci | − 1) + rj=1 (|P j | − 1) + k = |E| − q − r + k. Since the number of variables is at most the number of tight and independent constraints at any vertex solution, we obtain that the number of cycles and paths altogether is at most k. (2) Each minimal subpath P considered in Step (e) satisfies either w(P ) > γ w(x d ) or i (P ) > γ i (x d ) for some i. Since the edges of P are not considered any more in the following iterations of Step (e), the condition w(P ) > γ w(x d ) can be satisfied at most 1/γ times. Similarly for the condition i (P ) > γ i (x d ). It follows that the number of minimal subpaths, and hence the number of edges remove, is upper bounded by (k + 1)/γ . (3) Let V be the set of internal vertices for which the degree constraints are not tight in the paths P1 , . . . , Pq and let V1 be the set of vertices with tight degree constraints. Thus, we have deg(v) ≥ 2 for each v ∈ V1 ∪ V . But the total number of edges is at most |V1 | + k. Thus we have |V | ≤ k. We remove exactly two edges for each vertex in V obtaining the claimed bound. . Each of the steps (b) to (g) is run polynomially many times and takes polynomial time. Hence the overall running time of the Decomposition Phase is polynomial. Consider eventually the Combination Phase. As described earlier, the running time of this phase is bounded by O(2k n O(1) ). The following lemma, which is the heart of our analysis, shows that a subset M satisfying Properties (i) and (ii) defined in the Combination Phase (h) always exists. Henceforth the algorithm always returns a solution. Although we use a randomized argument to prove the lemma, the algorithm is completely deterministic and enumerates over all solutions. Recall that M j and M¯ j are the two matchings which partition subpath S j . Lemma 3.5 In Step (h) there is always a set of edges M satisfying Properties (i) and (ii). Proof Consider the following packing problem (PACK ) maximize. h  (y j w(M j ) + (1 − y j ) w( M¯ j )) j=1. subject to. h  (y j i (M j ) + (1 − y j ) i ( M¯ j )) ≤ Li ,. ∀1 ≤ i ≤ k. j=1. y j ∈ {0, 1},. 123. ∀ 1 ≤ j ≤ h..

(15) New approaches to multi-objective optimization. 539. We can interpret the variables y j in the following way: M ∩ S j = M j if y j = 1, and M ∩ S j = M¯ j otherwise. Given a (possibly fractional and infeasible) solution y  to PACK, we use w(y) and i (y) as shortcuts for hj=1 (y j w(M j ) + (1 − y j ) w( M¯ j )) h and j=1 (y j i (M j ) + (1 − y j ) i ( M¯ j )), respectively. We first show that the solution x g can be interpreted as a feasible solution y g to the linear relaxation of PACK as follows. Consider each subpath S j . By definition, each matching constraint at an internal node of S j is tight. This implies that all the edges e g g of M j (resp., M¯ j ) have the same value xe =: y g (resp., xe =: 1 − y g ). Thus, we have g g w(y ) = w(x ). Now, we construct an integral solution y in the following manner. Independently, g g for each path Si , select Mi with probability yi and M¯ i with probability 1 − yi . Note. g i. i g that E[w(y )] = w(y ) and E[ (y )] =  (y ) ≤ Li for all i. In order to prove the claim, it is sufficient to show that, with positive probability, one has w(y ) ≥ (1 − /2)w(x g ). and. l i (y ) ≤ (1 + /2)l i (x g ) for all i.. This implies that a matching satisfying (i) and (ii) always exists, and hence the algorithm will find it. By Step (e), switching one variable of y from 1 to 0 or vice versa can change the cost and ith-length of y at most by γ w(x g ) and γ i (x g ), respectively. Using the method of bounded differences (see, e.g., [29]): Pr (w(y ) < E[w(y )] − t) ≤ e. −. t2 2h(γ w(x g ))2. Pr (i (y ) > E[i (y )] + t) ≤ e. and. t2 − 2h(γ i (x g ))2. .. Recalling√ that E[w(y )] ≥ w(x g ), h ≤ k, and setting t = /2 · w(x g ) = γ w(x g ) 2k ln(k + 2),  Pr (w(y ) < w(x g )−/2 · w(x g )) ≤ Pr (w(y ) < E[w(y )] − γ w(x g ) 2k ln(k + 2)) ≤ e−(γ w(x. g ))2 2k ln(k+2)/2h(γ w(x g ))2. ≤ e− ln(k+2) =. 1 . k+2. Similarly, for all i, Pr (i (y ) > i (x g ) + /2 · i (x g )) ≤. 1 k+2 .. From the union bound, the probability that y does not satisfy Property (ii) is therefore k+1 at most k+2 < 1. The claim follows. . Theorem 3.6. For any  > 0, there exists a deterministic algorithm for k-Budgeted Bipartite Matching, k = O(1), which returns a matching M of weight w(M) ≥ (1 − )w(O P T ) and length√i (M) ≤ (1 + )Li for each 1 ≤ i ≤ k. The running time 2 2 of the algorithm is O(n O(k k log k/ ) ).. 123.

(16) 540. F. Grandoni et al.. Proof Consider the above algorithm, whose running time is trivially as in the claim. It is easy to see that the solution returned is a matching. Moreover a solution is always returned by Lemma 3.5. The approximation guarantee of the algorithm follows from the properties of the Decomposition step and Lemma 3.5. . 4 Pure approximation schemes for independence systems In this section we present our pure approximation schemes (which do not violate any budget constraint) when the solution space F is an independence system, i.e. if F ∈ F and F ⊆ F then F ∈ F. We start by describing our feasibilization mechanism to turn multi-criteria approximation schemes into pure approximation schemes. We then present our deterministic approximation scheme for matroid independent set. We conclude the section with a deterministic approximation scheme for matchings (in general graphs) with k budget constraints, where k = (1) as usual. 4.1 A feasibilization mechanism Since we deal with independence systems, minimization problems are trivial (the empty solution is optimal). Therefore, we will consider maximization problems only. Analogous to terminology used in matroid theory, for an independence system F on some ground set E and any I ∈ F, we call the independence system {S ∈ E\I | S ∪ I ∈ F} on ground set E\I a contraction of F. Similarly, for any I ⊆ E, the independence system {S ∈ E\I | S ∈ F} is called a restriction of F. Combination of contractions and restrictions are called minors. We say that a family F of independence systems is self-reducible if it is closed under taking minors. Self-reducibility is a natural property for independence system, examples include feasible solutions to knapsack problems, graphic matroids, linear matroids, matchings, and bipartite matchings. Theorem 4.1 (Feasibilization) Let F be a self-reducible family of independence systems. Suppose that we are given an algorithm A which, for any constant δ > 0 and k-budgeted optimization problem Pind on an independence system F ∈ F , computes in polynomial time a solution S ∈ F to the k-budgeted maximization problem on F of cost (resp., expected cost) at least (1 − δ) times the optimum in F, violating each budget by a factor of at most (1 + δ). Then there is a PTAS (resp., PRAS) for Pind .5 Proof Let ε ∈ (0, 1] be a given constant, with 1/ε ∈ N. Consider the following algorithm. Initially we guess the h = k/ε elements E H of O P T of largest weight, and reduce the problem consequently, hence getting a problem P . Then we scale down all the budgets by a factor (1 − δ), and solve the resulting problem P. by means of A, where δ = ε/(k + 1). Let E L be the solution returned by A. We finally output EH ∪ EL . 5 Notice that it suffices to assume that F is closed under contractions, since a restriction can be emulated. by setting the weights of the elements to be removed to zero.. 123.
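A compact sketch of the algorithm just described, assuming one fixed guess E_H (the paper enumerates all guesses) and treating the scheme A, the contraction of the instance, and the helper names as placeholders:

    # Given a guessed set E_H of the h = k/eps heaviest elements of OPT: reduce the
    # instance, shrink the residual budgets by (1 - delta) with delta = eps/(k + 1),
    # run the multi-criteria scheme A as a black box, and return E_H with its output.
    def feasibilize_one_guess(A, instance, E_H, lengths, budgets, eps, k, contract):
        delta = eps / (k + 1.0)
        reduced = contract(instance, E_H)                              # problem P' of the proof
        residual = [L - sum(l[e] for e in E_H) for l, L in zip(lengths, budgets)]
        scaled = [(1.0 - delta) * L for L in residual]                 # problem P'' of the proof
        E_L = A(reduced, scaled, delta)                                # (1 + delta)-violation allowed
        return set(E_H) | set(E_L)                                     # feasible for the original budgets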

(17) New approaches to multi-objective optimization. 541. Let OPT and OPT. be the optimum solution to problems P and P. , respectively. We also denote by Li and Li. the ith budget in the two problems, respectively. Let wmax be the largest weight in P and P. . We observe that trivially: (a) w(O P T ) = w(E H ) + w(O P T ) and (b) wmax ≤ w(E H )/ h. Let us show that (c) w(O P T. ) ≥ w(O P T )(1 − kδ) − kwmax . Consider the following process: for each length function i, we remove from O P T the element e with smallest ratio w(e)/i (e) until the remaining elements of OPT’ have ith length ≤ (1 − δ)Li . Let E i be the set of elements removed. More formally, we number the elements of O P T = {e1 , e2 , . . . , eq } such that w(e1 )/i (e1 ) ≤ w(e2 )/i (e2 ) ≤ · · · ≤ w(eq )/i (eq ). Hence, E i = {e1 , . . . , er }, where r ∈ {0, . . . , q} is the smallest index such that i ({er +1 , . . . , eq }) ≤ (1−δ)L i . We now show w(E i ) ≤ δw(O P T )+ wmax . This is trivially true if E i = ∅, hence, we assume without loss of generality r ≥ 1. Since i (O P T ) ≤ L i and r is the smallest index with i ({er +1 , . . . , eq }) ≤ (1 − δ)L i , or equivalently i ({e1 , . . . , er }) ≥ i (O P T ) − (1 − δ)L i , we get i ({e1 , . . . , er −1 }) < i (O P T ) − (1 − δ)L i ≤ δi (O P T ). Furthermore, notice that for any four reals A, B, a, b > 0 with a+A a b+B ≥ b . Applying this inequality repeatedly, we obtain. A B. ≥. (1) a b,. we have. w({e1 , e2 }) w(O P T ) w(e1 ) ≤ i ≤ ··· ≤ i , i  (e1 )  ({e1 , e2 })  (O P T ) and in particular w({e1 , . . . , er −1 }) w(O P T ) ≤ . i ({e1 , . . . , er −1 }) i (O P T ). (2). Hence, w(E i ) = w(er ) + w({e1 , . . . , er −1 }) (2). ≤ wmax + i ({e1 , . . . , er −1 }) ·. w(O P T ) i (O P T ). (1). ≤ wmax + δw(O P T ),. as claimed. It follows that O P T − ∪i E i is a feasible solution for P. of weight at least w(O P T )(1 − δk) − kwmax , proving (c). We observe that E L is feasible for P since, for each i, i (E L ) ≤ (1 + δ)Li. = (1 + δ)(1 − δ)Li ≤ Li . As a consequence, the returned solution E H ∪ E L is feasible. Moreover, when A is deterministic, we have. 123.

(18) 542. F. Grandoni et al.. w(E H ) + w(E L ) ≥ w(E H ) + (1 − δ)w(OPT. ) (c). ≥ w(E H ) + (1 − δ)(w(OPT )(1 − δk) − kwmax ). (b). ≥ (1 − k/ h)w(E H ) + (1 − δ(k + 1))w(OPT ) (a). ≥ (1 − ε)(w(E H ) + w(O P T )) = (1 − ε)w(OPT ). The same bound holds in expectation when A is randomized.. . Corollary 4.2 There are PTASs for k-budgeted forest and k-budgeted bipartite matching. There are PRASs for k-budgeted matching, k-budgeted matroid independent set, and k-budgeted matroid intersection in representable matroids. Proof The result about bipartite matching follows from the multi-criteria PTAS in previous section. All the other results follow from known multi-criteria PTASs and PRASs [5,9,10,30,32]. . 4.2 A PTAS for k-budgeted matroid independent set Again, we denote by r (S) = max{|J | | J ⊆ S, J ∈ I} the rank function of a matroid M = (E, I). Furthermore, PI = {x ≥ 0 | x(S) ≤ r (S) ∀S ⊆ E} denotes the matroid polytope which is the convex hull of the characteristic vectors χ I of the independent sets I ∈ I. Theorem 4.3 Let M = (E, I) be a matroid and let F be a face of dimension d of the matroid polytope PI . Then any x ∈ F has at most 2d non-integral components. Furthermore, the sum of all fractional components of x is at most d. Proof Let m = |E|. We assume that the matroid polytope has full dimension, i.e., dim(PI ) = m, or equivalently, every element e ∈ E is independent. This can be assumed w.l.o.g. since if {e} ∈ I for some e ∈ E, then we can reduce the matroid by deleting element e. By [38, p. 778], any d-dimensional face F of a polymatroid, which is a generalization of a matroid polytope, can be described as follows F = {x ∈ PI | x(e) = 0 ∀e ∈ N , x(Ai ) = r (Ai ) ∀i ∈ {1, . . . , k}}, where A1  A2  · · ·  Ak ⊆ E, and N ⊆ E with |N | + k = m − d. We prove the claim by induction on the number of elements of the matroid. The theorem clearly holds for matroids with a ground set of cardinality one. First assume N = ∅ and let e ∈ N . Let M be the matroid obtained from M by deleting e, and let F. be the projection of F onto the coordinates corresponding to N \{e}. Since F is a face of M , the claim follows by induction. Henceforth, we assume N = ∅ which implies k = m − d. Let A0 = ∅ and Bi = Ai \Ai−1 for i ∈ {1, . . . , k}. In the following we show that we can assume 0 < r (Ai ) − r (Ai−1 ) < |Bi | ∀ i ∈ {1, . . . , k}.. 123. (3).

(19) New approaches to multi-objective optimization. 543. Notice that 0 ≤ r (Ai ) − r (Ai−1 ) ≤ |Bi | clearly holds by standard properties of rank functions (see [38, p. 664] for more details). Assume that there is i ∈ {1, . . . , k} with r (Ai ) = r (Ai−1 ). Since all points x ∈ F satisfy x(Ai ) = r (Ai ) and x(Ai−1 ) = r (Ai−1 ), we have x(Bi ) = 0. Hence for any e ∈ Bi , we have x(e) = 0 for x ∈ F. Again, we can delete e from the matroid, hence obtaining a smaller matroid for which the claim holds by the inductive hypothesis. Therefore, we can assume r (Ai ) > r (Ai−1 ) which implies the left inequality in (3). For the right inequality assume that there is i ∈ {1, . . . , k} with r (Ai ) − r (Ai−1 ) = |Bi |. Hence, every x ∈ F satisfies x(Bi ) = |Bi |, implying x(e) = 1 for all e ∈ Bi . Let e ∈ Bi , and let F be the projection of the face F onto the components N \{e}. Since F is a face of the matroid M obtained from M by contracting e, the result follows again by the inductive hypothesis. Henceforth, we assumethat (3) holds. This implies in particular that |Bi | > 1 k |Bi | ≤ m, we have k ≤ m/2, which together with for i ∈ {1, . . . , k}. Since i=1 k = m − d implies d ≥ m/2. The claim of the theorem that x ∈ F has at most 2d non-integral components is thus trivial in this case. To prove the second part of the theorem we show that if (3) holds then x(E) ≤ d for x ∈ F. For x ∈ F we have x(E) = x(E\ Ak ) +. k . x(Bi ). i=1. ≤ |E| − |Ak | +. k . (r (Ai ) − r (Ai−1 )). i=1. ≤ |E| − |Ak | +. k . (|Ai | − |Ai−1 | − 1). i=1. = m − k = d, where the first inequality follows from x(E\ Ak ) ≤ |E\Ak | and x(Bi ) = r (Ai ) − . r (Ai−1 ), and the second inequality follows from (3). Exploiting Theorem 4.3, we suggest in Fig. 3 a conceptually simple PTAS for multi-budgeted optimization over the independent sets of a matroid.. Fig. 3 A PTAS for k-budgeted matroid independent set. 123.
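The rounding step of the algorithm in Fig. 3 can be sketched as follows (the preliminary guessing of heavy elements and the vertex solution x* of the matroid polytope with the k budget constraints are assumed given; all names are placeholders):

    # Round a vertex x* of the matroid polytope intersected with k budget constraints:
    # keep the components equal to 1 and drop the fractional ones; by Theorem 4.3 a
    # point on a face of dimension <= k has at most 2k fractional components.
    def round_down(E, x_star, k, tol=1e-9):
        kept       = [e for e in E if x_star.get(e, 0.0) > 1.0 - tol]
        fractional = [e for e in E if tol < x_star.get(e, 0.0) <= 1.0 - tol]
        assert len(fractional) <= 2 * k
        return kept   # an independent set, feasible for all budgets, close to the LP value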

(20) 544. F. Grandoni et al.. Corollary 4.4 The algorithm presented in Fig. 3 is a PTAS for k-budgeted matroid independent set. Proof The guessing step guarantees that the maximum weight wmax of an element in the reduced problem satisfies kwmax ≤ εw(E H ). Since x ∗ is chosen to be a vertex solution, and only k linear constraints are added to the matroid polytope, x ∗ lies on a face of the matroid polytope PI of dimension at most k. By Theorem 4.3, the sum of all fractional components of x ∗ is at most k, i.e., x ∗ (E \E L ) ≤ k. Hence, w(E L ) =.  e∈E. ≥. . . w(e)x ∗ (e) −. e∈E \E L. w(e)x ∗ (e) − wmax. e∈E. ≥. . w(e)x ∗ (e) . e∈E \E. x ∗ (e) L. ∗. w(e)x (e) − kwmax .. e∈E. Furthermore, since we solved a relaxation of the original problem, we have w(OPT) ≤ w(E H ) +. . w(e)x ∗ (e).. e∈E. Combining the above inequalities, and using kwmax ≤ w(E H ), we obtain w(E H ∪ E L ) ≥ w(E H ) +. . w(e)x ∗ (e) − kwmax. e∈E. ≥ w(OPT) − kwmax ≥ w(OPT) − δw(E H ) ≥ (1 − δ)w(OPT). . 4.3 A PTAS for k-budgeted matching In this section we present our PTAS for k-budgeted matching. We denote by M the set of incidence vectors of matchings. With a slight abuse of terminology we call the elements in M matchings. Let PM be the matching polytope. To simplify the exposition, it is convenient to consider weights w and lengths i for i ∈ {0, . . . , k} E . We denote by  = (1 , . . . , k ) the matrix whose ith sometimes as vectors in Q+ i column is  , and let L = (L1 , . . . , Lk )T be the vector corresponding to the budgets. Using this terminology, a feasible solution to the k-budgeted matching problem is a matching x ∈ M such that T x ≤ L. We will first describe a procedure that returns a feasible matching of weight at least 2 w(OPT) − (k+3)k k+1 wmax , where wmax = max{w(e) | e ∈ E}. Similar to the algorithms seen in previous sections, it then suffices to perform a preprocessing step to guess the. 123.

Fig. 4 Obtaining a feasible matching y ∈ M for k-budgeted matching with w^T y ≥ w(OPT) − ((k+3)k²/(k+1)) w_max

⌈(k+3)k²/((k+1)ε)⌉ heaviest edges of the optimal solution. Our algorithm starts with an optimal basic solution x* to the natural LP relaxation max{w^T x | x ∈ P_M, ℓ^T x ≤ L}, rewrites x* as a convex combination of at most k+1 matchings that are then successively merged to obtain a feasible matching. The procedure, modulo the subprocedure Merge, is described in Fig. 4. Notice that since x* is a vertex of the polytope P_M with k additional linear constraints, it lies on a face of P_M of dimension at most k. Hence, by Carathéodory's Theorem, x* can indeed be expressed as a convex combination of at most k+1 matchings, as done in step 2 of the algorithm. Such a decomposition of x* can be obtained efficiently by standard techniques (see for example [37]). Notice that if fewer than k+1 matchings are needed in the decomposition of x*, then this corresponds to having some of the α_j equal to zero.

The procedure Merge(λ, y, μ, x) takes four arguments, where λ, μ ≥ 0, λ + μ = 1 and x, y ∈ M are matchings, and returns in polynomial time a matching y' that satisfies conditions (a) and (b) highlighted in the algorithm, i.e., (ℓ^i)^T y' ≤ (ℓ^i)^T (λy + μx) for i ∈ {1, . . . , k} and w^T y' ≥ w^T (λy + μx) − 2k·w_max. Intuitively, the Merge procedure takes a point z = λy + (1 − λ)x on an edge of the matching polytope P_M and returns a matching y' with weight and lengths similar to z. We give the details of Merge later, and first show the following, assuming that Merge returns a matching with the properties described in the algorithm.

Theorem 4.5 Given an efficient Merge procedure fulfilling the requirements described in the algorithm in Fig. 4, the algorithm in Fig. 4 is an efficient procedure that returns a feasible matching y for k-budgeted matching with w^T y ≥ w(OPT) − ((k+3)k²/(k+1)) w_max.

Proof The algorithm is clearly efficient.
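The merging loop of Fig. 4 can be summarized by the following skeleton; Merge is treated as a black box satisfying (a) and (b), the convex decomposition is assumed given, and this is a sketch rather than the paper's pseudocode:

    # Successively merge the matchings x_1, ..., x_{k+1} of the decomposition
    # x* = sum_j alpha_j x_j (alphas sorted increasingly, beta_j = alpha_1+...+alpha_j):
    # y_{j+1} = Merge(beta_j/beta_{j+1}, y_j, alpha_{j+1}/beta_{j+1}, x_{j+1}).
    def successive_merge(alphas, matchings, merge):
        order = sorted(range(len(alphas)), key=lambda j: alphas[j])
        alphas = [alphas[j] for j in order]
        matchings = [matchings[j] for j in order]
        y, beta = matchings[0], alphas[0]
        for j in range(1, len(matchings)):
            beta_next = beta + alphas[j]
            y = merge(beta / beta_next, y, alphas[j] / beta_next, matchings[j])
            beta = beta_next
        return y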

Theorem 4.5 Given an efficient Merge procedure fulfilling the requirements described in the algorithm in Fig. 4, the algorithm in Fig. 4 is an efficient procedure that returns a feasible matching y for k-budgeted matching with w^T y ≥ w(OPT) − ((k+3)k²/(k+1))·w_max.

Proof The algorithm is clearly efficient. To prove feasibility of y = y_{k+1}, we fix i ∈ {1, . . . , k} and show by induction on j ∈ {1, . . . , k+1} that

    (ℓ^i)^T y_j ≤ (1/β_j) · (ℓ^i)^T ( Σ_{r=1}^{j} α_r x_r ),    (4)

where β_j = α_1 + · · · + α_j (so that β_{k+1} = 1). Feasibility then follows from feasibility of x*, since

    (ℓ^i)^T y_{k+1} ≤ (1/β_{k+1}) · (ℓ^i)^T ( Σ_{r=1}^{k+1} α_r x_r ) = (ℓ^i)^T x* ≤ L_i.

Since y_1 = x_1, (4) trivially holds for j = 1. Furthermore, let j ∈ {1, . . . , k} and assume that (4) holds for all values less than or equal to j. Then, using property (a) of Merge, we obtain

    (ℓ^i)^T y_{j+1} ≤ (ℓ^i)^T z_{j+1} = (ℓ^i)^T ( (β_j/β_{j+1})·y_j + (α_{j+1}/β_{j+1})·x_{j+1} )
                    ≤ (1/β_{j+1}) · (ℓ^i)^T ( Σ_{r=1}^{j+1} α_r x_r ),

where the last inequality uses the induction hypothesis (4). This proves (4) and implies feasibility of y.

Similarly, to prove that y has large weight, we show by induction on j ∈ {1, . . . , k+1} that

    w^T y_j ≥ (1/β_j) · ( w^T ( Σ_{r=1}^{j} α_r x_r ) − ( Σ_{r=2}^{j} β_r ) · 2k·w_max ).    (5)

The desired result on the weight of y = y_{k+1} then follows by observing that β_r ≤ r/(k+1) (the r smallest of the k+1 multipliers sum to at most r/(k+1)), since α_1 ≤ · · · ≤ α_{k+1}; indeed, by (5) with j = k+1,

    w^T y_{k+1} ≥ w^T ( Σ_{r=1}^{k+1} α_r x_r ) − ( Σ_{r=2}^{k+1} β_r ) · 2k·w_max
               = w^T x* − ( Σ_{r=2}^{k+1} β_r ) · 2k·w_max
               ≥ w^T x* − Σ_{r=2}^{k+1} (r/(k+1)) · 2k·w_max
               ≥ w(OPT) − ((k+3)k²/(k+1)) · w_max,

where the last inequality uses w^T x* ≥ w(OPT), as x* is an optimal solution to a relaxation of the problem.
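For completeness, the arithmetic behind the last bound (a routine computation not spelled out above) is, in LaTeX notation,

    \sum_{r=2}^{k+1} \frac{r}{k+1}\, 2k\, w_{\max}
      = \frac{2k\, w_{\max}}{k+1}\left(\frac{(k+1)(k+2)}{2} - 1\right)
      = \frac{2k\, w_{\max}}{k+1} \cdot \frac{k(k+3)}{2}
      = \frac{(k+3)k^2}{k+1}\, w_{\max}.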

Again, (5) trivially holds for j = 1 since y_1 = x_1. Furthermore, for j ∈ {1, . . . , k} we obtain the following by using property (b) of Merge:

    w^T y_{j+1} ≥ w^T z_{j+1} − 2k·w_max = (1/β_{j+1}) · w^T ( β_j y_j + α_{j+1} x_{j+1} ) − 2k·w_max
                ≥ (1/β_{j+1}) · ( w^T ( Σ_{r=1}^{j} α_r x_r ) − ( Σ_{r=2}^{j} β_r ) · 2k·w_max + w^T ( α_{j+1} x_{j+1} ) ) − 2k·w_max
                = (1/β_{j+1}) · ( w^T ( Σ_{r=1}^{j+1} α_r x_r ) − ( Σ_{r=2}^{j+1} β_r ) · 2k·w_max ),

where the second inequality uses the induction hypothesis (5).   □

Hence, it remains to present an efficient Merge procedure fulfilling properties (a) and (b).

4.3.1 Merging procedure

Consider fixed input parameters x', x'' ∈ M and α ∈ [0, 1] of Merge(α, x', 1−α, x''). Merge works in two steps. First, it constructs a fractional point y ∈ [0, 1]^E that is structurally close to being a matching in a well-defined sense and satisfies

    w^T y = w^T ( α·x' + (1−α)·x'' ),  and    (6)
    (ℓ^i)^T y = (ℓ^i)^T ( α·x' + (1−α)·x'' )   ∀i ∈ {1, . . . , k}.    (7)

In a second step, y is transformed into a matching with the desired properties. More precisely, we want y to be a 2k-almost matching, defined as follows.

Definition 4.6 (r-almost matching) For r ∈ N, a vector y ∈ [0, 1]^E is an r-almost matching in G = (V, E) if it is possible to set at most r components of y to zero to obtain a matching. We denote by M_r the set of all r-almost matchings.

Given an r-almost matching y, we say that z is a corresponding matching (to y) if it is a matching obtained by setting at most r components of y to zero. Such a corresponding matching z can easily be found by first setting all fractional components of y to zero, and then computing a maximum cardinality matching in the resulting set of edges. Clearly, a corresponding matching z satisfies w^T z ≥ w^T y − r·w_max. Hence, to obtain a Merge procedure with the desired properties, it indeed suffices to present an efficient procedure that constructs a 2k-almost matching y satisfying (6) and (7): a corresponding matching z has lengths no larger than those of α·x' + (1−α)·x'', since z ≤ y componentwise and (7) holds, and

    w^T z ≥ w^T y − 2k·w_max = w^T ( α·x' + (1−α)·x'' ) − 2k·w_max,

by the above observation and (6). Hence, we now focus on efficiently obtaining a 2k-almost matching y satisfying (6) and (7).
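As a small illustration of the extraction step just described (not part of the paper's pseudocode), a corresponding matching can be obtained from an r-almost matching along the following lines; edges are assumed to be given as vertex pairs, and networkx is used for the maximum-cardinality matching.

    import networkx as nx

    def corresponding_matching(y, tol=1e-9):
        # y: dict mapping an edge (u, v) to its value in [0, 1] (an r-almost matching).
        # Drop all fractional components, then take a maximum-cardinality matching
        # among the edges kept at value 1; in total at most r components of y are
        # zeroed, so the returned matching z satisfies w(z) >= w^T y - r * w_max.
        G = nx.Graph()
        G.add_edges_from(e for e, val in y.items() if val > 1 - tol)
        M = nx.max_weight_matching(G, maxcardinality=True)   # maximum-cardinality matching
        return {tuple(sorted(e)) for e in M}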

To construct y, we start with x' and define a way to fractionally swap parts of x' with x''. For this we define a generalized symmetric difference. For two vectors z', z'' ∈ [0, 1]^E, we define their symmetric difference z' Δ z'' ∈ [0, 1]^E by (z' Δ z'')(e) = |z'(e) − z''(e)| for all e ∈ E. In particular, if z' and z'' are incidence vectors, then their symmetric difference as defined above corresponds to the symmetric difference in the usual sense. Recall that, when z' and z'' are matchings, z' Δ z'' consists of a set of node-disjoint paths and cycles.

Consider the paths and cycles in s = x' Δ x''. We number the edges {e_0, . . . , e_{τ−1}} in s such that any two consecutively numbered edges are either consecutive in some path/cycle or belong to different paths/cycles. This can easily be achieved by cutting each cycle, appending the resulting paths one after the other, gluing together the endpoints of the obtained path, and then numbering the edges of the resulting cyclic order consecutively. For t ∈ [0, τ], we define s(t) ∈ [0, 1]^E by

    (s(t))(e) = 1         if e = e_i with i ∈ {0, . . . , ⌊t⌋ − 1},
    (s(t))(e) = t − ⌊t⌋   if e = e_i with i = ⌊t⌋ and i ≤ τ − 1,
    (s(t))(e) = 0         otherwise.

Furthermore, for a, b ∈ [0, τ], we define s(a, b) ∈ [0, 1]^E by

    s(a, b) = s(b) − s(a)            if a ≤ b,
    s(a, b) = s(τ) − s(a) + s(b)     if a > b.

Hence, s(a, b) can be interpreted as a fractional version of a cyclic interval of edges in s. We denote by [a, b] the cyclic interval contained in [0, τ] defined by

    [a, b] = [a, b]               if a ≤ b,
    [a, b] = [a, τ] ∪ [0, b]      if a > b.

Notice that for a, b ∈ [0, τ], the vector y = x' Δ s(a, b) is a 2-almost matching, since to obtain a matching it suffices to set to zero the coordinates of y corresponding to e_⌊a⌋ and e_⌊b⌋. Analogously, for any family of k disjoint cyclic intervals [a_1, b_1], . . . , [a_k, b_k] ⊆ [0, τ], the vector

    y = x' Δ s(a_1, b_1) Δ · · · Δ s(a_k, b_k)    (8)

is a 2k-almost matching. We will find k disjoint cyclic intervals such that y as defined in (8) satisfies (6) and (7).
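The construction of the generalized symmetric difference and of the vectors s(t) and s(a, b) is purely arithmetic; the following minimal sketch (assuming the edges e_0, . . . , e_{τ−1} of x' Δ x'' are already given in the cyclic order described above) mirrors the definitions.

    from math import floor

    def sym_diff(z1, z2):
        # Generalized symmetric difference: value |z1(e) - z2(e)| on every edge e.
        return {e: abs(z1.get(e, 0.0) - z2.get(e, 0.0)) for e in set(z1) | set(z2)}

    def s_of_t(edges, t):
        # The vector s(t): a fractional "prefix" of total mass t of the cyclically
        # numbered edges e_0, ..., e_{tau-1}.
        tau = len(edges)
        val = {e: 0.0 for e in edges}
        for i, e in enumerate(edges):
            if i <= floor(t) - 1:
                val[e] = 1.0
            elif i == floor(t) and i <= tau - 1:
                val[e] = t - floor(t)
        return val

    def s_of_ab(edges, a, b):
        # The fractional cyclic interval s(a, b).
        tau = len(edges)
        sa, sb, stau = s_of_t(edges, a), s_of_t(edges, b), s_of_t(edges, tau)
        if a <= b:
            return {e: sb[e] - sa[e] for e in edges}
        return {e: stau[e] - sa[e] + sb[e] for e in edges}

A candidate vector y as in (8) is then obtained by folding sym_diff over x' and the vectors s(a_j, b_j) of the chosen intervals.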

We first prove non-constructively the existence of intervals [a_1, b_1], . . . , [a_k, b_k] that lead to a desired y, using the following slight generalization of a theorem of Stromquist and Woodall [40]. In the theorem below, a signed measure denotes the difference of two finite measures; furthermore, we call a measure non-atomic if every singleton has measure zero.

Theorem 4.7 Let d ≥ 2, and let μ_1, . . . , μ_d be non-atomic signed measures on [0, 1]. For each λ ∈ [0, 1] there is a set K_λ ⊆ [0, 1] that is the union of at most d − 1 cyclic intervals of [0, 1] and satisfies

    μ_i(K_λ) = λ·μ_i([0, 1])   ∀i ∈ {1, . . . , d}.

The difference between Theorem 4.7 and [40, Theorem 1] is that we allow signed measures instead of usual measures. Even though Theorem 4.7 is a direct generalization of Stromquist and Woodall's theorem, we provide a proof in Sect. 4.3.2 for completeness. Notice that Theorem 4.7 could equivalently be stated for the interval [0, τ] instead of [0, 1]. Based on Theorem 4.7 we obtain the following.

Theorem 4.8 There exist k disjoint cyclic intervals [a_1, b_1], . . . , [a_k, b_k] ⊆ [0, τ], some of which may be empty, such that the vector y = x' Δ s(a_1, b_1) Δ · · · Δ s(a_k, b_k) satisfies (6) and (7).

Proof We define the following k+1 signed measures μ_1, . . . , μ_{k+1} on [0, τ] by providing their values on subintervals of [0, τ]. For [a, b) ⊆ [0, τ], let

    μ_i([a, b)) = (ℓ^i)^T ( x' Δ s(a, b) − x' )   ∀i ∈ {1, . . . , k},
    μ_{k+1}([a, b)) = w^T ( x' Δ s(a, b) − x' ).

In particular, for any cyclic interval [a, b] ⊆ [0, τ], the quantities μ_i([a, b]) and μ_{k+1}([a, b]) measure the change in the ith length and in the weight, respectively, when replacing x' by x' Δ s(a, b). More generally, for k disjoint cyclic intervals [a_1, b_1], . . . , [a_k, b_k] and the vector

    y = x' Δ s(a_1, b_1) Δ · · · Δ s(a_k, b_k),    (9)

we have

    (ℓ^i)^T y = (ℓ^i)^T x' + μ_i( ∪_{j=1}^{k} [a_j, b_j] )   for i ∈ {1, . . . , k},  and
    w^T y = w^T x' + μ_{k+1}( ∪_{j=1}^{k} [a_j, b_j] ).    (10)

Furthermore, it is easy to check that the μ_i for i ∈ {1, . . . , k+1} are non-atomic signed measures. Hence, applying Theorem 4.7 with λ = 1 − α, there exist disjoint cyclic intervals [a_1, b_1], . . . , [a_k, b_k] ⊆ [0, τ] such that

    μ_i( ∪_{j=1}^{k} [a_j, b_j] ) = (1 − α)·μ_i([0, τ])   ∀i ∈ {1, . . . , k+1}.

Consider y defined as in (9) for those intervals. Combining the above equality with (10), we obtain

    (ℓ^i)^T y = (ℓ^i)^T x' + (1 − α)·μ_i([0, τ]) = (ℓ^i)^T x' + (1 − α)·(ℓ^i)^T ( x'' − x' )
              = (ℓ^i)^T ( α·x' + (1 − α)·x'' )   ∀i ∈ {1, . . . , k},

and similarly,

    w^T y = w^T x' + (1 − α)·μ_{k+1}([0, τ]) = w^T x' + (1 − α)·w^T ( x'' − x' ) = w^T ( α·x' + (1 − α)·x'' ),

as desired.   □

Theorem 4.9 A set of k cyclic intervals as described in Theorem 4.8 can be found efficiently.

Proof Notice that we can efficiently guess the integer parts ⌊a_j⌋ and ⌊b_j⌋, j ∈ {1, . . . , k}, of a set of cyclic intervals [a_1, b_1], . . . , [a_k, b_k] ⊆ [0, τ] guaranteed to exist by Theorem 4.8: there are only O(τ^{2k}) possibilities, and since τ is bounded by the number of vertices and k is a constant, this is polynomial. Furthermore, when the integer parts of the a_j, b_j for j ∈ {1, . . . , k} are fixed, the vector y = x' Δ s(a_1, b_1) Δ · · · Δ s(a_k, b_k) is linear in the a_j's and b_j's. Therefore (ℓ^i)^T y for i ∈ {1, . . . , k} and w^T y are linear in the a_j's and b_j's. Hence, finding values of the a_j's and b_j's, with predetermined integer parts, such that the resulting y satisfies (6) and (7), reduces to solving a linear problem of constant size, which can be done in constant time.   □

Summarizing the above discussion, we get the following.

Corollary 4.10 There is an efficient procedure Merge(α, x', 1 − α, x'') that, for α ∈ [0, 1] and x', x'' ∈ M, outputs a matching satisfying the two properties (a) and (b) required by the algorithm in Fig. 4.

This finishes the description of the algorithm in Fig. 4 and, together with Theorem 4.5, implies the following.

Corollary 4.11 The algorithm described in Fig. 4 is an efficient procedure that returns a feasible matching y for k-budgeted matching with w^T y ≥ w(OPT) − ((k+3)k²/(k+1))·w_max.

To obtain a PTAS for k-budgeted matching, it suffices to guess the ⌈(k+3)k²/((k+1)ε)⌉ heaviest edges of an optimal solution before applying the algorithm in Fig. 4.

Theorem 4.12 Let ε > 0. The following procedure is an efficient (1 − ε)-approximation for k-budgeted matching.

1. Guess the ⌈(k+3)k²/((k+1)ε)⌉ heaviest edges E_H of an optimal solution, and reduce the problem correspondingly.
2. Apply the algorithm in Fig. 4 to the reduced problem to get a matching E_L.
3. Return E_H ∪ E_L.
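A minimal sketch of this three-step procedure follows, paralleling the earlier matroid sketch; the enumeration realizes the guessing step, the Fig. 4 algorithm is passed in as a black box, and the helper names are illustrative only (for brevity, the case of an optimum with fewer than ⌈(k+3)k²/((k+1)ε)⌉ edges is omitted).

    from itertools import combinations
    from math import ceil

    def is_matching(edges):
        verts = [v for e in edges for v in e]
        return len(verts) == len(set(verts))

    def budgeted_matching_ptas(E, w, lengths, budgets, k, eps, fig4_algorithm):
        # E: list of edges (u, v); w and lengths[i]: dicts edge -> value; budgets: list of L_i.
        # `fig4_algorithm(E_rest, residual_budgets)` stands for the procedure of Fig. 4
        # run on the reduced instance; it returns a set of edges E_L.
        h = ceil((k + 3) * k * k / ((k + 1) * eps))     # number of heavy edges to guess
        best, best_w = None, float("-inf")
        for E_H in combinations(E, min(h, len(E))):     # guess the heaviest edges of OPT
            residual = [budgets[i] - sum(lengths[i][e] for e in E_H) for i in range(k)]
            if not is_matching(E_H) or any(b < 0 for b in residual):
                continue                                # infeasible guess
            wmin = min(w[e] for e in E_H)
            covered = {v for e in E_H for v in e}
            E_rest = [e for e in E if e not in E_H and w[e] <= wmin
                      and e[0] not in covered and e[1] not in covered]
            E_L = fig4_algorithm(E_rest, residual)
            cand = set(E_H) | set(E_L)
            cand_w = sum(w[e] for e in cand)
            if cand_w > best_w:
                best, best_w = cand, cand_w
        return best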

Proof Efficiency and feasibility are obvious. We have to show that w(E_H ∪ E_L) ≥ (1 − ε)·w(OPT). Let w'_max be the maximum weight of an edge in the reduced problem. Since, to build the reduced problem, we remove all edges of weight strictly larger than the smallest weight in E_H, we have

    w'_max ≤ w(e)   ∀e ∈ E_H.    (11)

Furthermore, let OPT' be an optimal solution to the reduced problem. Clearly,

    w(OPT) = w(E_H) + w(OPT').    (12)

Hence,

    w(E_H ∪ E_L) = w(E_H) + w(E_L)
                 ≥ w(E_H) + w(OPT') − ((k+3)k²/(k+1))·w'_max     (by Corollary 4.11)
                 ≥ w(E_H) + w(OPT') − ε·w(E_H)                    (by (11), since |E_H| = ⌈(k+3)k²/((k+1)ε)⌉ implies ((k+3)k²/(k+1))·w'_max ≤ ε·w(E_H))
                 ≥ (1 − ε)·( w(E_H) + w(OPT') )
                 = (1 − ε)·w(OPT)                                 (by (12)).   □

For completeness, the following section shows how Theorem 4.7 is obtained by following the proof of Stromquist and Woodall [40, Theorem 1].

4.3.2 A generalization of a theorem by Stromquist and Woodall

To prove Theorem 4.7, we use the same proof technique as [40], with the only difference that we replace the classical version of the Ham Sandwich Theorem by the following generalization.

Theorem 4.13 (Generalized Ham Sandwich Theorem) Suppose we are given d signed measures μ_1, . . . , μ_d on R^d that vanish on every hyperplane. Then there exists a (possibly degenerate) halfspace H in R^d such that

    μ_i(H) = (1/2)·μ_i(R^d)   ∀i ∈ {1, . . . , d},

where a degenerate halfspace is either ∅ or R^d.

Theorem 4.13 with the additional requirement that the μ_i be absolutely continuous was proven in [2]. Their proof follows a standard approach used in [41] to show the classic Ham Sandwich Theorem for (unsigned) measures, which is based on the Borsuk-Ulam Theorem [3]. However, the proof presented in [2] actually shows Theorem 4.13 and was only stated in a slightly weaker form requiring absolute continuity of the measures. For completeness, we replicate the proof of Stromquist and Woodall, in a slightly generalized version that uses Theorem 4.13, to show Theorem 4.7.

Proof (of Theorem 4.7, following [40]) We denote by A ⊆ [0, 1] the set of values λ ∈ [0, 1] for which the statement is true. We have to show that A = [0, 1]. This follows from the following four statements:

(a) 1 ∈ A,
(b) λ ∈ A ⇒ (1 − λ) ∈ A,
(c) λ ∈ A ⇒ λ/2 ∈ A,
(d) A is closed.

Indeed, (a)-(c) imply that A contains all dyadic rationals in [0, 1], which form a dense subset of [0, 1], and (d) then yields A = [0, 1].

Statement (a) follows by choosing K_1 = [0, 1], and (b) by setting K_{1−λ} = [0, 1]\K_λ. Furthermore, (d) follows by observing that the space of unions of at most d − 1 cyclic intervals is compact in a suitable topology; hence, for any converging sequence of values in A, the corresponding sets K_λ converge along a subsequence, and the limit set certifies that the limit value also belongs to A. It remains to prove (c).

Fix λ ∈ A and let K_λ ⊆ [0, 1] be a union of at most d − 1 cyclic intervals of [0, 1] such that μ_i(K_λ) = λ·μ_i([0, 1]) for i ∈ {1, . . . , d}. If K_λ ≠ [0, 1], we can assume that the origin is not contained in K_λ, for otherwise we can simply reparameterize the circular interval [0, 1]. Let f : [0, 1) → R^d be defined by f(t) = (t, t², . . . , t^d). We will apply the Generalized Ham Sandwich Theorem 4.13 to a family of measures whose support is included in the image of f. More precisely, we define d measures ν_1, . . . , ν_d on R^d by

    ν_i(B) = μ_i( f^{-1}(B) ∩ K_λ )   ∀i ∈ {1, . . . , d} and every Borel set B ⊆ R^d.

Notice that ν_i(R^d) = μ_i(K_λ) = λ·μ_i([0, 1]), and that the ν_i are signed measures. Furthermore, for any hyperplane X in R^d, f^{-1}(X) is a set of size at most d, and since the measures μ_i are non-atomic, this implies ν_i(X) = 0. Hence, we can apply the Generalized Ham Sandwich Theorem 4.13 to obtain a halfspace H ⊂ R^d such that ν_i(H) = (λ/2)·μ_i([0, 1]) for i ∈ {1, . . . , d}.

Let X be the hyperplane that is the boundary of H, and consider the complementary halfspace H' of H, i.e., H ∩ H' = X and H ∪ H' = R^d. Since ν_i(R^d) = λ·μ_i([0, 1]) for i ∈ {1, . . . , d}, we also have ν_i(H') = (λ/2)·μ_i([0, 1]) for i ∈ {1, . . . , d}. We will show that one can choose K_{λ/2} to be either K_H := f^{-1}(H) ∩ K_λ or K_{H'} := f^{-1}(H') ∩ K_λ. Notice that ν_i(H) = ν_i(H') = (λ/2)·μ_i([0, 1]) for i ∈ {1, . . . , d} can be rephrased as

    μ_i(K_H) = μ_i(K_{H'}) = (λ/2)·μ_i([0, 1])   ∀i ∈ {1, . . . , d}.

Hence, both K_H and K_{H'} have the desired mass with respect to the measures μ_1, . . . , μ_d. It remains to show that one of them is a union of at most d − 1 cyclic intervals. Since |f^{-1}(X) ∩ K_λ| ≤ d, the at most d − 1 intervals of K_λ are hit in at most d points by f^{-1}(X), thus subdividing K_λ into at most 2d − 1 intervals. These intervals are partitioned between K_H and K_{H'}; hence, one of K_H and K_{H'} consists of at most d − 1 intervals and can be chosen as K_{λ/2}.   □

Fig. 3 A PTAS for k-budgeted matroid independent set

Fig. 4 Obtaining a feasible matching y ∈ M for k-budgeted matching with w^T y ≥ w(OPT) − ((k+3)k²/(k+1))·w_max
