Thesis
submitted for the degree of
DOCTEUR EN INFORMATIQUE (Doctor in Computer Science)
(Decree of 7 August 2006)

Combinatorial optimization and dynamic environments

Candidate: Nicolas BORIA

JURY
Thesis advisor: Vangelis Th. PASCHOS
Professor at Université Paris Dauphine
Reviewers: Christian LAFOREST
Professor at Université Blaise Pascal, Clermont-Ferrand
Denis TRYSTRAM
Professor at Université de Grenoble
Examiners: Giorgio AUSIELLO
Professor at La Sapienza University, Rome, Italy
Cécile MURAT
Maître de conférences (Associate Professor) at Université Paris Dauphine
Peter WIDMAYER
Introduction

1 Dynamic environments: State of the Art
   Résumé
   1.1 Introduction
   1.2 Dynamic optimization and reoptimization
      1.2.1 Introduction
      1.2.2 Fully dynamic algorithms
      1.2.3 Reoptimization: General properties
      1.2.4 Reoptimizing NP-hard optimization problems
      1.2.5 Reoptimization of vehicle routing problems
      1.2.6 Conclusion
   1.3 Probabilistic Combinatorial Optimization
      1.3.1 Introduction
      1.3.2 Motivations and Applications
      1.3.3 Post-optimization and a priori optimization
      1.3.4 Formalism and objective function
      1.3.5 Main methodological issues
      1.3.6 Conclusion
   1.4 Conclusion
2 Reoptimization of Min Spanning Tree
   Résumé
   2.1 Introduction
   2.2 Node insertions
      2.2.1 Algorithm REOPT1+
      2.2.2 Algorithm REOPT2+
   2.3 Node deletions
      2.3.1 Algorithm REOPT1-
      2.3.2 Algorithm REOPT2-
   2.4 Conclusion
3 Reoptimization of hereditary problems
   Résumé
   3.1 Introduction
   3.2 Preliminaries
   3.3 Node insertion
      3.3.1 Max Independent Set
      3.3.2 Max k-Colorable Subgraph
      3.3.3 Max Pk-free Subgraph
      3.3.4 Max Split Subgraph
      3.3.5 Max Planar Subgraph
   3.4 Node deletion
      3.4.1 Max Independent Set
      3.4.2 General negative result
      3.4.3 Max k-Colorable Subgraph
      3.4.4 Restriction to graphs of bounded degree
   3.5 Conclusion
4 Probabilistic Min Spanning Tree
   Résumé
   4.1 Introduction
   4.2 Preliminaries
   4.3 Reoptimization and Min Spanning Tree
   4.4 Probabilistic Min Spanning Tree under CLOSEST_ANCESTOR
      4.4.1 The complexity of Probabilistic Min Spanning Tree
      4.4.2 Stability of the solutions
      4.4.3 Probabilistic Metric Min Spanning Tree
   4.5 Probabilistic Min Spanning Tree under modification strategy ROOT
      4.5.1 The complexity of Probabilistic Min Spanning Tree
      4.5.2 Stability of the solutions
      4.5.3 ROOT vs. CLOSEST_ANCESTOR
   4.6 Conclusion
5 Probabilistic Dominating Set
   Résumé
   5.1 Introduction
   5.2 Modification strategy and Functional
      5.2.1 The modification strategy M...
      5.2.2 ...and the computation of its functional
   5.3 Approximation results
      5.3.1 Preliminaries
      5.3.2 Minimal dominating set S as anticipatory solution
      5.3.3 Optimal dominating set S* as anticipatory solution
   5.4 Polynomial subcases
      5.4.2 Cycle Probabilistic Dominating Set
      5.4.3 Bounded Degree Tree Probabilistic Dominating Set
      5.4.4 Equi Tree Probabilistic Dominating Set
   5.5 Conclusion
Conclusion & Perspectives
   Conclusion
   Complexities and approximabilities of Π and RΠ
   Complexities of Π and PΠ(A)
   Reoptimization and probabilistic optimization: two complementary frameworks
   Perspectives
      Problems
      Dynamic models: Generalizations and variants

A General Definitions
This thesis, entitled "Combinatorial optimization and dynamic environments", belongs to the field of computer science, and more particularly to combinatorial optimization and the theory of computational complexity. We are interested here in methods for solving mathematical problems, more commonly called algorithms, which find, among a finite set of solutions, the one that optimizes (maximizes or minimizes, depending on the case) a given criterion. The design of such algorithms is fundamental, since it makes it possible to solve a number of problems arising in computer science or in industry (optimization of networks or supply chains, task scheduling, etc.).
Although these problems have finite numbers of solutions, it does not follow that the computing power of modern computers allows them to be solved efficiently. Indeed, given the complexity of some of the problems posed, it is often necessary to trade off solution quality against the computation time needed to find that solution. Thus, when finding the optimal solution with respect to the criterion, as mentioned above, would require far too much computation time, one prefers to settle for a so-called approximate solution, which can be found in a reasonable time. It is commonly accepted that a reasonable time is a time bounded by a polynomial function of the size of the problem instance, measured in number of elementary operations.
Such methods belong to the field of polynomial approximation.
While computation time is a pressing issue in combinatorial optimization, and corresponds to real concerns in industrial application domains (where it is often considered that slightly approximate information today is worth more than exact information tomorrow), another major problem arises from the real-world applications of the algorithms designed: the dynamic character of the information to be processed. How should a failure in a network be modeled? A new customer to insert into a delivery round? A new task to handle in a schedule? Must all these perturbed problems necessarily be considered as completely new problems, and treated as such? Can we not take advantage of the fact that these new problems are very similar to the base problems, which have already been solved? Furthermore, if one knows, for example on a computer network, the probability of failure of its components, can one not compute a base solution for this network that can then be reoptimized very efficiently? Finally, how should one handle information that is revealed continuously: must one wait until all the useful information has been revealed before starting to build a solution to the problem at hand? Or can one, with each new piece of information revealed, build the solution incrementally, the objective remaining to reduce computation time by taking advantage of the time that elapses between two successive pieces of information?
The first type of question refers to the problem known as reoptimization. This setting was introduced by C. Archetti, L. Bertazzi, and M. G. Speranza in an article published in 2003 [3], in which the three authors set out to analyze the following problem: knowing an optimal solution on a given instance I of the Traveling Salesman Problem (TSP), is it possible to easily generate (i.e., more easily than if no optimal solution on I were known) a solution on an instance I′ that is a slightly perturbed (and therefore very similar) version of I? The perturbations considered were the deletion or insertion of a single vertex, together with its incident edges. The authors showed that both problems are NP-complete, and therefore analyzed their polynomial-time approximability, which allowed them to exhibit an approximation ratio for each of them.
Since this seminal article, many problems in the class NP have been analyzed from this perspective, and even from slightly different angles. All the general and specific results are detailed in what follows.
As soon as one starts analyzing reoptimization methods and results, a question naturally arises: given a reoptimization algorithm, is it really necessary and optimal to have an optimal solution on the initial instance? And, going even further: might other types of initial solutions not be:
• more robust, on average or in the worst case, to perturbations?
• better suited to the reoptimization strategies considered?
• easier to analyze?
One then enters another type of problem, where the objective is no longer to reoptimize as well as possible once the perturbations have occurred, but to anticipate the perturbations by considering initial solutions that can easily be adapted to the new instances.
This is known as robust optimization and probabilistic optimization.
Finally, and still with a view to addressing real-world concerns: if the information of the problem to be solved does not arrive in a single block, but piece by piece, with a certain latency between each revealed piece, can this time be put to use to start computing a solution?
This is known as on-line optimization and fully dynamic optimization.
All these settings make it possible to model what we call dynamic environments, and define dynamic versions of classical (i.e. static) optimization problems, which differ from their static counterparts only in the information available at the moment the decision is made.
Dynamic problems can thus be divided into two large families: those where the dynamic character of the information represents a surplus of information with respect to the static problem, and those where it represents a deficit of information.
For example, reoptimization belongs to the first family, insofar as the objective is to generate a solution on a perfectly known instance I′, while additionally having at hand a solution (often assumed optimal) on a very similar instance I. The relevance of this additional information may vary, but in the worst case the information is of no use at all, and the dynamic problem is as hard to solve as its static version. The same goes for fully dynamic optimization, of which reoptimization is a special case.
Conversely, probabilistic optimization belongs to the second family, insofar as the objective is to generate a solution on an instance I′ that is only partially known: one knows that I′ is a subinstance of a perfectly known general instance I, and for each element of I, its probability of belonging to I′ is assumed to be known.
Obviously, any problem of the first family can be considered simpler than its static version, while any problem of the second family can be considered harder.
In this thesis, we propose to characterize more finely the impact of these surpluses and deficits of information in terms of complexity and approximability.
Among all the possible ways of modeling the dynamic character of the environment, we have chosen reoptimization and probabilistic optimization. Their perfect complementarity makes it possible to consider and understand a large part of the issues related to the dynamic character of the problems posed: the former regards the question of the base solution as settled (it is assumed optimal on the initial instance), the better to study the strategy to adopt for generating a new solution; conversely, the latter fixes the strategy to adopt (which then becomes an integral part of the problem to be solved) and, given this strategy, looks for the base solution that will be best suited, on average, to the potential new instances.
We first carried out a survey of these two settings, which allowed us to better identify the issues attached to each of them, as well as the techniques employed. We then produced a number of new results ourselves, by analyzing various optimization problems in dynamic environments.
The document is organized in five chapters. Chapter 1 presents a state of the art on dynamic environments, mainly focused on fully dynamic optimization, reoptimization, and probabilistic optimization. The next two chapters present the results obtained in reoptimization: Chapter 2 deals with the minimum-weight spanning tree, and Chapter 3 with four particular hereditary problems: Max k-Colorable Subgraph, Max Pk-free Subgraph, Max Split Subgraph, and Max Planar Subgraph. The last two chapters present the results obtained in probabilistic optimization: Chapter 4 deals with the probabilistic minimum spanning tree, and Chapter 5 with the probabilistic minimum dominating set. We finish with some conclusions and comparisons of the theoretical frameworks and corresponding results, and finally with the perspectives we intend to explore in the continuation of this thesis work.
Two appendices close this thesis document: the first presents a number of general definitions from complexity theory and graph theory (Appendix A), and the second the formal definitions of the various optimization problems mentioned in the thesis (Appendix B).
Dynamic environments: State of the Art

Résumé
In this chapter, we propose a literature review of the various optimization models that take into account the dynamic character of the problems to be solved. Classical optimization models generally fit into static frameworks in which all the data of a problem are perfectly known and not subject to change over time, so that a solution can be regarded as definitive. Many real situations, where information is often imperfect and/or changing, cannot be modeled within such frameworks.
To overcome this shortcoming of static models, other types of models have emerged, in which the dynamic character of the information becomes a determining element.
The literature review focuses in particular on the two models central to this thesis: reoptimization on the one hand (Section 1.2), and probabilistic optimization on the other (Section 1.3). These two sections are built as two independent states of the art, and the conclusion (Section 1.4) offers some thoughts on the possible links between these two models, as well as on their complementarity.
Section 1.2 begins with a history of dynamic models, and presents in detail the first type of model studied, from the 1970s onwards: fully dynamic optimization, where instances of problems, often polynomial ones, are modified iteratively (Subsection 1.2.2). In this setting, the objective is to design algorithms and data structures that make it possible to maintain an optimal solution after each iteration, in a time that is smaller on average than that of static algorithms.
Even though a few NP-complete problems have been analyzed in this setting (the objective then being to maintain a certain approximation ratio at each step), the fully dynamic model remains particularly suited to polynomial problems, and poorly suited to NP-complete ones, so much so that a new, more restrictive model was introduced in the early 2000s: reoptimization.
Where the fully dynamic model considers an unknown number of successive modifications, reoptimization considers a single, local modification, starting from an instance for which an optimal solution is known. Although more restrictive in terms of the number of modifications, reoptimization makes it possible to introduce instance modifications that are impossible in the fully dynamic setting: the latter only considers perturbations of the edge set (with, therefore, a fixed vertex set), whereas reoptimization allows any type of modification to be considered (edges and/or vertices).
We first present general results on the class of hereditary problems (Subsection 1.2.3), whose structure is particularly amenable to approximation in reoptimization. We then give an overview of the main results obtained on various NP-complete problems (Subsection 1.2.4), presenting separately the results on transportation problems, which have been particularly well studied in this setting (Subsection 1.2.5).
Section 1.3 is devoted to the presentation of probabilistic combinatorial optimization. Rather than a literature review presenting the main results obtained in this setting, this section tries to provide a clear understanding of the applications and issues attached to the model, illustrating the explanations with examples taken from the literature. Probabilistic optimization makes it possible to generate feasible solutions very quickly on instances I′ resulting from a probabilistic process: an initial instance I is associated with a vector of reals between 0 and 1 over its elements, which measure the probability of each element being part of the instance I′ that will actually have to be optimized.
In any case, for problems modeled by graphs, where the initial graph is denoted G(V, E), the problem will thus have to be solved quickly on a subgraph G[V′] induced by an unknown subset of vertices V′, where the membership of each vertex v_i in the subset V′ is decided by a Bernoulli trial with probability p_i. As explained in Subsection 1.3.2, this type of model can represent a wide variety of real problems in which probabilities play an essential role. Subsection 1.3.3 presents the different ways in which such situations can be handled algorithmically, with emphasis on the theoretical framework adopted in this thesis: a priori optimization. In this framework, a certain adaptation strategy is assumed to be fixed, which adapts a feasible solution on the initial instance to any potential subinstance. In this way, an expectation can be associated with every feasible solution on the initial instance (namely, the sum of the weights of the solutions adapted to each potential subinstance, weighted by the probabilities of the instances in question); such a solution is then called an a priori solution, and the objective becomes to determine a solution that optimizes this expectation. Subsection 1.3.4 explains the principle of a priori optimization in detail, and proposes a formal definition for a priori optimization problems.
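In symbols (our notation, following the standard a priori optimization literature rather than the text itself, with M the adaptation strategy, m(·,·) the value of a solution on an instance, and Pr[I′] the probability of subinstance I′), the expectation just described can be written as:

```latex
% Expectation (functional) attached to an a priori solution S:
E(I, S, \mathrm{M}) \;=\; \sum_{I' \subseteq I} \Pr[I'] \; m\bigl(I', \mathrm{M}(S, I')\bigr)
% With independent vertex probabilities p_i (Bernoulli trials), the
% probability of the subinstance induced by V' \subseteq V is:
\Pr[V'] \;=\; \prod_{v_i \in V'} p_i \prod_{v_i \notin V'} \bigl(1 - p_i\bigr)
```

The objective is then to find an a priori solution S optimizing E(I, S, M).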
Finally, Subsection 1.3.5 presents the various issues attached to the analysis of a probabilistic optimization problem. In particular, it is interesting to note that the membership of these problems in the class NPO is not trivial, even when the underlying problem is simple, since the objective function (i.e. the expectation) consists of a weighted sum over all potential subinstances, that is, 2^n terms in the general case. At first sight, the objective function is therefore not directly computable in polynomial time, and the membership of the problem in the class NPO is not trivial.
However, a deeper analysis of the adaptation strategy may lead to a compact (polynomial) reformulation of the expectation. This reformulation is a key element in the analysis of a probabilistic problem, since it conditions most of the analyses that can be carried out on the problem in question, whether in terms of complexity, approximation, or characterization of optimal solutions.
The conclusion puts into perspective the two settings presented in the chapter, offering some thoughts on how the results obtained in one of them can be put to use in the other.
1.1 Introduction

It is commonly known that many optimization problems are intractable, so that no efficient (i.e. polynomial) optimal algorithms are known for them. Although it has not been clearly proved that such algorithms do not exist, it is commonly believed and accepted that this is the case, and the whole of complexity theory today relies on this assumption.
The main approach to tackling these NP-hard problems consists of formulating approximation algorithms which run in polynomial time, and whose quality (measured by the so-called approximation ratio, i.e. the ratio between the approximate and the optimal solution values) is somehow guaranteed. This approach has been actively applied and analyzed for the past twenty years, leading to the development of many approximation algorithms, as well as a whole classification of problems based on their approximability.
At the same time, some new paradigms were investigated, mainly for practical applications, where problems are often neither static nor perfectly known in advance. These uncertain or dynamic aspects of the problems that need to be solved can be modeled and dealt with in various ways. This chapter focuses on two different and complementary kinds of dynamic models: Section 1.2 is focused on reoptimization (and its generalization, dynamic optimization), while Section 1.3 deals with probabilistic combinatorial optimization.
1.2 Dynamic optimization and reoptimization

1.2.1 Introduction
We present in this section an overview of the results achieved in the reoptimization paradigm, as well as the main techniques used in reoptimization algorithms. In the early 1980s, new computational frameworks were applied to some polynomial problems such as Min Spanning Tree [70] or Shortest Path [67, 139], where the goal was to maintain an optimal solution under more or less slight instance perturbations such as edge insertions, deletions, or weight modifications. These innovative approaches were soon followed by many works improving on their results, mainly focused on the data structures allowing complexity improvements in maintaining optimal solutions. Most of these results dealt with polynomial problems, and aimed at maintaining optimal solutions in fully dynamic situations. More precisely, the paradigm was a little different from what is known today as reoptimization, considering that the instances could be subject to many consecutive or simultaneous perturbations, and the data structures aimed at computing new optima on perturbed instances as fast as possible, in particular faster than an ex nihilo computation. Rather than reoptimization algorithms, the approach aimed at defining what were called fully dynamic algorithms. In fact, the results in this field concerned data structures rather than algorithms. Later on, some NP-hard problems such as Bin Packing were also analyzed in a fully dynamic setting, where the goal is to maintain a bound on the approximation ratio.
It is only in the early 2000s that the concept of reoptimization as it is known today emerged [3]. This new paradigm considers a two-step (initial and perturbed) environment, and is especially relevant when the problem to solve cannot be completely known in advance, and the time window between the moment when the full problem is revealed and the moment when the solution is needed is so short that an optimal static computation is impossible. In this case, which occurs very often in practical applications, it seems reasonable to devote a large amount of time to computing an optimal solution on the part of the instance which is known for sure, and then quickly adapt this solution to the full instance once it is revealed. Reoptimization algorithms consist of these quick adaptation methods.
In this new framework, many new problems and types of perturbations can be analyzed and solved. In particular, the application of the reoptimization paradigm to NP-hard problems has been intensively studied in the past few years. These studies also involve vertex-set perturbations, whereas before the introduction of this paradigm, only edge-set perturbations were handled. In many cases, maintaining an optimal solution in polynomial time proves to be impossible unless P=NP. Indeed, a polynomial optimal reoptimization strategy A could be used to solve any instance I in polynomial time, by starting with a trivial instance (for example an empty graph), and building the instance I incrementally in a polynomial number of steps. Applying the algorithm A at each step, one comes up with an optimal solution on I computed in polynomial time, which is impossible for NP-hard problems. On the other hand, it is straightforward to observe that a reoptimization problem is at least as easy as its static version, since one can always use a static algorithm to solve the reoptimization problem from scratch, as if no information on the initial optimum were available.
Finally, it is quite clear that, regarding the classical P vs. NP dichotomy, a given reoptimization problem belongs to the same complexity class as its static version. But this complexity heredity between a static optimization problem and its reoptimization version goes no further. In particular, one cannot extend this result to the approximability classes within NP (for example, a problem that is hard to approximate in its static version can be in APX in the reoptimization setting), so that research in this field has been mainly, and successfully, focused on the definition of approximation algorithms. Indeed, reoptimizing NP-hard problems can lead to two kinds of advances:
• either the reoptimization algorithm achieves the same solution quality (optimality or approximation ratio) as the best known static algorithm, but with a better running time,
• or the reoptimization algorithm achieves in polynomial time a better approximation ratio than the best one known for the static version.
In practice, most results achieved in the reoptimization setting consist of approximation algorithms which guarantee better approximation ratios than the static algorithms, or even approximation ratios that cannot be guaranteed in the static case unless P=NP.
The most spectacular example is probably the classical Max Independent Set problem, for which it has been proved that no algorithm can achieve an approximation ratio better than n^{ε−1} (i.e., asymptotically no better than returning a trivial solution consisting of a single vertex) unless P=NP. In the reoptimization setting (considering node insertions or deletions), the situation changes completely, and far better ratios can be guaranteed. This setting has also been successfully applied to various classical NP-hard problems such as several Scheduling problems [14, 15, 141], Steiner Tree [34, 42, 43, 65], or the Traveling Salesman Problem [3, 8, 40, 45]. In this section we discuss some general issues and properties concerning the reoptimization of NP-hard optimization problems, and we review some of the most interesting applications.
The section is organized as follows: Subsection 1.2.2 is devoted to describing fully dynamic algorithms and the corresponding data structures, and major results are presented for both polynomial and NP-hard problems. Subsection 1.2.3 deals with general definitions and properties of reoptimization. In Subsection 1.2.4, the computational efficiency of reoptimization applied to various NP-hard problems is discussed, and the latest results of this ongoing research field are presented, whereas Subsection 1.2.5 is specifically devoted to vehicle routing problems.
1.2.2 Fully dynamic algorithms
As explained in Subsection 1.2.1, the first problems studied in a dynamic setting (where instances of optimization problems are modified locally, and solutions are adapted given these input modifications) were rather easy (i.e. polynomial) problems, for which this additional feature can be considered as an opportunity to reduce the computational cost. As stated before, a static algorithm can always be used to solve a reoptimization problem, so that the reoptimization version of a given optimization problem is always at least as easy as its static version. In particular, if a static problem Π is polynomial, its reoptimization (and fully dynamic) version R(Π) is also polynomial, so that the only kind of result one can expect is a reduction of the computational cost.
Later on, the concepts of fully and semi-dynamic graphs and algorithms were also applied to NP-hard problems, where the goal is to maintain a bounded competitive ratio (the ratio between the optimal solution and the solution computed iteratively, along with each graph update), regardless of the nature or number of graph updates that might occur. In this case, the dynamic feature of graphs is clearly a generalization of the on-line model, and is considered as a challenge rather than as an opportunity, in the sense that designing dynamic algorithms which can maintain good solutions under any kind of graph perturbation is quite a hard task, somehow harder than computing good solutions from scratch.
Indeed, the main difference between polynomial and NP-hard problems regarding the fully dynamic setting is that, in the polynomial case, optimality is maintained after each update, and optimality at a given stage helps compute an optimal solution for the next stage more efficiently than computing this solution from scratch. By contrast, when one has to handle an NP-hard problem in a fully dynamic setting, one can only have an approximate solution at any time, and it is hard to ensure that the competitive ratio remains bounded from one stage to the next.
To our knowledge, only three NP-hard problems have been analyzed in a fully dynamic environment: Bin Packing [96], Min Vertex Cover [95], and Shortest Path in Planar Graphs.
Definitions

An algorithm for a given graph problem is said to be dynamic if it can maintain a solution for the problem as the graph undergoes modifications. Regarding the polynomial problems analyzed in this setting, the modifications considered are insertions or deletions of edges, or changes in the weight of some edges (when relevant). At each step, a more or less complex data structure is updated, and this data structure can be used at any time as input to compute a solution more efficiently than from the raw instance. The data structure acts as a pre-processed version of the instance, which one can use to compute optimal solutions faster. Of course, the process is of interest only when the data structure is easier to update (i.e. requires a smaller number of operations to be updated) than the optimum itself.
Update and query procedures
In fully dynamic settings, an update operation denotes a change in the data structure in response to an incremental change to the input, and a query is a request for some information about the current solution, using the underlying data structure. These two notions are fundamental in the analysis of dynamic algorithms, since the complexity improvement relies completely on them.
Indeed, dynamic algorithms consist of maintaining a specific data structure at each step, rather than computing an optimal solution at each step. Whenever one wishes to have an optimal solution, one can run the query procedure. Very often, a tradeoff has to be found between a structure that is easily updated and one that is easily queried. This is precisely why the complexities of both update and query operations are always analyzed.
Consider, for example, that one wants to maintain a minimum spanning tree in a dynamic graph using the classical Kruskal's algorithm [114]. To compute it from scratch, one needs to:
• sort all edges in increasing weight order, which takes O(m log(m)) operations,
• then, starting from an empty set, greedily insert minimum-weight edges into the solution, provided they do not create cycles with the current solution, which, using Tarjan's method, takes another O((m + n)α(m, n)) operations, where α is the functional inverse of Ackermann's function.
Now, consider the same problem in a fully dynamic setting, where the set of nodes is fixed, but where edges can be inserted, deleted, or modified at each step. To solve this problem more efficiently than recomputing an optimum ex nihilo after each graph modification, it suffices to maintain the sorted edge list from one stage to the next. Unlike building the sorted list from scratch (which takes O(m log(m)) operations), maintaining it takes only O(log(m)) operations per edge insertion, deletion, or modification: when an edge is deleted from the list, the list remains sorted; and when an edge is inserted (resp. modified), one only needs to insert it into (resp. reposition it within) the sorted list. Whenever one wants a solution, it only takes O((m+n)α(m, n)) operations to compute it from the sorted list (which is strictly better than O(m log(m)), the static complexity of Kruskal's algorithm).
Through this very simple example, one understands how maintaining an underlying data structure via very efficient update operations (in this case, maintaining the sorted list) makes it possible to compute a solution at any stage with a query operation, with a complexity better than that of a static algorithm.
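As a concrete illustration, here is a minimal Python sketch of this update/query split; the toy graph, names, and edge representation are our own, and note that `bisect.insort` only locates the position in O(log m): Python's list insertion then shifts elements, so a balanced search tree would be needed to actually reach the O(log m) update bound stated above.

```python
import bisect

def find(parent, x):
    """Union-find root with path halving (Tarjan-style)."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def query_mst(n, sorted_edges):
    """Query: a single Kruskal pass over the already-sorted edge list,
    O((m+n) alpha(m,n)) operations; no O(m log m) sort is needed."""
    parent = list(range(n))
    tree, weight = [], 0
    for w, u, v in sorted_edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:               # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, w))
            weight += w
    return tree, weight

# Updates: keep the edge list sorted across modifications.
edges = []
for w, u, v in [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]:
    bisect.insort(edges, (w, u, v))  # insertion: position found in O(log m)
edges.remove((3, 0, 2))              # deletion: the list stays sorted
tree, weight = query_mst(4, edges)
print(weight)                        # spanning tree weight of the toy graph
```

Running the query after each batch of updates, rather than re-sorting from scratch, is exactly the cost split described in the text.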
Amortized complexity analysis

We now formally define the problems considered. We consider a fully dynamic graph G over a fixed vertex set V, with |V| = n. m generally denotes the current number of edges, which is often assumed to be 0 or n(n − 1)/2 (i.e., an independent set or a clique) at the beginning of the algorithm.
Most of the time bounds presented are amortized, meaning that they are averaged over all operations performed. This complexity measure is particularly well suited to analyzing on-line or fully dynamic algorithms, since it captures features that worst-case analysis is unable to apprehend. Namely, in a dynamic setting, the algorithm might be designed to avoid encountering the worst case frequently, so that an average measure over all operations dilutes the occasional worst-case complexity among all the other, easily updated, cases. In fact, the analysis amounts to saving and spending time, just like money in a bank account. The amortized complexity is given by the minimal number of operations one needs at each stage, operations which will either be used or saved. Saving time when the simple cases occur makes it possible to reduce the number of operations needed to handle the hard cases, since some operations of the hard cases will, in a sense, be paid for with the operations saved previously.
To give a clearer idea of how these so-called amortized complexity analyses are built, we present a classical example: dynamically resizable arrays. The process consists of, starting from an array of size 1, adding n elements to this array, doubling its size whenever it becomes full. The worst case occurs when the array size is doubled, because it takes O(n) operations to do this, and considering that n elements are added, the worst-case complexity for the overall process is O(n²).

Here, consider that one saves two operations when the simple case occurs, namely adding an element when the array is not full. Basically, this case takes only one operation, plus two operations that are saved for later, so in all: three operations. Also consider that one spends all saved operations when the complex case occurs, namely adding an element when the array is full. Basically, this case takes m + 1 operations, considering that m is the size of the full array that has to be doubled, plus the two saved operations, so in all: m + 3 operations. Whenever the complex case occurs, m operations have been saved (m/2 new elements have been added since the last resize, each saving two operations); if one spends them all, one only needs 3 additional operations to handle the hard case. Thus, the amortized complexity is 3, so O(1), far better than the worst-case complexity O(n). Finally, the amortized analysis enables us to assert that the overall complexity for inserting n elements in a dynamic array is O(n) and not O(n²).
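The accounting above is easy to check by simulation; this small sketch counts elementary operations (writes and copies) while inserting n elements, and the total indeed stays below the amortized bound of 3 per insertion:

```python
def append_all(n):
    """Simulate inserting n elements into a doubling array,
    counting elementary operations (element writes and copies)."""
    capacity, size, ops = 1, 0, 0
    for _ in range(n):
        if size == capacity:      # complex case: double and copy over
            ops += size           # m copy operations
            capacity *= 2
        ops += 1                  # write the new element
        size += 1
    return ops
```

Since the copy costs 1 + 2 + 4 + ... sum to less than 2n, the total is always below 3n, matching the amortized analysis.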
A restriction of the fully dynamic setting: online optimization

It would be hard not to mention online optimization while surveying different models of dynamic environments in combinatorial optimization. However, considering the huge flow of literature tackling optimization problems in this framework, we will only say a few words about it, and refer the interested reader to full surveys, such as [2], or [143], focused on Scheduling problems.

Online models and algorithms have much in common with fully dynamic ones. Indeed, in the online framework, it is supposed that an instance I of a problem Π is revealed gradually, and an online algorithm OA is asked to maintain a feasible solution at all times. A natural requirement for OA is that it eventually produces a solution as close to OPT(I) as possible. To this point, an online problem is the same as a fully dynamic one, restricted to cases where no data is allowed to be deleted.

But the online model is actually much more restrictive. The main difficulty regarding online problems is that at each step, one must decide if the new data will be added to the solution or not. If it is not, then it is irrevocably lost, and cannot be taken back at a later stage of the online process. Thus, it is generally impossible to guarantee optimality, and most studies focus on competitive analysis.

Competitive analysis is a method designed especially for analyzing online algorithms, in which the performance of an online algorithm is compared to the performance of an optimal offline algorithm. An algorithm is said to be competitive if its competitive ratio - the ratio between the value of the solution it eventually returns and the value of an optimum returned by an offline algorithm - is bounded.

In competitive analysis, one imagines that the instance is built and revealed by an adversary that deliberately chooses difficult data, to maximize the competitive ratio. Classically, the adversary will try to prevent the online algorithm from including elements of the optimum, by designing a specific instance and data-sequence.

Online models help representing and optimizing numerous practical problems, and quite naturally, many NP-hard problems were tackled in this setting. A particular class of problems whose online versions were intensively discussed are Scheduling problems [1, 62, 132], but other problems such as Min Coloring [57, 82], Max Independent Set [81], TSP [38, 102], or Min Vertex Cover [60] were also discussed in online settings; the list is not exhaustive.
Polynomial problems in dynamic graphs

Since the first paper by Frederickson [70], presenting a data structure known as topology trees that handles the fully dynamic versions of the Connectivity and Maximum Spanning Forest problems in O(√m) operations per update, many improvements have been made on this result [64, 89, 92], and many other polynomial problems were tackled in the fully dynamic setting, such as Shortest Path [61, 75, 109], Min Spanning Tree [92, 145], 2-Edge Connectivity [64, 71, 90, 92], and Biconnectivity [87, 88, 91, 92]. Instead of detailing all these results, we will focus on two major problems, Connectivity and Shortest Path.
Connectivity problem  Given a graph G(V, E), the Connectivity problem consists of deciding whether the graph is connected or not. Considering the implications it has on all related problems (while updating data structures of various fully dynamic problems, one often has to know whether the structure is connected), it is not surprising that the Connectivity problem received specific attention in the fully dynamic setting. Many successive improvements were provided since the first results by Frederickson in the early 80's. We will sketch out the technique used by Holm et al. [92], providing the best complexity known for it, which also enabled the authors to improve computational costs for three other connectivity-related problems.
Theorem 1. ([92]) Given a graph G with m edges and n vertices, there exists a deterministic fully dynamic algorithm that answers connectivity queries in O(log n / log log n) time worst case, and uses O(log² n) amortized time per insert or delete.

Leaving aside the worst-case time for connectivity queries, which requires a very technical proof, we will focus on the O(log² n) amortized time, enhancing the previous n^{1/3} log n bound by Henzinger and King [89]. The idea is the following: using Frederickson's technique [70], one only has to (re)compute a spanning forest at each step, which drastically decreases the computational cost of connectivity queries. Keeping a spanning forest updated on a dynamic graph can be complicated only when an edge removal disconnects a tree of the spanning forest:
• When an edge (v, w) is inserted, either v and w are connected in the current spanning forest F (one can check this with low computational cost) and (v, w) is not added to the spanning forest, or v and w are not connected in F, and one must add (v, w) to F, to keep it maximal.

• When an edge (v, w) is deleted, it might disconnect the graph only if (v, w) ∈ F. In this case, one has to replace it with another edge, if any. In terms of worst-case complexity, this might be very costly, since one has to check every possible edge to be sure that there is no such reconnecting edge.
But recall that the measure we are interested in is amortized time. Holm et al. provide a method to decrease it down to O(log² n). Basically, the technique used to decrease the complexity is to assign a level l(e) ≤ l_max = ⌊log₂ n⌋ to each edge e of the graph. For each i, F_i denotes the subforest of F induced by edges of level at least i. Thus, F_{l_max} ⊆ ... ⊆ F_1 ⊆ F_0 = F.

At the beginning of the algorithm, every edge is assigned level 0, and so is every newly inserted edge. Edge levels increase by one whenever the edges belong to a tree that is disconnected due to an edge deletion, or each time they are candidates as reconnecting edges. Without getting too much into detail regarding how the algorithm maintains these properties true at all times, the main properties that bound the amortized cost per update are the following:
• F is a maximal (with respect to l) spanning forest of G, that is, if (v, w) is a non-tree edge, v and w are connected in F_{l(v,w)}.

• The maximal number of vertices in a tree in F_i is ⌊n/2^i⌋.

The first property bounds the number of edges to be considered as candidates for reconnecting whenever an edge e ∈ F is deleted, while the second ensures that l_max = ⌊log₂ n⌋, as stated earlier.

The authors also derive results for Maximum Spanning Forest (result derived directly), Min Spanning Tree (derived from an application of the method, but in a graph that is connected from the beginning, instead of an empty graph, and taking weights into account), 2-Edge Connectivity and Biconnectivity.
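The insertion side of the bookkeeping above (checking whether v and w are already connected in F before deciding whether the new edge joins the forest) can be sketched with a standard union-find structure. This toy version supports insertions and queries only; deletions are precisely the hard case that the level mechanism of Holm et al. is designed to handle. All names are illustrative:

```python
class IncrementalConnectivity:
    """Union-find with path halving: edge insertions and connectivity
    queries in near-constant amortized time.  Edge deletions are NOT
    supported -- that is what the level-based structure addresses."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.forest = []              # edges kept in the spanning forest F

    def _find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def insert(self, v, w):
        rv, rw = self._find(v), self._find(w)
        if rv == rw:
            return False              # already connected: non-tree edge
        self.parent[rv] = rw          # merge the two components
        self.forest.append((v, w))    # (v, w) joins the spanning forest
        return True

    def connected(self, v, w):
        return self._find(v) == self._find(w)
```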
All Pairs Shortest Path Problem  In a connected graph G(V, E), the All Pairs Shortest Path problem consists of finding shortest paths between all pairs of nodes of G, thus n(n − 1)/2 shortest paths. This problem was one of the very first to be analyzed in the dynamic setting [115, 125, 138]. Although the literature dealing with this topic has been rather rich, it is only in the early 90's that the first provably efficient dynamic algorithm was designed. Ausiello et al. [9] proposed a decrease-only shortest path algorithm for directed graphs having positive integer weights less than C: the amortized running time of their algorithm is O(Cn log n) per edge insertion (or weight decrease, which generalizes insertion).

As an example, we sketch the ideas of the best fully dynamic algorithm for the problem to our knowledge [61]. The authors enhance the previous best known results for both the increase-only and the fully dynamic cases.
Theorem 2. ([61]) In an increase-only sequence of Ω(m/n) operations, if shortest paths are unique and edge weights are non-negative, there exists a fully dynamic algorithm that answers each update operation in O(n² log n) amortized time, and each distance and path query in optimal time. The space used is O(mn).

The main idea enabling this complexity is the use of a specific notion, called locally shortest path, which applies to all paths Π_xy for which all proper subpaths are shortest paths. Let SP and LSP be the sets of shortest paths and locally shortest paths, respectively. One can directly affirm that SP ⊆ LSP, and it is easy to verify whether a path Π_xy ∈ LSP or not. A given path is locally shortest if and only if:

• it is a single edge, or

• both the path from the first vertex to the last but one vertex, and the path from the second vertex to the last, are shortest paths.

The authors prove that the update of both sets SP and LSP can be done in O(n² log n) amortized time, proving that no more than n² paths can stop being locally shortest per update, and showing how to compute the modified set SP after the update in an efficient way. Moreover, it is proved that there cannot exist more than mn locally shortest paths in a given graph, from which one derives the space bound immediately.

The method and underlying data structures are then adapted to the fully dynamic setting (handling both increases and decreases):
Theorem 3. ([61]) In a fully dynamic sequence of Ω(m/n) operations, if shortest paths are unique and edge weights are non-negative, there exists a fully dynamic algorithm that answers each update operation in O(n² log³ n) amortized time, and each distance and path query in optimal time. The space used is O(mn log n).

The technique used is basically the same, though the notion of locally shortest path is extended to the notion of locally historical path, i.e., paths whose proper subpaths are historical, meaning that they have been shortest paths at least once during the process. Of course, the number of locally historical paths being larger than the number of locally shortest paths, the amortized time per update goes up to O(n² log³ n).
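The locally-shortest condition is straightforward to test once all-pairs distances are known; the sketch below (graph representation and function names are illustrative) computes Floyd-Warshall distances and then checks the two-subpath condition on a given path:

```python
def all_pairs_dist(n, edges):
    """Floyd-Warshall on a weighted directed graph given as a dict
    {(u, v): w}; returns the n x n distance matrix."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in edges.items():
        d[u][v] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def is_locally_shortest(path, edges, d):
    """A path is locally shortest iff it is a single edge, or both its
    prefix (all but the last vertex) and its suffix (all but the
    first vertex) are shortest paths."""
    def weight(p):
        return sum(edges[(p[i], p[i + 1])] for i in range(len(p) - 1))
    if len(path) == 2:
        return (path[0], path[1]) in edges
    prefix, suffix = path[:-1], path[1:]
    return (weight(prefix) == d[prefix[0]][prefix[-1]]
            and weight(suffix) == d[suffix[0]][suffix[-1]])
```

Note that a single heavy edge is locally shortest even when it is not a shortest path, which is why SP ⊆ LSP is a strict inclusion in general.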
These two examples show how proper data structures can improve amortized time complexity when dealing with polynomial problems in a fully dynamic setting. A whole different picture appears when applying this setting to NP-hard problems.
NP-hard problems in dynamic graphs

Recall that, in the fully dynamic setting, the process always starts with trivial instances (for example, graph problems would start with independent sets, or cliques), which are modified locally at each step, and the goal is to maintain a given data structure at each step. This data structure helps computing good solutions more efficiently than when dealing with a raw instance.

In the previous subsection, we have presented the kind of data structures and results that were proposed in the literature when applying the fully dynamic setting to polynomial problems. In that case, the data structures helped computing optimal solutions more efficiently than computing new optima from scratch after each local modification. On the opposite, when applying this setting to NP-hard problems, the data structures will help maintaining approximate solutions.

Moreover, when applying this setting to NP-hard problems, one cannot expect better approximation ratios than in the static case. Indeed, any instance of a given problem Π can be built out of a trivial instance modified locally a polynomial number of times. Thus, a given data structure maintaining an approximation ratio ρ after each local modification can also be seen as a static algorithm providing ρ-approximate solutions: starting from a trivial instance, modifying this instance a polynomial number of times, and updating the data structure after each modification, one can get a ρ-approximate solution on any instance in polynomial time.

Thus, in what follows, we will present dynamic algorithms which aim at maintaining approximate solutions more efficiently than static approximation algorithms, but without better approximation ratios.

Approximation algorithms in this setting were proposed for only three NP-hard problems: Bin Packing [96], Min Vertex Cover [95], and Shortest Path in Planar Graphs [110], which are all rather well approximated problems. In the following, we will provide some details for the first two problems.
Bin Packing Problem  One of the most elementary and easy to approximate problems, the Bin Packing problem consists, given a list of items L = a_1, a_2, ..., a_n of sizes s(a_i) ∈ (0, 1], of finding the minimum k so that all the items fit into k unit-size bins. This problem is known to be hard to approximate within 3/2 − ε: if such an approximation existed, one could partition n non-negative numbers into two sets of minimal size in polynomial time, and this problem is also known to be NP-hard. However, one understands that this inapproximability occurs only in instances with rather small optima, so that approximation algorithms providing better and better asymptotical approximation ratios were proposed for this problem. The problem was shown to be asymptotically in PTAS [106], and the best practical algorithm provides an asymptotical approximation ratio of 71/60 in O(n log n) [105], generating a solution that uses no more than 71/60·OPT + 1 bins, where OPT is the optimal number of bins.

The main problem when dealing with Bin Packing in a fully dynamic setting is that classical offline methods do not resist insertion of new items, since these methods rely on sorting all the items in decreasing size order, and then using the sorted list to build the packing. For instance, placing items in the first bin where they fit provides an 11/9 asymptotical approximation ratio using a sorted item list. It is obvious that this kind of method is not adapted to the dynamic setting.
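The offline method just mentioned - first-fit on a list sorted in decreasing size order, usually called first-fit decreasing - can be sketched as follows; one sees immediately that a newly inserted item would have to be slotted into the sorted order and the whole packing rebuilt, which is what makes the approach unsuitable for dynamic updates:

```python
def first_fit_decreasing(sizes):
    """Offline bin packing: sort items in decreasing size order, then
    place each item into the first bin where it fits, opening a new
    unit-size bin when none fits.  Asymptotically 11/9-approximate."""
    bins = []                              # each bin: list of item sizes
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= 1.0:          # first bin with enough room
                b.append(s)
                break
        else:
            bins.append([s])               # no bin fits: open a new one
    return bins
```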
Using an adaptation of Johnson's approximation algorithm [103, 104], Ivković and Lloyd [96] provide a fully dynamic algorithm achieving a competitive ratio almost as good as that of the best static approximation algorithm:

Theorem 4. ([96]) The fully dynamic bin packing algorithm MMP (Mostly Myopic Packing) is asymptotically 5/4-competitive and requires O(log n) time per Insert/Delete operation.

The idea of Johnson's algorithm is to group items into five size groups: Big, Large, Small, Tiny and Minuscule. Big contains items having size in ]1/2, 1] and Minuscule items having size in ]0, 1/5] (the other thresholds being logically 1/3 and 1/4).

Intuitively, this structure is much more adapted to the dynamic setting than a sorted list of all items. Ivković and Lloyd's algorithm achieves its competitive ratio by using the technique whereby the packing of an item is done with total disregard for already packed items of a smaller size (i.e., belonging to smaller-size groups). Each of these myopic packings may then cause several smaller items to be repacked (in a similar fashion). Indeed, an item might be packed in a bin where it does not fit, due to the presence of smaller size items in the bin, which then need to be repacked in other bins. With some additional sophistication to avoid certain bad cases where most of the smaller size items need to be repacked in a cascading sequence, the number of items to be repacked is bounded by a constant, thus bounding both the competitive ratio and the complexity.
Min Vertex Cover Problem  A vertex cover of a graph G(V, E) is a set of vertices such that each edge of the graph is incident to at least one of them. The problem of finding a minimum vertex cover is paradigmatic in computer science and is approximable within ratio 2. Indeed, Gavril [12] proved that taking all endpoints of a maximal matching results in a 2-approximate vertex cover.

This method happens to be rather dynamic-friendly, but can hardly handle some pathological cases in sparse graphs: while insertions of edges, and deletions of edges not in the current maximal matching M, can both be handled in O(1), the deletion of an edge of M might require scanning the whole vertex set in order to recompute a maximal matching, thus requiring O(m) operations, which is also the offline complexity. However, this dynamic method behaves rather well in dense graphs, and will be referred to as Clean in what follows.
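Gavril's static 2-approximation is short enough to state in full (the function name is illustrative); the factor 2 follows because any optimal cover must contain at least one endpoint of every matching edge:

```python
def matching_vertex_cover(edges):
    """Gavril's 2-approximation: greedily build a maximal matching,
    then return the set of all matched endpoints.  Every edge is
    covered, and any optimal cover picks at least one endpoint per
    matching edge, hence at most a factor 2 is lost."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))     # this edge joins the matching
            matched.update((u, v))
    return matched                      # the vertex cover
```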
Ivković and Lloyd [95] propose to maintain at all times an additional bit for each vertex, indicating if the vertex is matched or unmatched in the current maximal matching. This enables reducing the worst-case complexity of recomputing a maximal matching from O(m) to O(1), while O(m) operations are needed to keep the lists of matched and unmatched vertices updated. The key idea of the Dirty method is to update these lists only once in a while, thus reducing the amortized complexity while keeping a bounded number of inaccuracies in the lists. This algorithm behaves well in sparse graphs.

But actually, in the dynamic setting, the graph might turn from sparse to dense and vice versa after a certain amount of updates, so that both Clean (improving complexity in dense graphs) and Dirty (improving complexity in sparse graphs) might end up confronted with their respective pathological cases. The overall algorithm of Ivković and Lloyd, denoted B1, frequently tests the density of the graph (which evolves along with each update), and decides to use either Clean or Dirty, depending on whether the current graph is dense or sparse, in order to ensure an overall enhanced complexity:

Theorem 5. ([95]) B1 is a 2-competitive fully dynamic approximation algorithm for vertex cover (thus, it is approximation-competitive with the standard offline algorithm). Further, both Insert and Delete operations require O((n + m)^{3/4}) amortized running time.

It is clear that maintaining a bounded competitive ratio in the fully dynamic setting is quite a hard task when dealing with NP-hard problems. This difficulty arises from needing to compute an approximate solution based on another approximate solution, and this seems to be possible only in very specific cases, and with well approximable NP-hard problems. To cope with this resistance of some NP-hard problems to dynamic approximation, and also to answer numerous practical applications, a restriction of the fully dynamic setting was introduced: reoptimization.
1.2.3 Reoptimization: General properties

As mentioned earlier, if one wishes to get an approximate solution on the perturbed instance, running a static approximation algorithm on the modified instance is always possible. In other words, reoptimizing is at least as easy as approximating, since at worst, the information on the initial optimum can remain unused, which amounts to the static problem. Thus, the goal of reoptimization is to determine if it is possible to fruitfully use our knowledge of the initial instance in order to:

• either achieve better approximation ratios,
• or devise faster algorithms,
• or both.

In this subsection, we present some general definitions and results dealing with reoptimization properties of some NP-hard problems, many of which have already been presented in [6]. We first give some general approximation results on the class of hereditary problems, under node insertion (results regarding the deletion case can be found in chapter 3). Then, we discuss the differences between weighted and unweighted versions of classical problems, and finally present some ways to achieve hardness results in reoptimization, using the example of Min Coloring.

Before presenting properties and results regarding reoptimization problems, we first give formal definitions of reoptimization problems, instances, and approximate reoptimization algorithms:
Definition 1. An optimization problem Π is given by a quadruple (I_Π, Sol_Π, m_Π, goal(Π)) where:

• I_Π is the set of instances of Π,
• given I ∈ I_Π, Sol_Π(I) is the set of feasible solutions of I,
• given I ∈ I_Π and S ∈ Sol_Π(I), m_Π(I, S) denotes the value of the solution S of the instance I; m_Π is called the objective function,
• goal(Π) ∈ {min, max}.

Definition 2. A reoptimization problem RΠ is given by a pair (Π, R_RΠ) where:

• Π is an optimization problem as defined in Definition 1,
• R_RΠ is a rule of modification on instances of Π, such as addition, deletion or alteration of a given amount of data. Given I ∈ I_Π and R_RΠ, modif_RΠ(I, R_RΠ) denotes the set of instances resulting from applying modification R_RΠ to I. Notice that modif_RΠ(I, R_RΠ) ⊂ I_Π.

For a given reoptimization problem RΠ(Π, R_RΠ), a reoptimization instance I_RΠ of RΠ is given by a triplet (I, S, I′), where:

• I denotes an instance of Π, referred to as the initial instance,
• I′ denotes an instance of Π in modif_RΠ(I, R_RΠ); I′ is referred to as the perturbed instance.

For a given instance I_RΠ(I, S, I′) of RΠ, the set of feasible solutions is Sol_Π(I′).

Definition 3. For a given reoptimization problem RΠ(Π, R_RΠ), a reoptimization algorithm A is said to be a ρ-approximation reoptimization algorithm for RΠ if and only if:

• A returns a feasible solution on all instances I_RΠ(I, S, I′),
• A returns a ρ-approximate solution on all reoptimization instances I_RΠ(I, S, I′) where S is an optimal solution for I.

Note that Definition 3 is the most classical definition found in the literature, as well as the one used in this chapter. However, an alternate (and more restrictive) definition exists (used for example in [34, 42, 43]), where a ρ_1-approximation reoptimization algorithm for RΠ is supposed to ensure a ρ_1·ρ_2 approximation on any reoptimization instance I_RΠ(I, S, I′) where S is a ρ_2-approximate solution in the initial instance I.
Reoptimizing hereditary problems under vertex insertion

A property P on a graph is hereditary if the following holds: if the graph satisfies P, then P is also satisfied by all its induced subgraphs. Following this definition, stability, planarity, and bipartiteness are three examples of hereditary properties. On the opposite hand, connectivity is not a hereditary property, since there might exist some subsets of G whose removal disconnects the graph. Now, let us define problems based on hereditary properties.

Definition 4. Let G = (V, E, w) be a vertex-weighted graph. We call Hered the class of problems consisting, given such a graph, of finding a subset of vertices S such that G[S] satisfies a given hereditary property and that maximizes w(S) = Σ_{v∈S} w(v).

For instance, Max Weighted Independent Set, Max Weighted Induced Bipartite Subgraph, and Max Weighted Induced Planar Subgraph are three classical problems in Hered that correspond to the three hereditary properties given above.

In what follows, G will denote the initial graph, OPT a known optimal solution on G, G′ the modified instance (resulting from a local modification on G), and OPT′ an optimal solution on G′. Under vertex insertion (where G is a subgraph of G′), there are three powerful properties that one can use when reoptimizing problems in Hered:
i- the initial optimum OPT remains a feasible solution in the modified graph G′; this derives directly from the definition of a hereditary property.

ii- the part of the new optimum OPT′ induced by vertices of the initial graph cannot exceed the initial optimum; otherwise, this part would be a better solution than the initial optimum on the initial graph.

iii- a single node always verifies a hereditary property; this also derives directly from the characterization of hereditary properties in terms of forbidden minors.

Considering these three properties, one can formulate a general algorithm to approximate any weighted problem in Hered within ratio 1/2. Let R1 denote the generic algorithm which consists of returning the best solution between the initial optimum OPT (noted S_1) and the single inserted node x (noted S_2):

Proposition 1.1. ([6]) Under vertex insertion, R1 approximates any problem Π in Hered within ratio 1/2 in constant time.

The result is quite straightforward when considering the three properties stated above: while properties (i) and (iii) assert that R1 returns a feasible solution, property (ii) can be reformulated as the following bound:

w(S_1) ≥ w(OPT′) − w(S_2)    (1.1)

from which one derives directly the approximation ratio.
Recall that S_2 consists of a single node, so that one should be able to complete it with some other vertices of the graph. In particular, one could run an approximation algorithm on the remaining instance after taking x. Consider for instance Max Weighted Independent Set, and revisit the proof of the previous proposition. If OPT′ does not contain x, then the initial solution OPT is optimal. If, on the opposite, OPT′ contains x, then consider the remaining graph after having removed x and its neighbors. Suppose that one uses an approximation algorithm which guarantees a ρ-approximation ratio on the remaining graph, and that one adds x to this approximate solution. Denote this generic algorithm R2. Then the so-obtained solution S′_2 verifies:

w(S′_2) ≥ ρ(w(OPT′) − w(x)) + w(x) = ρw(OPT′) + (1 − ρ)w(x)    (1.2)

On the other hand, it still holds that:

w(S_1) ≥ w(OPT′) − w(x)    (1.3)

Denoting by S the best solution among S_1 and S′_2, adding (1.2) and (1.3) with coefficients 1 and (1 − ρ), one gets:

w(S) ≥ w(OPT′) / (2 − ρ)    (1.4)

which is better than ρ.
The problem is to define what the remaining instance after taking x is. This notion is strongly related to the nature of the hereditary property. To be more precise, in order to run the method that we just described, one must be able to define a subset of vertices such that the remaining instance after taking x is a graph in which no node can violate the hereditary property when put together with x. If such a set is easy to define when dealing - for instance - with independent sets, it becomes rather vague when dealing with more complex hereditary properties, such as planarity or bipartiteness. The question can find elements of answer when considering hereditary properties in terms of forbidden minors. These questions are discussed in chapter 3, where specific approximation algorithms, as well as inapproximability bounds, are proposed for various hereditary problems.
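For Max Weighted Independent Set, where the remaining instance is simply the graph minus x and its closed neighborhood, the generic scheme of R1 and R2 can be sketched as follows; the greedy routine stands in as a placeholder for the ρ-approximation algorithm, and all names are illustrative:

```python
def greedy_wis(graph, weights, allowed):
    """Placeholder rho-approximation: repeatedly pick the heaviest
    allowed vertex and discard its neighbors."""
    candidates, solution = set(allowed), set()
    while candidates:
        v = max(candidates, key=lambda u: weights[u])
        solution.add(v)
        candidates -= {v} | graph[v]
    return solution

def reopt_insert(graph, weights, opt_initial, x):
    """R2 for Max Weighted Independent Set under insertion of x:
    compare S1 = the initial optimum (still feasible by heredity)
    with S2' = {x} plus an approximate solution on the graph minus
    x and its neighborhood."""
    s1 = set(opt_initial)                       # feasible in G' (heredity)
    remaining = set(graph) - {x} - graph[x]     # vertices compatible with x
    s2 = {x} | greedy_wis(graph, weights, remaining)
    w = lambda s: sum(weights[v] for v in s)
    return s1 if w(s1) >= w(s2) else s2
```

The returned solution is the best of the two candidates, exactly as in the derivation of equation (1.4).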
Even if not directly applicable to problems in Hered (most of them have been proved to be inapproximable unless P=NP [116]), the technique is rather general, and can find applications in the reoptimization of many problems. We illustrate it on two well-known problems: Max Weighted Sat (Theorem 6) and Min Vertex Cover (Theorem 7). Given a conjunction of weighted clauses over a set of binary variables, the Max Weighted Sat problem asks for the truth assignment of variables maximizing the sum of weights of satisfied clauses.

Theorem 6. ([6]) Under the insertion of a clause, reoptimizing Max Weighted Sat is approximable within ratio 0.81.
Consider a conjunction of clauses ϕ over a set of binary variables, each clause being given with a weight, and let τ* be an initial optimal solution. Consider that a new clause c = (l_1 ∨ l_2 ∨ ... ∨ l_k) (where l_i is a literal of variable x_i, i.e., either x_i or x̄_i) has weight w(c). The modified formula is thus given by ϕ_c = ϕ ∪ c. k different solutions τ_i, i = 1, ..., k are computed. Each τ_i is built as follows:

- set l_i to true;
- delete from ϕ all satisfied clauses;
- apply a ρ-approximation algorithm on the remaining instance (note that the clause c is already satisfied); together with l_i, this gives a particular solution τ_i.

Finally, return the best solution among all τ_i's and the initial optimum τ*.
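The clause-insertion scheme can be sketched as follows; the brute-force routine stands in for the ρ-approximation algorithm (it is exponential, so for illustration only), and the clause/literal encodings are assumptions of this sketch:

```python
from itertools import product

def sat_weight(formula, assignment):
    """Total weight of clauses satisfied by a {var: bool} assignment;
    a clause is a list of (var, polarity) literals."""
    return sum(w for clause, w in formula
               if any(assignment.get(v) == pol for v, pol in clause))

def best_assignment(formula, variables, fixed):
    """Stands in for the rho-approximation: exhaustively optimize the
    unfixed variables, keeping the fixed literal true."""
    free = [v for v in variables if v not in fixed]
    best, best_w = None, -1
    for bits in product([False, True], repeat=len(free)):
        a = dict(fixed)
        a.update(zip(free, bits))
        w = sat_weight(formula, a)
        if w > best_w:
            best, best_w = a, w
    return best

def reopt_insert_clause(formula, variables, tau_star, new_clause, w_c):
    """One candidate per literal of the inserted clause (that literal
    set to true), plus the initial optimum; return the best of them."""
    new_formula = formula + [(new_clause, w_c)]
    candidates = [dict(tau_star)]
    for v, pol in new_clause:
        candidates.append(best_assignment(new_formula, variables, {v: pol}))
    return max(candidates, key=lambda a: sat_weight(new_formula, a))
```

Fixing a literal of c to true and optimizing the rest is equivalent, for an exact or approximate solver, to deleting the clauses that literal satisfies and solving the remaining instance, as in the description above.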
As previously, if the optimal solution τ*_c on the modified instance does not satisfy c, then τ* remains optimal for the new problem. Otherwise, at least one literal in c, say l_i, is true in τ*_c. Considering that l_i is true in τ_i, it is easy to see that:

w(τ_i) ≥ ρ(w(τ*_c) − w(c)) + w(c) = ρw(τ*_c) + (1 − ρ)w(c)    (1.5)

Once more, as in the general technique described above:

w(τ*) ≥ w(τ*_c) − w(c)    (1.6)

So, equation (1.4) holds for Max Weighted Sat also. Taking into account that this problem is approximable within ratio ρ = 0.77 [5], the claimed result is concluded.

Let us now focus on a minimization problem, namely Min Vertex Cover. Given a vertex-weighted graph