one possible way to go, but the systematic choice of lower or upper bounds of cuts on both sides of the constraints is somewhat debatable. Ammar (2009) recently studied a similar formulation of multiobjective linear programming problems with fuzzy random coefficients in the objective and constraints. Katagiri et al. (2004) handle fuzzy number comparisons in fuzzy random bottleneck optimisation using possibility and necessity of dominance. A similar formulation for multiobjective linear programming is proposed by Li et al. (2006). By nesting possibilistic programming inside chance-constrained programming, they transform the fuzzy stochastic constraints into equivalent deterministic ones. Likewise, Iskander (2005) used the standard chance-constrained approach, transforming stochastic fuzzy problems in the presence of fuzzy coefficients and random variables into their deterministic equivalents according to the four possibilistic dominance indices introduced by Dubois and Prade (1983). To solve the general problem, Aiche (1995) and Luhandjula (1996) proposed a semi-infinite approach that converts it into a stochastic problem, which can then be solved by chance-constrained programming (Charnes and Cooper 1959) or a two-stage programming method (Dantzig 1955). Luhandjula (2004) proposed an approach to transform constraints in the presence of fuzzy random variables into deterministic constraints by comparing intervals obtained from prescribed cuts of the fuzzy coefficients. Luhandjula and Gupta (1996) generalize robust programming with interval coefficients to the fuzzy stochastic framework, turning equality constraints into fuzzy inclusion constraints. These works are surveyed in Luhandjula (2006). Luhandjula and Joubert (2010) further investigate optimisation models in a fuzzy stochastic environment and approaches to convert them into deterministic problems, focusing on the Gaussian case.
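The idea of comparing intervals obtained from prescribed cuts of fuzzy coefficients can be illustrated with a small sketch. This is our own toy example, not the exact procedure of any of the cited works: `alpha_cut`, `interval_leq`, and the pessimistic comparison rule are illustrative assumptions, using triangular fuzzy numbers for simplicity.

```python
# Illustrative sketch (not the cited authors' exact method):
# take the alpha-cut of a triangular fuzzy coefficient, which yields
# an interval, then compare intervals in the pessimistic sense to
# decide whether a constraint holds at that cut level.

def alpha_cut(tri, alpha):
    """Alpha-cut of a triangular fuzzy number (l, m, u) as an interval."""
    l, m, u = tri
    return (l + alpha * (m - l), u - alpha * (u - m))

def interval_leq(a, b):
    # pessimistic comparison: every value in a is <= every value in b
    return a[1] <= b[0]

lhs = alpha_cut((1.0, 2.0, 3.0), alpha=0.8)   # coefficient "about 2"
rhs = alpha_cut((4.0, 5.0, 6.0), alpha=0.8)   # right-hand side "about 5"
assert interval_leq(lhs, rhs)
```

Higher values of alpha shrink the intervals toward the modal values, so a constraint that fails at a low cut level may hold at a higher one.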
Keywords: fixed wireless networks; capacitated network design; network reliability; chance-constrained programming; integer programming.
Fixed point-to-point wireless communication is a particular sector of the communication industry that holds great promise for delivering private high-speed data connections by means of microwave radio transmission [And03]. Microwave, in the context of this work, refers to terrestrial fixed point-to-point digital radio communications, usually employing highly directional antennas in clear line of sight and operating in licensed frequency bands from 6 GHz to 38 GHz. This makes microwave communications typically free of interference. The antennas used to transmit the signal into free space and receive it are usually located at the top of communication towers. Two radios are required to establish a microwave link, whose capacity can reach 500 Mbps nowadays, between two locations that can be several kilometers apart, up to 50 km.
the combination of (6) and (8) closes less of the optimality gap than the shifted cutset inequalities (8) alone. Such a phenomenon can occur due to varying internal CPLEX cuts.
5 Concluding Remarks
In this paper, we have presented a chance-constrained programming approach for the assignment of bandwidth in reliable fixed broadband wireless networks. We have proposed cutset inequalities and shifted cutset inequalities to improve the computational tractability of this problem. In our computational studies, we have discussed the optimality gap closed and compared the performance of the different cutset inequalities with and without internal CPLEX cuts. The results show that the combination of the cutset and shifted cutset inequalities closes the optimality gap by 41 % on average when the internal CPLEX cuts are enabled.
A curious institution existing in Italy will set us on the path of this inquiry by immediately connecting it to our subject. On certain banknotes one finds a text whose translation reads: "If you find these 1000 lire, write three more and you will be happy for seven years. If you break the game, you will be unhappy for seven years." This is indeed a chain formula reduced to its simplest expression, outside any reference to the religious sphere. The essential is nonetheless preserved, even clearer than in our other examples: the chance discovery of the note, like the receipt of a chain letter, places you in a situation of forced play. And it is, moreover, by the term "game" that the institution is designated, which allows it to be likened to the principle of the lottery. It should also be noted that, in this example, money itself becomes the medium through which another kind of good circulates: luck. An analogous situation exists in the chain of Saint Martin de Porres, which includes a coin glued onto the sheet of paper and is entitled, precisely: "The lucky peseta of Saint Martin de Porres". This implies that money itself, in its material form as currency, becomes a "lucky charm", and does so insofar as it is put into circulation. The other chains, at least those of Saint Anthony and of New England, are likewise placed under the sign of money insofar as they mention several large sums of money. They are at the same time a capital of luck that designates itself as such: "Luck is sent to you". And it is indeed luck itself that
In this paper we introduce new semidefinite programming relaxations to box-constrained polynomial optimization programs (P). For this, we first reformulate (P) into a quadratic program. More precisely, we recursively reduce the degree of (P) to two by substituting the product of two variables by a new one. We obtain a quadratically constrained quadratic program. We build a first immediate SDP relaxation in the dimension of the total number of variables. We then strengthen the SDP relaxation by use of valid constraints that follow from the quadratization. We finally show the tightness of our relaxations through several experiments on box polynomial instances.
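The recursive degree-reduction step can be sketched in a few lines. This is a minimal illustration of the generic quadratization idea, not the authors' implementation: monomials are represented as tuples of variable indices, and each pairwise substitution records a product constraint.

```python
# Minimal sketch of quadratization (illustrative, not the paper's code):
# while any monomial has degree > 2, replace a pair of variables
# (i, j) by a fresh variable w, recording the constraint x_w = x_i * x_j.

def quadratize(monomials, n_vars):
    """Reduce all monomials to degree <= 2 via substitution variables.
    Returns the rewritten monomials, the product constraints (w, i, j)
    meaning x_w = x_i * x_j, and the new total variable count."""
    subs = {}          # (i, j) -> index of the substitution variable
    constraints = []   # list of (w, i, j)
    result = []
    for mono in monomials:
        mono = list(mono)
        while len(mono) > 2:
            key = tuple(sorted((mono[0], mono[1])))
            if key not in subs:
                subs[key] = n_vars
                constraints.append((n_vars, key[0], key[1]))
                n_vars += 1
            mono = sorted([subs[key]] + mono[2:])
        result.append(tuple(mono))
    return result, constraints, n_vars

# the quartic monomial x0*x1*x2*x3 becomes quadratic after two substitutions
monos, cons, n = quadratize([(0, 1, 2, 3)], 4)
assert all(len(m) <= 2 for m in monos)
```

The recorded product constraints are exactly the relations from which the valid inequalities strengthening the SDP relaxation can be derived.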
The kHNDP has been extensively investigated when there is only one demand in the network. In particular, the associated polytope has received special attention. In , Huygens et al. study the kHNDP for |D| = 1, k = 2 and L = 2, 3. They give an integer programming formulation for the problem and show that the linear programming relaxation of this formulation completely describes the associated polytope. From this, they obtain a minimal linear description of that polytope. They also show that this formulation is no longer valid when L ≥ 4. In , Dahl et al. study the kHNDP when |D| = 1, L = 2 and k ≥ 2. They give a complete description of the associated polytope in this case and show that the problem can be solved in polynomial time using linear programming. In , Dahl considers the kHNDP for |D| = 1, k = 1 and L = 3. He gives a complete description of the dominant of the associated polytope. Dahl and Gouveia  consider the directed hop-constrained path problem. They describe valid inequalities and characterize the associated polytope when L ≤ 3. Huygens and Mahjoub  study the kHNDP when |D| = 1, k = 2 and L ≥ 4. They also study the variant of the problem where k node-disjoint paths of length at most L are required between two terminals. They give an integer programming formulation for these two problems when L = 4.
PR2 box        4.338  1.036  10.395
PR2 cylinders  0.866  0.202   2.322
We have presented a new approach based on Quadratic Programming that can effectively generate paths while dealing with a set of constraints. Our approach takes advantage of relaxation to enlarge the range of valid configurations during planning, and of an analytic description of the constraints to guide local motions. As a drawback, the proposed algorithm features many parameters which need to be tuned in order to guarantee good performance. The relaxation factor plays a relevant role: if the constraint-violation tolerance is not too strict, the search can proceed faster. Nonetheless, with an increased tolerance the robot would need a higher control action to re-project the samples back onto the manifold.
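The trade-off governed by the relaxation factor can be illustrated with a toy example. The constraint function, tolerance values, and projection below are our own hypothetical illustration of the general idea, not the paper's formulation:

```python
# Toy illustration of the relaxation idea (hypothetical constraint):
# the manifold constraint g(x) = 0 is relaxed to g(x) <= eps during
# the search, and accepted samples are later re-projected.

def g(x):
    # hypothetical constraint: points must lie on the unit circle
    return abs(x[0]**2 + x[1]**2 - 1.0)

def is_valid(x, eps):
    return g(x) <= eps

def project(x):
    # re-project a relaxed sample back onto the manifold g(x) = 0
    norm = (x[0]**2 + x[1]**2) ** 0.5
    return (x[0] / norm, x[1] / norm)

sample = (1.05, 0.0)                  # slightly off the manifold
assert not is_valid(sample, 0.01)     # rejected under a tight tolerance
assert is_valid(sample, 0.2)          # accepted under a relaxed tolerance
assert g(project(sample)) < 1e-9      # projection restores feasibility
```

A larger `eps` accepts more samples (faster search) but leaves the projection step more work to do, mirroring the control-action trade-off noted above.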
relying on the creation of an interministerial delegation.
Second avenue: the fight against poverty among young people far removed from quality employment. It seems necessary today to open our eyes to the precariousness affecting young people struggling with labour-market integration, whether they are unemployed, in involuntary part-time work, or in work-study contracts. Could we not envisage the establishment of a right to an integration allowance for young people aged 18 to 29 engaged in a second-chance dynamic? This commitment would be associated with a status granted to the young person, formalized by the signing of a "second-chance contract" within a mission locale. Third avenue: the massive mobilization of private employers. The aim is to lead them to question their prejudices and, above all, to cross the "Rubicon" of the diploma. We therefore suggest serious reflection on the advisability of quotas targeting young people aged 18 to 29 who hold the status of youths engaged in a second-chance dynamic. These quotas could be established by extending those for young people in work-study contracts, already in place in large companies. ■
between the solution of the original problem and the one where the measure µ has been replaced by ν.
Some numerical approaches. In the literature , chance-constrained optimal control problems have also been treated with other techniques, such as the Scenario Approach mentioned previously. Another alternative technique for solving chance-constrained problems is the Monte Carlo method. A Monte Carlo algorithm consists of repeatedly sampling the variables and parameters of a problem, treating them as random quantities, in order to obtain numerical results. This kind of approach can be very useful for problems involving a high number of dimensions, many degrees of freedom, or unknown probability distributions. The general procedure of a method belonging to the Monte Carlo class consists of performing the following steps:
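As a concrete illustration, the core Monte Carlo idea for chance constraints can be sketched as follows. The constraint function, disturbance model, and nominal decision are hypothetical choices of our own, not taken from the text:

```python
# Minimal Monte Carlo sketch (illustrative, not from the paper):
# estimate the probability that a chance constraint g(x, w) <= 0
# is violated under a random disturbance w, by repeated sampling.
import random

random.seed(0)

def g(x, w):
    # hypothetical constraint: the disturbed state must stay below 1
    return x + w - 1.0

def violation_probability(x, n_samples=100_000):
    violations = sum(1 for _ in range(n_samples)
                     if g(x, random.gauss(0.0, 0.1)) > 0)
    return violations / n_samples

p = violation_probability(x=0.8)
# with w ~ N(0, 0.1), violation requires w > 0.2, i.e. a 2-sigma event
assert 0.01 < p < 0.04
```

The estimate concentrates around the true probability (about 2.3 % here) as the number of samples grows, which is what makes the method attractive when the distribution is only accessible through sampling.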
on the theoretical probability bound could in practice imply taking unnecessarily costly operation planning decisions.
To conclude, we underline that both the theoretical upper bound and the empirically estimated probability values were found to decrease systematically as the size of the scenario set grows. These results suggest the potential to solve chance-constrained operation planning problems while taking into account grid flexibility in the style of sequential randomization . Indeed, in principle, one may algorithmically achieve a set target on the constraint violation probability by progressively increasing the size of the scenario set under consideration. We will explore this potential in the subsequent stage of this research effort. The first practical challenge to study further is how to optimally trade off the easier-to-evaluate, yet conservative, theoretical upper bound against the more accurate, yet computationally more costly, empirical estimation.
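The sequential idea of growing the scenario set until a violation target is met can be sketched with a toy problem. The `solve` and `estimate_violation` functions below are hypothetical stand-ins of our own, not the operation planning model discussed above:

```python
# Toy sketch of sequential scenario-set growth (our own illustration):
# enlarge the scenario set until an empirical estimate of the
# constraint-violation probability meets a prescribed target.
import random

random.seed(1)

def solve(scenarios):
    # toy scenario program: pick a capacity covering every sampled demand
    return max(scenarios)

def estimate_violation(decision, n_test=50_000):
    # empirical probability that a fresh uniform demand exceeds the decision
    return sum(random.random() > decision for _ in range(n_test)) / n_test

target, n = 0.05, 10
while True:
    decision = solve([random.random() for _ in range(n)])
    if estimate_violation(decision) <= target:
        break
    n *= 2  # progressively enlarge the scenario set
```

Each doubling of the scenario set tightens the decision, so the empirical violation estimate eventually drops below the target, echoing the monotone decrease reported above.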
several distributions (Gaussian, Poisson or mixed) to determine the nature of chance events (normally distributed, random shock, or perhaps a combination of both).
Fifth, a reciprocal view of chance, choice and causal conditions has the potential to unlock a Pandora’s box of fresh research questions: Are some organisations luckier than others, and if so, why? If Pasteur was right, and fortune favours the ‘prepared mind’, what does preparedness mean in the context of organisations? More generally, how do organisations respond to chance? What degree of inefficiency is tolerable or necessary to enable occasional exploitation in an exploration-type environment? What are the legitimacy-granting mechanisms that allow for the pursuit of certain research trajectories but not others? Sixth, a view of chance, choice and causal conditions as necessarily reciprocally implicated enables one to use multiple theoretical lenses to illuminate what is, after all, a key question in strategy: How may we account for performance variations within and between firms? For instance, the industrial organization viewpoint expressed in the Structure-Conduct-Performance paradigm is not incommensurable with the Resource-based View. After all, strategic choices are, for the most part, informed by industry and market characteristics. They help legitimise strategic decisions and decide the relevance of core assets and capabilities. Industry characteristics set the stage for firm effects, while choices and random events leave their imprint on industry. Environment-based and resource-based explanations are not irreconcilable but recursively implicated. This conclusion is consistent with the position adopted by Child (1997), in which rival theories, while perhaps difficult to reconcile in their own philosophical terms, are not incommensurable when applied to the analysis of strategy. The propositions outlined above call for the application of multiple lenses by suggesting how they might be related.
Finally, given our principal interest, namely that of deepening our understanding of causation in strategy (specifically the relationship between chance, choice and determinism), we hope to have made marginal progress towards that instant so skilfully described by William James: “Of some things we feel that we are certain…There is something that gives a click inside of us, a bell that strikes twelve, when the hands of our mental clock have swept the dial and meet over the meridian hour” (James, 1923: 13).
transformation of his tools was a key aspect of Jean’s work.
When he asked me to go with him to the Dogon country, I got the chance to share his passion and all of his amazing tricks with his camera, his lenses, and the equipment for the feedback session. Jean introduced me to the idea of itinerant cinema in the Bandiagara cliffs, with his favorite car, the Citroen 2CV, and equipped with a 16mm projector and a folding screen.