min and max function

Top PDF results for "min and max function":

Pseudo-polynomial algorithms for min-max and min-max regret problems

Keywords: min-max, min-max regret, computational complexity, pseudo-polynomial. 1 Introduction The definition of an instance of a combinatorial optimization problem requires specifying parameters, in particular objective function coefficients, which may be uncertain or imprecise. Uncertainty/imprecision can be structured through the concept of scenario, which corresponds to an assignment of plausible values to model parameters. Each scenario s can be represented as a vector in ℝ^m, where m is the number of relevant numerical parameters. Kouvelis and Yu [2] proposed the maximum cost and maximum regret criteria, stemming from decision theory, to construct solutions hedging against parameter variations. In min-max optimization, the aim is to find a solution having the best worst-case value across all scenarios. In the min-max regret problem, the aim is to find a feasible solution minimizing the maximum deviation, over all possible scenarios, of the value of the solution from the optimal value of the corresponding scenario. Two natural ways of describing the set of all possible scenarios S have been considered in the literature. In the interval data case, each numerical parameter can take any value between a lower and an upper bound, independently of the values of the other parameters; S is then the Cartesian product of the intervals of uncertainty for the parameters. In the discrete scenario case, S is described explicitly as the list of all vectors s ∈ S. In this case, which is the one considered in this paper, we distinguish situations where the number of scenarios is bounded by a constant from those where the number of scenarios is unbounded.
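The two criteria can be sketched directly for the discrete scenario case. This is an illustrative sketch, not code from the paper; the solution names and the cost table are invented for the example.

```python
def min_max(solutions, scenarios, cost):
    """Solution with the best worst-case value across all scenarios."""
    return min(solutions, key=lambda x: max(cost(x, s) for s in scenarios))

def min_max_regret(solutions, scenarios, cost):
    """Solution minimizing the maximum deviation from each scenario's optimum."""
    opt = {s: min(cost(x, s) for x in solutions) for s in scenarios}
    return min(solutions,
               key=lambda x: max(cost(x, s) - opt[s] for s in scenarios))

# Invented 3-solution, 2-scenario cost table on which the criteria disagree.
costs = {("x1", "s1"): 10, ("x1", "s2"): 1,
         ("x2", "s1"): 5,  ("x2", "s2"): 5,
         ("x3", "s1"): 6,  ("x3", "s2"): 2}
sols, scens = ["x1", "x2", "x3"], ["s1", "s2"]
c = lambda x, s: costs[(x, s)]
# min-max picks x2 (worst case 5); min-max regret picks x3 (max regret 1).
```

The example shows the regret criterion being less conservative: x2 has the best worst case, while x3 stays closest to each scenario's optimum.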

Approximating min-max (regret) versions of some polynomial problems

Keywords: min-max, min-max regret, approximation, fptas, shortest path, minimum spanning tree. 1 Introduction The definition of an instance of a combinatorial optimization problem requires specifying parameters, in particular objective function coefficients, which may be uncertain or imprecise. Uncertainty/imprecision can be structured through the concept of scenario, which corresponds to an assignment of plausible values to parameters. There are two natural ways of describing the set of all possible scenarios. In the interval data case, each numerical parameter can take any value between a lower and an upper bound. In the discrete scenario case, which is the one considered here, the scenario set is described explicitly. Kouvelis and Yu [6] proposed the min-max and min-max regret criteria, stemming from decision theory, to construct solutions hedging against parameter variations. The min-max criterion aims at constructing solutions having a good performance in the worst case. The min-max regret criterion, which is less conservative, aims at obtaining a solution minimizing the maximum deviation, over all possible scenarios, of the value of the solution from the optimal value of the corresponding scenario.

Refinement of the "up to a constant" ordering using constructive co-immunity and the like. Application to the Min/Max hierarchy of Kolmogorov complexities.

(∗) Consider the W_i^A's and ask for λ A-recursive. (∗∗) Consider the W^A_i's and ask for λ recursive. The second way, which is the stronger one, will be the one pertinent for applications to Kolmogorov complexities. Of course, to deal with (∗∗), we must consider uniform enumerations of A-r.e. sets and partial A-recursive functions (cf. Prop. 2.7), i.e. we have to consider the notion of constructive density with functionals. This will, in fact, give a strong version of (∗∗) in which λ is a total recursive function which does not depend on A.

The ridge method for tame min-max problems

For a path-differentiable F, does the PO formula define a conservative field for f? The answer is negative in general; we provide a counterexample. For definable functions, however, the answer turns out to be positive. The proof of the latter result relies on a characterization of definable conservative fields based only on definable paths, which is of independent interest. In the context of conservativity, the definable case plays a special role, as it is widespread in applications [13, 14] and many further properties are available [14, 29, 23]. The reader unfamiliar with definability may consider instead semialgebraicity, which is a special case: a function is semialgebraic when its graph can be represented as a finite union of solution sets of polynomial systems involving finitely many equalities and inequalities. Section 3 presents basic definitions and more details regarding definability.

Max-Min Lyapunov Functions for Switching Differential Inclusions

obtain a candidate Lyapunov function by taking the maximum, the minimum, or a combination of both; see Definition 3 for details. Such max-min Lyapunov functions were recently proposed in the context of discrete-time switching systems [6], [7]. In this article, we investigate the feasibility and utility of max-min Lyapunov functions for differential inclusions and switching systems in continuous time, which naturally require certain additional tools from nonsmooth and set-valued analysis. Our main results provide a set of inequalities whose feasibility guarantees the existence of a max-min Lyapunov function for system (1). When restricting ourselves to the linear case with f_i(x) = A_i x, the proposed conditions require solving bilinear matrix inequalities (BMIs). It should be noted that, since we allow the minimum operation in the construction, certain elements in our proposed class of Lyapunov functions are nonconvex. For the linear DI problem, it has been observed in [3, Proposition 2.2] that the convexification of any nonconvex Lyapunov function is still a Lyapunov function. In our approach, when we construct a nonconvex Lyapunov function that is homogeneous of degree 2 for the LDI problem, a convexification of such functions also provides a Lyapunov function.
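The max-min construction itself can be sketched numerically: a candidate V(x) is a maximum over groups, each group contributing the minimum of several quadratic forms. This is an illustrative sketch only; the matrices are invented, and checking the decrease condition along trajectories (the actual content of the BMI conditions) is not attempted here.

```python
def quad(P, x):
    """x^T P x for a 2x2 matrix P given as nested lists."""
    return (P[0][0] * x[0] * x[0]
            + (P[0][1] + P[1][0]) * x[0] * x[1]
            + P[1][1] * x[1] * x[1])

def max_min_V(groups, x):
    """Max-min candidate V(x) = max_j min_{P in group j} x^T P x,
    built from a finite family of quadratic forms."""
    return max(min(quad(P, x) for P in group) for group in groups)

# Invented example: two groups of positive definite forms.
P1 = [[1.0, 0.0], [0.0, 4.0]]
P2 = [[4.0, 0.0], [0.0, 1.0]]
P3 = [[2.0, 0.0], [0.0, 2.0]]
groups = [[P1, P2], [P3]]
# At x = (1, 1): min(5, 5) = 5 in the first group, 4 in the second, so V = 5.
```

Because of the inner minimum, the sublevel sets of such a V need not be convex, which is exactly the nonconvexity the excerpt mentions.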

Max-Min Lyapunov Functions for Switched Systems and Related Differential Inclusions

In this article, the problem of interest is to construct a Lyapunov function for systems (1) and (2) which guarantees asymptotic stability of the origin {0} ⊂ ℝ^n. We consider Lyapunov functions obtained by taking the maximum, the minimum, or their combination over a finite family of continuously differentiable positive definite functions; see Definition 3 for details. Such max-min Lyapunov functions were recently proposed in the context of discrete-time switching systems [1], [2]. For the continuous-time case treated in this paper, studying this class of functions naturally requires certain additional tools from nonsmooth and set-valued analysis, and one such fundamental tool is the generalized directional derivative. In our conference paper [15], we provide stability results based on Clarke's notion of generalized directional derivative for max-min functions. The construction of nonsmooth Lyapunov functions for system (2) using Clarke's generalized gradient is also presented in [5]. However, this notion turns out to be rather conservative, as is seen in several examples (including the one given in Section 2). To overcome the conservatism due to Clarke's generalized derivative, we work with the set-valued Lie derivative, which is formally introduced in Definition 2. Focusing on this latter notion of generalized directional derivative for the class of max-min Lyapunov functions, the major contributions of this paper are as follows:

Approximation of max independent set, min vertex cover and related problems by moderately exponential algorithms

In Table 3, we perform a comparative study of running times for Algorithms IS and RIS2, for some ratio values. As one can see from Tables 2 and 3, Algorithm RIS2 dominates Algorithm RIS1 since, in fact, the former is a refinement of the latter. The second improvement follows a different approach, based upon an exhaustive lookup of all the candidate values for α(G), and using an exact algorithm for min vertex cover rather than for max independent set. Informally, the underlying idea for this approach (leading to Algorithm RIS3) is that randomization allows us to split the input graph into "small" subgraphs, on which a fixed-parameter algorithm can be efficiently used to reach both a good overall running time and any a priori fixed approximation ratio. Algorithm RIS3 then consists of running Algorithm OPT_VC on subgraphs of size βn < rn taken at random, a sufficient number of times, where β is optimally determined as a function of α(G).

Min-Max Coverage in Multi-interface Networks

16. W_A(v) := W(v)
Proof. The proof is based on the analysis of Algorithm 1. The case k = 1 is trivial and is solved by code lines 2–3 of the algorithm. When k = 2, either there exists one common interface for all the nodes (again code lines 2–3 of Algorithm 1), or the optimal solution costs 2, which corresponds to activating all the available interfaces at all the nodes (code lines 5–7). Note that in this case code lines 8–16 are not executed, as no node holds more than 2 interfaces. When k = 3, whether there exists a solution of cost 1 (code lines 2–3) is again easily verified by checking whether all the nodes hold one same interface. If not, in order to check whether there exists a solution of cost 2, it is possible to activate all the interfaces at the nodes holding fewer than 3 interfaces. This is realized at code lines 8–16. For each node v holding 3 interfaces, it is possible to check whether at most 2 interfaces among the available 3 are enough to connect v to all its neighbors holding fewer than 3 interfaces. If not, then the optimal solution costs 3 and all the nodes can activate all their interfaces to accomplish the coverage task (code lines 15–16). If yes, then v activates the 2 interfaces induced by its neighborhood (code lines 9–10); if only 1 or 0 interfaces are induced by the neighborhood, then v activates one further interface (code line 11) or two interfaces (code lines 13–14), respectively, chosen arbitrarily. In this way, all the edges connecting nodes holding at most 2 interfaces, and all the edges connecting nodes holding 3 interfaces with nodes holding at most 2 interfaces, are covered. To conclude the proof, we need to show that all the edges between nodes holding 3 interfaces are covered by the designed activation function. Indeed, since each node holding 3 interfaces activates 2 of the same 3 available interfaces, any two such neighbors must share at least one activated interface by pigeonhole, and the claim holds.
The above algorithm requires O(m) time, as the execution of code lines 9–11 might refer to all the

Complexity of the min-max (regret) versions of cut problems

4.2 Min-max regret versions When the number u ≤ m of uncertain/imprecise parameters, corresponding to non-degenerate intervals, is small enough, the problem becomes polynomial. More precisely, as shown by Averbakh and Lebedev [5] for general network problems solvable in polynomial time, if u is fixed or bounded by the logarithm of a polynomial function of m, then the min-max regret version is also solvable in polynomial time (based on the fact that an optimal solution for the min-max regret version corresponds to one of the optimal solutions for the 2^u extreme
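The 2^u extreme scenarios mentioned here are the assignments of each of the u uncertain parameters to one of its two interval endpoints. A minimal sketch, with invented interval values, of enumerating them:

```python
from itertools import product

def extreme_scenarios(intervals):
    """All 2^u extreme scenarios of u interval-uncertain parameters:
    every combination of lower/upper endpoints."""
    return list(product(*intervals))

# Two uncertain parameters (u = 2) -> 2^2 = 4 extreme scenarios.
scens = extreme_scenarios([(1, 3), (2, 5)])
# scens == [(1, 2), (1, 5), (3, 2), (3, 5)]
```

When u is logarithmic in the input size, this enumeration stays polynomial, which is the source of the tractability result cited above.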

Direct-Search for a Class of Stochastic Min-Max Problems

Direct-search methods for minimization problems The general principle behind direct-search methods is to optimize a function f(x) without having access to its gradient ∇f(x). A large number of algorithms belong to this broad family, including golden-section search techniques and random search (Rastrigin, 1963). Among the most popular algorithms in machine learning are evolution strategies and population-based algorithms, which have demonstrated promising results in reinforcement learning (Salimans et al., 2017; Maheswaranathan et al., 2018) and bandit optimization (Flaxman et al., 2004). At a high level, these techniques work by maintaining a distribution over parameters and duplicating the individuals in the population with higher fitness. Often these algorithms are initialized at a random point and then adapt their search space depending on which area contains the best samples (i.e., the lowest function value when minimizing f(x)). New samples are then generated from the best regions, in a process repeated until convergence. The most well-known algorithms in this class are evolutionary-like algorithms, including for instance CMA-ES (Hansen et al., 2003). Evolution strategies have recently been shown to solve various complex tasks in reinforcement learning, such as Atari games or robotic control problems; see e.g. Salimans et al. (2017). Their advantages in the context of reinforcement learning are their reduced sensitivity to noisy or uninformative gradients (potentially increasing their ability to avoid local minima (Conti et al., 2017)) and the ease with which one can implement a distributed or parallel version.
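A minimal member of this gradient-free family is (1+1) random local search, sketched below. This is an illustration of the general principle, not the paper's method; the step size, budget, and test function are arbitrary choices.

```python
import random

def random_search(f, dim, iters=2000, sigma=0.3, seed=0):
    """(1+1) direct search: propose a Gaussian perturbation of the
    incumbent and keep it only if it decreases f; no gradients used."""
    rng = random.Random(seed)
    x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    fx = f(x)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(cand)
        if fc < fx:  # greedy acceptance: keep strictly better samples
            x, fx = cand, fc
    return x, fx

# Minimizing the sphere function drives the incumbent toward the origin.
sphere = lambda v: sum(t * t for t in v)
xbest, fbest = random_search(sphere, dim=2)
```

Evolution strategies such as CMA-ES refine exactly this loop by adapting the sampling distribution instead of keeping sigma fixed.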

Approximation complexity of min-max (regret) versions of shortest path, spanning tree, and knapsack

Keywords: min-max, min-max regret, approximation, fptas, shortest path, minimum spanning tree, knapsack. 1 Introduction The definition of an instance of a combinatorial optimization problem requires specifying parameters, in particular objective function coefficients, which may be uncertain or imprecise. Uncertainty/imprecision can be structured through the concept of scenario, which corresponds to an assignment of plausible values to model parameters. There are two natural ways of describing the set of all possible scenarios. In the interval data case, each numerical parameter can take any value between a lower and an upper bound. In the discrete scenario case, the scenario set is described explicitly. In this case, which is the one considered in this paper, we distinguish situations where the number of scenarios is bounded by a constant

A Hölderian backtracking method for min-max and min-min problems

min_{x∈ℝ^d} max_{y∈Y} L(x, y),   (1.1) where Y is a constraint set and L is a given cost function. This structure is ubiquitous in optimization and game theory, but generally under assumptions that are not those met in learning. In optimization it stems from the Lagrangian approach and duality theory, see e.g. [10, 6, 12], while in game theory it comes from two-player zero-sum games, see e.g. [35, 36, 24]. Dynamics for addressing (1.1) thus naturally come in two types. They may be built on strategic considerations, so that algorithms correspond to a sequence of actions chosen by antagonistic players, see [24] and references therein. In general these methods are not favorable to optimization, because the contradictory interests of the players induce oscillations and slowness in the identification of optimal strategies. Optimization algorithms seem more interesting for our purposes because they focus on the final result, i.e., finding an optimal choice x, regardless of the adversarial strategy issues. In that respect, there are two possibilities: on the one hand, the variational inequality approach, which treats minimization and maximization variables on an equal footing, see e.g. [22, 27, 12] or [26, 21, 17] in learning. On the other hand, some methods break this symmetry, such as primal or augmented Lagrangian

Towards min max generalization in reinforcement learning

Abstract. In this paper, we introduce a min max approach for addressing the generalization problem in Reinforcement Learning. The min max approach works by determining a sequence of actions that maximizes the worst return that could possibly be obtained considering any dynamics and reward function compatible with the sample of trajectories and some prior knowledge on the environment. We consider the particular case of deterministic Lipschitz continuous environments over continuous state spaces, finite action spaces, and a finite optimization horizon. We discuss the non-triviality of computing an exact solution of the min max problem, even after reformulating it so as to avoid search in function spaces. To address this problem, we propose to replace, inside the min max problem, the search for the worst environment given a sequence of actions by an expression that lower-bounds the worst return that can be obtained for a given sequence of actions. This lower bound has a tightness that depends on the sample sparsity. From there, we propose an algorithm of polynomial complexity that returns a sequence of actions leading to the maximization of this lower bound. We give a condition on the sample sparsity ensuring that, for a given initial state, the proposed algorithm produces an optimal sequence of actions in open loop. Our experiments show that this algorithm can lead to more cautious policies than algorithms combining dynamic programming with function approximators.

Domains for Dirac-Coulomb min-max levels

Maria J. Esteban, Mathieu Lewin, and Éric Séré. Abstract. We consider a Dirac operator in three space dimensions, with an electrostatic (i.e. real-valued) potential V(x) having a strong Coulomb-type singularity at the origin. This operator is not always essentially self-adjoint but admits a distinguished self-adjoint extension D_V. In a first part we obtain new results on the domain of this extension, complementing previous works of Esteban and Loss. Then we prove the validity of min-max formulas for the eigenvalues in the spectral gap of D_V, in a range of simple function spaces independent of V. Our results include the critical case lim inf_{x→0} |x|V(x) = −1, with units such that ħ = mc² = 1, and they are the first ones in this situation. We also give the corresponding results in two dimensions.

Edmonton Max

Anonymous, from Edmonton Institution. Journal of Prisoners on Prisons.


Max Jacob peintre

a letter to René Rimbert that some commentators have taken too seriously: Jacob asserts there, with much humor and a touch of pique, that he never practiced cubism, because he loved beauty and sensuality! In reality, one may doubt that the admirer of Cézanne, the friend of Picasso, the intimate of Reverdy (despite the differences separating them), subscribed to the simplistic discourse of Vauxcelles and other critics who reduced cubism to cold, cerebral constructions. Max Jacob knew the place of emotion, defended by Braque and Reverdy in particular. He moreover constantly upheld the idea that a painting "is always a painting, that is, a willed form and not a representation of something, be it

Navier-Stokes dynamical shape control: from state derivative to Min-Max principle


Relaxation schemes for min max generalization in deterministic batch mode reinforcement learning

Acknowledgments Raphael Fonteneau is a postdoctoral fellow of the FRS-FNRS. This paper presents research results of the European Network of Excellence PASCAL2 and the Belgian Network DYSCO funded by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office. The authors thank Yurii Nesterov for pointing out the idea of using Lagrangian relaxation.


Edmonton Max Reflections

Luqman Osman. Journal of Prisoners on Prisons.


lambda-Min Decoding Algorithm of Regular and Irregular LDPC Codes

1. INTRODUCTION With the choice of LDPC (low-density parity-check) codes as a standard for DVB-S2, VLSI architectures for LDPC code decoders become a real challenge. In order to decrease the complexity of the decoding algorithm, Fossorier et al. proposed simplified versions of the Belief Propagation (BP) algorithm, named BP-based and offset BP-based [1], [2], [3]. These algorithms are efficient for regular LDPC codes of small length, but they introduce significant degradation (up to almost 1 dB) for LDPC codes with high degree and high length. In this paper, we propose a new decoding algorithm, named the λ-min algorithm, which offers a complexity-performance trade-off between the BP (optimal) and BP-based algorithms. Moreover, we study the VLSI implementation of the sub-optimal algorithm and we propose a parity check processor which efficiently computes the λ-min algorithm and reduces the memory required to store extrinsic information between two decoding iterations. The rest of the paper is organized as follows. In Section 2, the BP, BP-based, and the new λ-min algorithms are described. Simulation results are given in Section 3 and an optimization of the λ-min algorithm is proposed. Section 4 describes an efficient serial architecture to process the λ-min algorithm, and a conclusion is given in Section 5.
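The core check-node simplification described above can be sketched as follows: only the λ incoming messages of smallest magnitude enter the box-plus combination, which is what reduces both computation and extrinsic-memory storage. This is an illustrative reconstruction from the description, not the authors' implementation; the function names and test LLR values are invented.

```python
import math

def boxplus_mag(mags):
    """Magnitude of the box-plus combination: 2*atanh(prod tanh(m/2))."""
    p = 1.0
    for m in mags:
        p *= math.tanh(m / 2.0)
    return 2.0 * math.atanh(min(p, 1.0 - 1e-12))

def lambda_min_check_node(llrs, lam=3):
    """Extrinsic check-node messages using only the lam smallest |LLR|s."""
    sign_prod = 1
    for v in llrs:
        if v < 0:
            sign_prod = -sign_prod
    # Edges carrying the lam smallest magnitudes: the only ones retained.
    keep = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:lam]
    out = []
    for j in range(len(llrs)):
        # Exclude edge j itself; edges outside `keep` use all lam minima.
        mags = [abs(llrs[i]) for i in keep if i != j]
        sign_j = sign_prod * (1 if llrs[j] >= 0 else -1)
        out.append(sign_j * boxplus_mag(mags))
    return out

msgs = lambda_min_check_node([0.5, -2.0, 1.5, -3.0], lam=2)
```

With lam=2, an edge carrying one of the two minima gets a message built from the other minimum alone, which is why msgs[0] reduces to the second-smallest magnitude, 1.5.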
