Contributions to polynomial interpolation in one and several variables

3.1. Introduction. Kergin and Hakopian interpolants were introduced independently about thirty years ago as natural multivariate generalizations of univariate Lagrange interpolation. The construction of these interpolation polynomials requires the use of points, usually called nodes, with which one obtains a number of natural mean value linear forms providing the interpolation conditions. Kergin interpolation polynomials also interpolate in the usual sense, that is, the interpolation polynomial and the interpolated function coincide on the set of nodes, but this condition no longer characterizes them. The general definition is recalled below. Approximation properties of Kergin and Hakopian interpolation polynomials have been thoroughly investigated, see e.g. [3, 5, 8, 9, 23]. Elegant results were obtained in particular in the two-dimensional case when the nodes form a complete set of roots of unity.
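For orientation, the Kergin interpolant has a standard closed form (Micchelli's formula, quoted here as a sketch from the general literature rather than from this text): for nodes $x_0,\dots,x_n$ in $\mathbb{R}^d$ and $f$ of class $C^n$,

$$\mathcal{K}[f](x)=\sum_{j=0}^{n}\int_{[x_0,\dots,x_j]}D^{j}f\,(x-x_0,\dots,x-x_{j-1}),$$

where $\int_{[x_0,\dots,x_j]}$ denotes the normalized integral over the simplex with vertices $x_0,\dots,x_j$ (these are the mean value linear forms mentioned above) and $D^{j}f(y)(v_1,\dots,v_j)$ is the $j$-th derivative of $f$ at $y$ in the directions $v_1,\dots,v_j$. For $d=1$ this collapses, via the Hermite–Genocchi representation of divided differences, to the Newton form of the Lagrange interpolant.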
Maskara: Compilation of a Masking Countermeasure with Optimised Polynomial Interpolation

Agosta et al. [9] propose a vulnerability assessment of program instructions in symmetric ciphers. The vulnerability is quantified as the minimum number of key bits influencing the output value of a sensitive instruction. Given a vulnerability threshold, a Boolean masking countermeasure is selectively applied to the hardened program so that the performance overhead is mitigated compared to a fully masked implementation. The vulnerability analysis is achieved by data-flow analysis (forward or backward), and leverages control-flow transformations such as loop peeling and if-conversion. Loop peeling is particularly effective, as it separates loop iterations where variables depend on few key bits from iterations where variables depend on many key bits. In our approach, we assume that any variable depending on a secret can leak sensitive information, as we do not target ciphers only. Thus, our approach does not require modifying the control flow: we propose two data-flow algorithms that iterate over the code to identify the sensitive instructions to mask and to find the set of lists of masks associated with each variable. Our confidentiality analysis is similar to the forward analysis proposed in [9], while working at the variable level with uniform typing rules for dealing with loads and stores. Moreover, we detail how we transform the code and propose a fine-grained remasking analysis, whereas [9] lacks information about these steps. However, it would be possible to integrate the vulnerability assessment proposed by Agosta et al. into our approach and apply the masking countermeasure accordingly, without changing either the transformation step or the remasking analysis.
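To make the countermeasure concrete, here is a minimal sketch of first-order Boolean masking, the basic idea such compilers apply; the helper names and toy values are illustrative, not Maskara's actual code transformation.

import secrets

def mask(x, order=1, width=8):
    """Split a sensitive value into (order + 1) Boolean shares.

    The shares XOR back to x; any proper subset of shares is uniformly
    distributed and reveals nothing about x.
    """
    masks = [secrets.randbits(width) for _ in range(order)]
    masked = x
    for m in masks:
        masked ^= m
    return [masked] + masks

def unmask(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

# Linear operations (XOR, rotations) can be applied share-wise and stay
# masked; non-linear ones (AND, S-box look-ups) need dedicated gadgets.
a = mask(0x3A)
b = mask(0x5C)
c = [sa ^ sb for sa, sb in zip(a, b)]  # masked XOR, share by share
assert unmask(c) == 0x3A ^ 0x5C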
Motions of Julia sets and dynamical stability in several complex variables

Theorem A'. Let f be a holomorphic family of polynomial-like maps of large topological degree. Then Misiurewicz parameters are contained in the support of the bifurcation current dd^c L. Establishing this theorem is the main difference between the second chapter and the work of Berteloot and Dupont [BD14b] for endomorphisms of P^k. Indeed, their approach to this statement relies on the existence of a potential, the Green function, for the Green current. We thus need to adopt a different approach, completely rethinking the strategy of proof. The idea is the following: in dimension 1, a Misiurewicz parameter is responsible (because of the expansive behaviour of the system at the intersection between the repelling cycle and the postcritical set) for the non-normality of the critical orbit. Moreover, a Misiurewicz parameter is never isolated: it is quite straightforward to see that the existence of one Misiurewicz parameter implies the existence of many others nearby. This is related to a large growth of the mass of the postcritical set in this region of the parameter space. A crucial step in establishing the result above is thus proving Theorem B. The proof of this result, like most of the material in the first part of this work, relies on the theory of slicing of currents. We give in Appendix A.1 a brief account of this theory.
Rewriting integer variables into zero-one variables: some guidelines for the integer quadratic multi-knapsack problem

The problem (QMKP), which is NP-hard [3], is a generalization of both the integer quadratic knapsack problem [2] and the 0-1 quadratic knapsack problem, where the objective function is subject to only one constraint [1]. Since (QMKP) is NP-hard, one should not expect to find a polynomial time algorithm for solving it exactly. Hence, we are usually interested in developing branch-and-bound algorithms. A key step in designing an effective exact solution method for such a maximization problem is to establish a tight upper bound on the optimal value. Basically, the available upper bound procedures for (QMKP) may be classified as attempting either to solve the LP-relaxation of (QMKP) efficiently (see [2] and [8]) or to find an upper bound of better quality than the LP-relaxation by transforming (QMKP) into a 0-1 linear problem that is easier to solve (see [4] and [9]). To the best of our knowledge, the upper bound method we proposed in a previous work [11] is better than the existing methods (the Djerdjour, Mathur and Salkin algorithm [4], a 0-1 linearization method, a classical LP-relaxation of (QMKP)) from both a qualitative and a computational standpoint. We first use a direct expansion of the integer variables, originally suggested by Glover [7], and apply a piecewise interpolation to the objective function: an equivalent 0-1 linear problem is thus obtained. The second step of the algorithm consists in establishing and solving the surrogate relaxation problem associated with the equivalent linearized formulation.
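For reference, Glover's direct expansion in its usual textbook form (the piecewise interpolation step of [11] is not shown here): a bounded integer variable $0\le x\le u$ is replaced by binary variables $y_k$ via

$$x=\sum_{k=0}^{K}2^{k}y_{k},\qquad y_{k}\in\{0,1\},\qquad K=\lfloor\log_{2}u\rfloor,\qquad \sum_{k=0}^{K}2^{k}y_{k}\le u.$$

Each product $x_i x_j$ in the quadratic objective then becomes a sum of products of binaries, and every such product $z=yy'$ can be linearized with the standard constraints $z\le y$, $z\le y'$, $z\ge y+y'-1$.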
On a decomposition of polynomials in several variables

The paper studies the decomposition of a polynomial in several variables as the sum of values of univariate polynomials taken at linear combinations of the variables. K. Oskolkov has called my attention to the following theorem used in the theory of polynomial approximation (see [6], Lemma 1 and below, Lemma 4): for every sequence of d + 1 pairwise linearly independent vectors in the plane, every polynomial of degree at most d in two variables decomposes along the corresponding directions, as spelled out below.
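In symbols (a sketch of the statement, with notation chosen here for illustration): for pairwise linearly independent vectors $(a_0,b_0),\dots,(a_d,b_d)\in\mathbb{R}^2$, every polynomial $P$ of degree at most $d$ admits univariate polynomials $p_0,\dots,p_d$ with

$$P(x,y)=\sum_{j=0}^{d}p_{j}(a_{j}x+b_{j}y).$$

For instance, $xy=\tfrac14\big((x+y)^{2}-(x-y)^{2}\big)$ realizes such a decomposition for $d=2$ using the directions $(1,1)$ and $(1,-1)$ (here two directions happen to suffice).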

Contributions of local, lateral and contextual habitat variables to explaining variation in fisheries productivity metrics in the littoral zone of a reservoir

Discussion. Relative contributions of local, lateral and contextual habitat variables were compared to explain variation in abundance, biomass and richness metrics of fish in the littoral zone of a reservoir. The variable-type assessment showed that local and contextual habitat variables contributed similarly to explaining variation across abundance, biomass and richness metrics, whereas lateral habitat variables contributed minimally. While local habitat variables on average clearly explained more variation across metrics (21% adjusted R²), the proportion of variation explained by contextual habitat variables (14% adjusted R²) cannot be overlooked. Similar explanatory power among local and contextual habitat variables suggests both fine- and broad-scale habitat variables explain variation in FPM. This finding is supported by Wang et al. (2003), who found local (reach-scale) variables explained the most inter-river variation in abundance (21% R²), followed by contextual (watershed-scale) variables (11% R²); lateral (riparian-scale) variables explained the least variation (5% R²). Bouchard and Boisclair (2008) also observed that local habitat variables explained most variation in intra-river abundance (31% adjusted R²), followed by contextual (longitudinal) variables (1% adjusted R²); lateral variables failed to explain any variation. In both studies, including local and contextual variables increased the explanatory power of models, as opposed to using only local variables. Brind'Amour et al. (2005) also observed that fetch (a contextual variable) and, to a lesser degree, macrophytes (a local variable) best explained variation in fish community composition in the littoral zone of a Québec lake. The present study reinforces these findings, previously made in inter- and intra-river and lake studies, broadening them to intra-reservoir studies. Across studies, the similar amounts of variation explained by the several types of habitat variables suggest that the ability to explain variation in FPM in rivers, lakes and reservoirs is similar.
General Interpolation by Polynomial Functions of Distributive Lattices

Even though primarily defined over real intervals, Sugeno integrals can be extended to wider domains (not only to arbitrary linearly ordered sets or chains, but also to bounded distributive lattices with bottom and top elements 0 and 1, respectively) via the notion of lattice polynomial function. Essentially, a lattice polynomial function is a combination of variables and constants using the lattice operations ∧ and ∨. As it turned out (see e.g. [2, 9]), Sugeno integrals coincide exactly with those lattice polynomial functions that are idempotent (that is, which preserve constant tuples); in fact, it can be shown that the preservation of 0 and 1 suffices.
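As a concrete illustration, here is a minimal sketch of the discrete Sugeno integral over the lattice [0, 1] with ∧ = min and ∨ = max; the capacity mu and the input tuples are made-up toy data, not taken from the paper.

from itertools import combinations

def sugeno(x, mu):
    """Discrete Sugeno integral of x = (x_1, ..., x_n) w.r.t. a capacity mu.

    mu maps frozensets of indices to [0, 1]; here the lattice operations
    are min and max on the real unit interval.
    """
    n = len(x)
    return max(
        min(mu[frozenset(A)], min(x[i] for i in A))
        for r in range(1, n + 1) for A in combinations(range(n), r)
    )

# A toy capacity on the two-element index set {0, 1}
mu = {frozenset([0]): 0.3, frozenset([1]): 0.6, frozenset([0, 1]): 1.0}

print(sugeno((0.8, 0.2), mu))  # 0.3: a max of mins, i.e. a lattice polynomial
print(sugeno((0.5, 0.5), mu))  # 0.5: constant tuples are preserved (idempotency)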
High-Dimensional Adaptive Sparse Polynomial Interpolation and Applications to Parametric PDEs

is defined from the parameter domain P to the solution space V. Parametric problems of this type arise in stochastic and deterministic modelling, depending on the nature of the parameters y_j, which may be either random or deterministic variables. In both settings, the main computational challenge is to approximate the entire solution map y ↦ u(y) up to a prescribed accuracy, at reasonable computational cost. This task is particularly difficult when the number d of involved parameters is large, due to the curse of dimensionality. This notion was coined by Bellman in the early 1960s and refers to an exponential growth of the computational work needed to reach a given target accuracy as the dimension d of the parameter space grows.
Sparse polynomial interpolation: sparse recovery, super resolution, or Prony?

Contribution. • We propose a new multivariate variant of Prony's method for sparse polynomial interpolation, which avoids projections to one variable and requires a small number of evaluations. In particular, it differs from approaches such as [19], which uses special "aligned" moments to apply the univariate Prony method. In the univariate case, the new method requires only r + 1 evaluations (instead of 2r), where r is the number of monomials of the blackbox polynomial. Similarly, in the multivariate case, we show that the number of needed evaluations is significantly reduced. It involves a Toeplitz matrix rather than a Hankel matrix. Numerical experiments confirm the theoretical result regarding the number of evaluations and the robustness against perturbations. This new multivariate Toeplitz–Prony method can be seen as an extension of ESPRIT methods to several variables. As stated in [39, p. 167], ESPRIT should be preferred to MUSIC for frequency estimation in signal processing. The numerical experiments corroborate this claim by showing the good numerical behavior of the new multivariate Toeplitz–Prony method.
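For contrast, below is the classical univariate Hankel-based Prony scheme with its 2r evaluations, i.e. the baseline that the paper's Toeplitz variant reduces to r + 1; the blackbox polynomial and the evaluation point are made-up toy data, and this sketch is not the paper's algorithm.

import numpy as np

# Toy blackbox: p(x) = 3*x^2 + 5*x^7, so r = 2 monomials
exponents, coeffs, r = [2, 7], [3.0, 5.0], 2

omega = 0.9  # evaluation point whose powers separate the exponents
a = np.array([sum(c * (omega ** k) ** e for c, e in zip(coeffs, exponents))
              for k in range(2 * r)])  # the 2r evaluations p(omega^k)

# The sequence a_k satisfies a length-r linear recurrence; solve the
# Hankel system for its coefficients lam_0, ..., lam_{r-1}
H = np.array([[a[i + j] for j in range(r)] for i in range(r)])
lam = np.linalg.solve(H, -a[r:2 * r])

# Roots of z^r + lam_{r-1} z^{r-1} + ... + lam_0 are omega^{e_j}
roots = np.roots(np.concatenate(([1.0], lam[::-1]))).real
print(np.round(np.log(roots) / np.log(omega)).astype(int))  # exponents {2, 7}

# Coefficients from the Vandermonde system sum_j c_j (omega^{e_j})^k = a_k
V = np.vander(roots, r, increasing=True).T
print(np.linalg.solve(V, a[:r]))  # approx. [3, 5], in the order of the roots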
Greedy Algorithms and Rational Approximation in One and Several Variables

Cyclic AFD was designed to approach the problem of n-best rational approximation in H²(D). The problem is formulated as follows. Given f ∈ H²(D), find a rational function of the form p/q, with deg{p} and deg{q} not exceeding n, and q zero-free inside the unit disc, such that ‖f − p/q‖ is minimal among all rational functions of the same kind. The latter are just the rational functions of degree no larger than n in H²(D). Existence of such minimizing rational functions was proved long ago, but a theoretical algorithm for finding the p/q attaining the minimum is still an open issue. A detailed account of the problem may be found in [4, 11, 5, 12, 14]. Both the RARL2 algorithm (which extends to the matrix-valued case, see www-sop.inria.fr/apics/RARL2/rarl2.html for a description and tutorial as well as [7, 9, 18] for further references) and the one through Cyclic AFD ([29]) provide practical algorithms. RARL2 is a descent algorithm using Schur parameters to describe Blaschke products of given degree, along with a compactification thereof to ensure convergence to a local optimum. It is used in the identification and design of microwave devices, see [27, 39]. The algorithm using Cyclic AFD is parameterized by the zeros of the denominator polynomial, and uses the fact that the expansion as a sum of modified Blaschke products…
On the Stability of Polynomial Interpolation Using Hierarchical Sampling

In addition, unlike Leja sequences on [−1, 1], these sequences are easy to construct and have explicit formulas. In [4], their bounds were improved to 2k and 5k² log k, respectively. In this paper, we improve these bounds further and give direct bounds for the norms of the difference operators D_k, which are useful in view of the discussion in the previous section. Our techniques of proof share several common points with those developed in [2, 3, 4], yet ours is shorter and exploits to a considerable extent the properties of Leja sequences on the unit disk.
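One such explicit formula is the standard bit-reversal construction for the Leja sequence on the unit disk started at 1; the sketch below, including the randomized sanity check, is ours and not taken from the paper.

import numpy as np

def leja_disk(k):
    """k-th point of the Leja sequence on the unit disk started at 1.

    Writing k = sum_j b_j 2^j in binary, the point is
    exp(i*pi*(b_0 + b_1/2 + b_2/4 + ...)), a bit-reversal rule.
    """
    angle, denom = 0.0, 1.0
    while k:
        angle += (k & 1) / denom
        k >>= 1
        denom *= 2.0
    return np.exp(1j * np.pi * angle)

pts = np.array([leja_disk(k) for k in range(8)])
print(np.round(pts, 6))  # 1, -1, i, -i, then four primitive 8th roots of unity

# Leja property: point k maximises prod_{j<k} |z - xi_j| over the disk
z = np.exp(2j * np.pi * np.random.rand(10000))  # samples on the unit circle
for k in range(2, 8):
    vals = np.prod(np.abs(z[:, None] - pts[:k]), axis=1)
    best = np.prod(np.abs(pts[k] - pts[:k]))
    assert best >= vals.max() - 1e-6  # the new point is (numerically) maximal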
Contributions to deep reinforcement learning and its applications in smartgrids

In this chapter, we first propose a novel and detailed formalization of the problem of sizing and operating microgrids under different assumptions on the components used (PV panels and storage systems). In that context, we show how to operate a microgrid optimally so that it minimizes a levelized energy cost (LEC) criterion in the context where the energy production and demand are known. We show that this optimization step can be achieved efficiently using linear programming techniques (thanks to the assumptions on the components used and with the help of auxiliary variables in the linear program). We then show that this optimization step can also be used to address the problem of optimal sizing of the microgrid (still under the hypothesis that production and demand are known), for which we propose a (potentially) robust approach by considering several energy production and consumption scenarios. We run experiments using real data corresponding to the case of typical residential consumers of electricity located in Spain and in Belgium. Note that this chapter focuses on the production planning and optimal sizing of the microgrid without uncertainty. The real-time control of microgrids under uncertainty is studied in the next chapter.
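To give a flavour of the linear programs involved, here is a deliberately simplified dispatch model (one battery, known production and demand, grid imports minimized); the data, the variable layout and the omission of the LEC criterion are our simplifications, not the chapter's formulation.

import numpy as np
from scipy.optimize import linprog

T = 4
p = np.array([5.0, 6.0, 1.0, 0.0])  # known production per period (kWh)
d = np.array([2.0, 2.0, 4.0, 4.0])  # known demand per period (kWh)
cap, s0 = 5.0, 0.0                  # battery capacity and initial charge

# Variables x = [charges c_t | discharges g_t | grid imports m_t], all >= 0
n = 3 * T
cost = np.r_[np.zeros(2 * T), np.ones(T)]  # minimise total import

A_ub, b_ub = [], []
for t in range(T):
    # supply covers demand plus charging: c_t - g_t - m_t <= p_t - d_t
    row = np.zeros(n); row[t], row[T + t], row[2 * T + t] = 1.0, -1.0, -1.0
    A_ub.append(row); b_ub.append(p[t] - d[t])
    # battery state s0 + sum_{u<=t} (c_u - g_u) stays within [0, cap]
    row = np.zeros(n); row[:t + 1] = 1.0; row[T:T + t + 1] = -1.0
    A_ub.append(row); b_ub.append(cap - s0)
    A_ub.append(-row); b_ub.append(s0)

res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * n)
print(res.fun)  # 2.0: the deficit the battery cannot cover for this toy data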
Frontal and parietal contributions to visual perception in humans

Our data indicate that the FEF TMS visual facilitatory effects interacted with the orienting of spatial attention engaged by means of predictive spatial cues. Nonetheless, given the frequently hypothesized role of the right FEF not only as a crucial node of the dorsal attentional network but also as relevant in providing access to consciousness, which of these two systems might have been ultimately responsible for the observed visual facilitatory effects remains unclear. Contributing to this discussion, our data reveal that FEF TMS did not modulate reaction times or accuracy levels for the visual categorization task, neither when used in isolation (Experiment 1) nor when combined with visuo-spatial cues (Experiment 2). A behavioral study performed and published separately by our group assessed the behavioral effects of visuo-spatial attentional orienting in the exact same paradigm, and showed significantly shorter reaction times in response to stimuli presented at attended than at unattended locations (see [26], Experiment 4, for details). The latter effects, which were accompanied by a modulation in perceptual sensitivity in the detection task only when the cue was predictive of target location, strongly suggest that cue-validity effects in such a paradigm should be considered a solid signature of attentional orienting. On this basis, it is tempting to interpret the current lack of reaction time modulations for the categorization task, accompanying improvements in visual detection by FEF pre-target activity modulations, not as ultimately mediated by the manipulation of visuo-spatial orienting processes but as reflecting a genuine effect of right FEF TMS on visual consciousness. In spite of obvious differences between intact and damaged systems, this interpretation agrees with patient work showing a relevant role of the prefrontal cortex in access to consciousness of masked stimuli that is not accounted for by attentional orienting processes [45]. Nonetheless, given that attention can alter appearance [3], and that in our paradigm composed of two serial tasks subjects could eventually have sacrificed reaction time for accuracy, or categorization performance for detection performance, whether attention can modulate conscious visibility without affecting reaction time remains an open question.
Contributions to decomposition methods in stochastic optimization

We first give a simple and succinct presentation of this method, then we present the main results of Chapter 2, which extends Dynamic Programming…

Polynomial equivalence problems and applications to multivariate cryptosystems

Cubic and bicubic spline interpolation in Python

…the not-a-knot end condition is defined by the continuity of the third derivatives at the junction of the first two patches facing the four edges.

2.4.1 Free end condition
Free end conditions are also easy to set up on parametric surfaces, since they transpose directly from the two-dimensional case. Indeed, the relations between control points written at the spline extremities are the same as those to consider at the parametric surface edges. The first three control points on each of the four edges must be aligned and separated two by two by the same distance. This choice of end conditions for a parametric surface results in the following relations:
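(The relations themselves are cut off in this excerpt.) As a side note, the one-dimensional counterparts of both end conditions are available in SciPy; the following sketch, which is not from the paper, contrasts the free (natural) and not-a-knot conditions on a toy curve.

import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0, 2 * np.pi, 8)
y = np.sin(x)

# 'natural' zeroes the second derivative at both ends (free end condition);
# 'not-a-knot' imposes third-derivative continuity at the first and last
# interior knots, the 1-D analogue of the condition described above.
free = CubicSpline(x, y, bc_type='natural')
nak = CubicSpline(x, y, bc_type='not-a-knot')

xs = np.linspace(0, 2 * np.pi, 200)
print(np.max(np.abs(free(xs) - np.sin(xs))))  # larger end-interval error
print(np.max(np.abs(nak(xs) - np.sin(xs))))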
Faster One Block Quantifier Elimination for Regular Polynomial Systems of Equations

Prior works. Real quantifier elimination is a fundamental problem in mathematical logic and computational real algebraic geometry. It arises naturally in many problems from diverse application areas. The works of Tarski and Seidenberg [39, 32] imply that the projection of any semi-algebraic set is also semi-algebraic and give an algorithm, which is however not elementary recursive, to compute this projection. The Cylindrical Algebraic Decomposition (CAD) [8] is the first effective algorithm for this problem; its complexity is doubly exponential in the number of indeterminates [11]. Since then, there has been extensive research developing this domain. We can name the CAD variants with improved projections [24, 19, 25, 6] or the partial CAD [9]. Following the idea of [17] that exploits the block structure, [28, 3] introduced algorithms whose complexity is only doubly exponential in the order of quantifiers (number of blocks). For one-block quantifier elimination, the arithmetic complexity and the degree of polynomials in the output of these algorithms are of order s^(n+1) D^(O(nt)), where D is the bound on the degree of the input polynomials (see [4, Algo 14.6]). However, obtaining efficient implementations of these algorithms remains challenging. We also cite some other works in real quantifier elimination [41, 38, 40, 7, 36] and applications to other fields [23, 1, 37].
Identification of multi-modal random variables through mixtures of polynomial chaos expansions

1 Introduction. Uncertainty quantification and propagation in physical systems appear as a critical path for improving the prediction of their response. For the numerical estimation of outputs of stochastic systems driven by finite-dimensional noise, the so-called spectral stochastic methods [7,13,20,9] have received growing attention in the last two decades. These methods rely on a functional representation of random outputs, considered as second order random variables, using truncated expansions on suitable Hilbertian bases. Classical bases consist of polynomial functions (finite-dimensional Polynomial Chaos [19,2,7]), piecewise polynomial functions [3,8,18] or more general orthogonal bases [16]. Of course, the accuracy of predictions depends on the quality of the input probabilistic model. Some works have recently been devoted to the identification of random variables (or processes), from a collection of samples, using Polynomial Chaos (PC) representations. Classical inference techniques have been used to identify the coefficients of functional expansions, such as maximum likelihood estimation [4,17] or Bayesian inference [6,1]. Polynomial Chaos a priori allows for the representation of second order random variables with arbitrary probability laws. However, for some classes of random variables, classical PC expansions may exhibit very slow convergence rates, thus requiring very high order expansions for an accurate representation. When such representations are introduced for random input parameters of a physical model, very high order expansions are also required for an accurate approximation of random outputs. Classical spectral stochastic methods, such as Galerkin-type methods, then require dealing with high-dimensional approximation spaces, which leads to prohibitive computational costs. Although the use of efficient solvers or model reduction techniques based on separated representations [11,12,14] may help to reduce computational costs, a convenient alternative consists in identifying more suitable representations of random inputs.
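As a small, self-contained illustration of a PC representation (projection of a lognormal variable on the probabilists' Hermite basis; the target and the truncation order are toy choices, not the article's examples):

import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Target second order random variable: X = exp(xi), xi ~ N(0, 1)
f = np.exp
order = 6

# Projection c_k = E[f(xi) He_k(xi)] / k!, since E[He_k^2] = k!
nodes, weights = He.hermegauss(40)      # quadrature for weight exp(-x^2 / 2)
weights = weights / np.sqrt(2 * np.pi)  # normalise to the Gaussian density

coeffs = []
for k in range(order + 1):
    ek = np.zeros(k + 1); ek[k] = 1.0   # coefficient vector selecting He_k
    ck = np.sum(weights * f(nodes) * He.hermeval(nodes, ek)) / factorial(k)
    coeffs.append(ck)

# Sanity check: exp(xi) has the known expansion sqrt(e) * sum_k He_k / k!
exact = [np.sqrt(np.e) / factorial(k) for k in range(order + 1)]
print(np.allclose(coeffs, exact, atol=1e-8))  # True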
An axiomatic approach to image interpolation

processing. A number of different approaches using interpolation techniques have been proposed in the literature for 'perceptually motivated' coding applications [5, 17, 22]. The underlying image model is based on the concept of 'raw primal sketch' [18]. The image is assumed to be made mainly of areas of constant or smoothly changing intensity separated by discontinuities represented by strong edges. The coded information, also known as sketch data, consists of the geometric structure of the discontinuities and the amplitudes at the edge pixels. In very low bit rate applications, the decoder has to reconstruct the smooth areas in between using the edge information. This can be posed as a scattered data interpolation problem from an arbitrary initial set (the sketch data) under certain smoothness constraints. For higher bit rates, the residual texture information has to be coded separately by means of a waveform coding technique, for instance pyramidal or transform coding. In the following we assume that a set of curves and points is given and we want to construct a function interpolating these data. Several interpolation techniques using implicitly or explicitly the solution of a partial differential equation have been used in the engineering literature [4, 5, 6]. In the spirit of [1], our approach to the problem will be based on a set of formal requirements that any interpolation operator in the plane should satisfy. We then show that any operator which interpolates continuous data given on a set of curves can be given as the viscosity solution of a degenerate elliptic partial differential equation of a certain type. The examples include the Laplacian operator and the minimal Lipschitz extension operator [2], [15], which is related to the work of J. Casas [5, 6]. We also discuss other interpolation schemes proposed in the literature.
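For concreteness, the two examples mentioned correspond, in their usual form in the interpolation literature (our notation, not necessarily the paper's), to the boundary-value problems

$$\Delta u=0\qquad\text{and}\qquad \Delta_\infty u:=D^{2}u\,(Du,Du)=\sum_{i,j}u_{x_i}u_{x_j}u_{x_ix_j}=0\qquad\text{in }\Omega\setminus\Gamma,\quad u=\varphi\ \text{on }\Gamma,$$

where Γ carries the sketch data φ. The second, degenerate elliptic equation is the infinity-Laplacian, solved in the viscosity sense; it characterizes the absolutely minimal Lipschitz extension.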
The Hotelling model: contributions and limits (an application to the case where the strategic variables are locations and prices)

Section I: Introduction. The purpose of this study is, starting from HOTELLING's (1929) model of spatial competition, to investigate the existence of a price equilibrium solution once sellers differentiate their products. We successively consider the cases where the strategic variables are the locations and the prices. In this model, the products are homogeneous; they are nevertheless differentiated, not by their quality, but by their selling price. The differentiation of the products comes from the fact that they appeal more to the consumers located near them, owing to the reduction in the transport cost needed to make the purchase.