2 ETIS UMR 8051, CY University, ENSEA, CNRS, F-95000, Cergy, France
Abstract. In wireless communication, the full potential of multiple-input multiple-output (MIMO) arrays can only be realized by optimizing their transmission parameters. Distributed solutions dedicated to that end include iterative optimization algorithms involving the computation of the gradient of a given objective function and its dissemination among the network users. In the context of large-scale MIMO, however, computing and conveying large arrays of function derivatives across a network carries a cost that is prohibitive for communication standards. In this paper we show that multi-user MIMO networks can be optimized without using any derivative information. Focusing on the throughput maximization problem in a MIMO multiple access channel, we propose a “derivative-free” optimization methodology relying on very little feedback information: a single function query at each iteration. Our approach integrates two complementary ingredients: exponential learning (a derivative-based expression of the mirror descent algorithm with entropic regularization), and a single-function-query gradient estimation technique derived from a classic approach to derivative-free optimization. Keywords: derivative-free optimization · zeroth-order optimization · exponential learning · MIMO systems · throughput maximization · SPSA.
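The two ingredients named in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's algorithm: `one_point_gradient_estimate` is a generic single-query (SPSA-style) estimator using a random direction on the unit sphere, and `exponential_learning` is a plain exponentiated-gradient (entropic mirror ascent) update on the probability simplex. All names, step sizes, and the simplex projection are our assumptions.

```python
import numpy as np

def one_point_gradient_estimate(f, x, delta, rng):
    """Single-query gradient estimate: draw a random direction u on the
    unit sphere and scale the lone function value f(x + delta * u).
    Unbiased for linear f; biased O(delta) in general."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / delta) * f(x + delta * u) * u

def exponential_learning(f, x0, steps=200, eta=0.05, delta=0.1, seed=0):
    """Exponentiated-gradient ascent on the simplex (mirror ascent with
    entropic regularization), fed by the one-point estimate above.
    Note: the query point x + delta*u may leave the simplex; a real
    implementation would handle feasibility of the perturbed query."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = one_point_gradient_estimate(f, x, delta, rng)
        x = x * np.exp(eta * g)   # multiplicative (exponential) update
        x /= x.sum()              # renormalize onto the simplex
    return x
```

The multiplicative update keeps all iterates strictly positive, which is why the entropic (simplex) geometry pairs naturally with throughput-style objectives over power-allocation vectors.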
Model personalization is a key aspect for biophysical models to impact clinical practice, and cardiac contractility personalization from medical images is a major step in this direction. Existing gradient-based optimization approaches show promising results in identifying the maximum contractility from images, but the contraction and relaxation rates are not accounted for. A main reason is the limited choice of objective functions when their gradients are required. For complicated cardiac models, analytical evaluations of gradients are very difficult if not impossible, and finite difference approximations are computationally expensive and may introduce numerical difficulties. By removing such limitations with derivative-free optimization, we found that a velocity-based objective function can properly identify regional maximum contraction stresses, contraction rates, and relaxation rates simultaneously with intact model complexity. Experiments on synthetic data show that the parameters are better identified using the velocity-based objective function than its position-based counterpart, and that the proposed framework is insensitive to initial parameters with the adopted derivative-free optimization algorithm. Experiments on clinical data show that the framework can provide personalized contractility parameters that are consistent with the underlying physiologies of the patients and healthy volunteers.
Physics-Based Sound Simulation Models and Surrogate-Assisted Derivative-Free Optimization
This paper presents a method for design optimization of brass wind instruments. The shape of a trumpet’s bore is optimized to improve intonation using a physics-based sound simulation model. This physics-based model consists of an acoustic model of the resonator, a mechanical model of the excitator, and a model of the coupling between the excitator and the resonator. The harmonic balance technique allows the computation of sounds in steady state, representative of the shape of the resonator according to control parameters of the virtual musician. An optimization problem is formulated in which the objective function to be minimized is the overall quality of the intonation of the different notes played by the instrument. The design variables are the physical dimensions of the resonator. Given the computationally expensive function evaluation and the unavailability of gradients, a surrogate-assisted optimization framework is implemented using the mesh adaptive direct search algorithm (MADS). Surrogate models are used both to obtain promising candidates in the search step of MADS and to rank-order additional candidates generated by the poll step of MADS. The physics-based model is then used to determine the next design iterate. Two examples (with two and five design optimization variables) demonstrate the approach. Results show that significant improvement of intonation can be achieved at reasonable computational cost. Finally, the perspectives of this approach for computer-aided instrument design are discussed, considering optimization algorithm improvements and problem formulation modifications using, for instance, different design variables, multiple objectives and constraints, or objective functions based on the instrument’s timbre.
5 Conclusions
A 𝜅−𝜀 model has been introduced with the aim of solving the Reynolds averaged Navier–Stokes equations for the SOL plasma. A method to tune the free parameters of the model from experimental data has been presented as well. A summary of this work is sketched in Figure 8. The green line represents the feedback loop between experimental data and SolEdge2D free parameters (anomalous diffusivities). The red line loop represents the strategy consisting of optimizing the 𝜅−𝜀 free parameters to recover diffusivity profiles and perform a more consistent comparison between simulation and experimental data, at the outer mid-plane as well as at the outer target. This 𝜅−𝜀 procedure allows us to reduce the original number of free parameters and capture the poloidal asymmetries of turbulence, which is a promising first step towards predictive transport models for tokamak plasmas.
This chapter aims to propose answers to these questions, or part of an answer, whenever possible. It is focused on derivatives markets 2 , and more specifically on energy derivative markets. In the first section, we give an overview of energy derivative markets today. It is very important to understand that, like every derivative market, energy markets appeared when the need for protection against fluctuations in prices became significant. In the second section, we explain how energy derivative markets are organized today, and how this organization has evolved over the past few years. More precisely, there is a progressive fusion of organized and over-the-counter markets, which has many consequences for their functioning. In the third section, we present the economic functions of derivative markets. Indeed, such markets do not only allow the management of price risks. They also allow speculation and arbitrage (it is important to know whether this is a good or a bad thing), price and volatility discovery (which is useful not only for market participants, but also for worldwide producers and consumers), and transactional efficiency, thus creating benchmarks that are used as a reference for many transactions. In the fourth section, we examine the influence of derivative markets on physical markets and on systemic risk, and we give our opinion concerning the idea that derivative markets might be intrinsically dangerous.
3.1 Derivative Half Gaussian Kernels
Gaussian kernels are used in a large proportion of shock filters because of their efficiency in edge detection. Nevertheless, weaknesses can be noticed at the level of corners and small objects present in the image (as stated above about shock filters involving Gaussians). Edge detection techniques using elongated kernels are efficient at correctly detecting large linear structures, but small structures are considered to be noise and their edges are not extracted. Consequently, the accuracy of the detected edge points decreases strongly at corner points and for non-straight object contour parts. To bypass this undesirable effect, an anisotropic edge detection method is developed in  and evaluated in greater depth in . Indeed, the proposed technique is able to detect crossing edges and corners thanks to two elongated and oriented filters in two different directions. The main idea is to “cut” the anisotropic Gaussian kernel using a Heaviside function (see Fig. 2(c)). The half kernel (HK) can be implemented using a Gaussian and its first derivative:
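The construction just described can be sketched in code. This is a hedged illustration following the verbal description above, not the cited authors' implementation: the function name and parameters (`sigma_long`, `sigma_short`, `theta`) are placeholders, and the choice of derivative direction relative to the Heaviside cut is our assumption.

```python
import numpy as np

def half_gaussian_derivative_kernel(size=31, sigma_long=6.0, sigma_short=1.5,
                                    theta=0.0):
    """Sketch of a half kernel (HK): an elongated first-derivative-of-Gaussian
    filter cut in half by a Heaviside step along its main (rotated) axis."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate coordinates by theta (orientation of the elongated axis)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    # anisotropic Gaussian: long axis along xr, short axis along yr
    gauss = np.exp(-(xr**2 / (2 * sigma_long**2)
                     + yr**2 / (2 * sigma_short**2)))
    # first derivative across the elongated axis (edge-sensitive direction)
    deriv = -(yr / sigma_short**2) * gauss
    # Heaviside cut: keep only one half-plane along the main axis
    return deriv * (xr >= 0)
```

Rotating `theta` over a set of orientations and taking, at each pixel, the extremal responses of two such half kernels is what lets the method resolve corners and crossing edges that a full symmetric kernel would blur together.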
These comparative results show a significant improvement in performance using the proposed clustering/black-box optimization method. But once again, they should be considered with caution because the surface is rather small and the maximum scallop height values are relatively high. It would be interesting to compare both methods on parts that are more representative of industrial ones, using parameters commonly employed in industry.
We investigate the estimation of the density-weighted average derivative from biased data. An estimator integrating a plug-in approach and wavelet projections is constructed. We prove that it attains the parametric rate of convergence 1/𝑛 under the mean squared error.
This article provides insights for the literature on commodity derivative markets in several directions. First, it proposes a new empirical implication of the Samuelson Hypothesis and suggests the proper methodology to test it. Second, it enhances the knowledge of the dynamics of futures prices in the four most important electricity futures markets worldwide. Lastly, it confirms the links between this "new" commodity and other storable commodities through the notion of "indirect storability" and suggests the presence of directionality effects from the inputs to electricity prices. This is interesting, as most models of the term structure of commodity prices rely on the storage theory (see for example Brennan (1958), Brennan and Schwartz (1985), and Cortazar and Schwartz (2003)).
Keywords: Caputo derivative, Banach fixed point theorem, Fractional differential equations. 2010 MSC: 34A34, 34B10.
The theory of fractional differential equations has attracted considerable interest in recent years, both in mathematics and in applications (see (Bengrine & Dahmani, 2012; Delbosco & Rodino,
that it is separated from the control software of the engine dyno, and that it offers optimum settings within minutes. ESCHER requires very little configuration: engineers select only the constraints to be met and the parameters to be optimized, then the tool can start optimizing the current operating point. This way, the long test sequences involved in the usual full scan of the parameters, as well as the heavy processing of the huge amount of acquired data, are avoided. It is of course still necessary to set up the bench in its conventional aspects. ESCHER performs the optimization within minutes, depending on the number of parameters to modify and the number of responses to be optimized.
Global sensitivity analysis is used to quantify the influence of input variables on a numerical model output. Sobol' indices are now classical sensitivity measures. However, their estimation requires a large number of model evaluations, especially when interaction effects are of interest. Derivative-based global sensitivity measures (DGSM) have recently shown their efficiency for the identification of non-influential inputs. In this paper, we define crossed DGSM, based on second-order derivatives of the model output. By using an L²-Poincaré inequality, we provide a crossed-DGSM-based maximal bound for the superset importance (i.e. the total Sobol' index of an interaction between two inputs). In order to apply this result, we discuss how to estimate the Poincaré constant for various probability distributions. Several analytical and numerical tests show the performance of the bound and allow a generic strategy for interaction screening to be developed.
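The type of Poincaré-based bound described can be written schematically as follows. The notation ($\Upsilon_{ij}$, $\nu_{ij}$, $C(\mu)$) is ours, and this is a sketch of the inequality's general shape rather than the paper's exact statement.

```latex
% Superset importance of the pair (i, j): all Sobol' terms containing both,
% and the crossed DGSM built from the second-order mixed derivative
\Upsilon_{ij} = \sum_{I \supseteq \{i,j\}} D_I,
\qquad
\nu_{ij} = \mathbb{E}\!\left[\left(
  \frac{\partial^2 f}{\partial x_i \,\partial x_j}\right)^{\!2}\right].

% Poincare-type bound: C(\mu_k) denotes the Poincare constant of the
% distribution of input k, e.g. C = (b-a)^2/\pi^2 for the uniform law
% on [a, b] and C = \sigma^2 for a Gaussian N(0, \sigma^2)
\Upsilon_{ij} \;\le\; C(\mu_i)\, C(\mu_j)\, \nu_{ij}.
```

The practical appeal is that $\nu_{ij}$ requires only derivative evaluations, so a small bound certifies that the whole family of interaction terms involving inputs $i$ and $j$ is negligible, without estimating any Sobol' index directly.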
One way to approach the question of whether there are non-derivative partial reasons of any kind is to give an account of what partial reasons are, and then to consider whether there are such reasons. If there are, then it is at least possible that there are partial reasons of friendship. It is this approach that will be taken here, and it produces several interesting results. The first is a point about the structure of partial reasons. It is at least a necessary condition of a reason’s being partial that it has an explicit relational component. This component, technically, is a relatum in the reason relation that is itself a relation between the person to whom the reason applies and the person whom the action for which there is a reason concerns. The second conclusion of the paper is that this relational component is also required for a number of types of putatively impartial reasons. In order to avoid trivialising the distinction between partial and impartial reasons, some further sufficient condition must be applied. Finally, there is some prospect for a way of distinguishing between impartial reasons that contain a relational component and partial reasons, but this approach suggests that the question of whether ethics is partial or impartial will be settled at the level of normative ethical discourse, or at least not at the level of discourse about the nature of reasons for action.
cells whose value is close to f(x⋆), and determines how this number evolves with the depth h. The smaller d is, the more accurate the optimization should be.
New challenges Adaptations to different data complexities

As did Bubeck and Slivkins (2012), Seldin and Slivkins (2014), and De Rooij et al. (2014) in other contexts, we design algorithms that demonstrate near-optimal behavior under data-generating processes of different natures, obtaining the best of all these possible worlds. In this paper, we consider the two following data complexities, for which we bring new, improved adaptation.
We have seen, in Section 3.1.2, that optimization criteria are represented by Criterion Agents. Each of these agents computes a critical level, thanks to a criticality function. These functions are defined by the user. Knowing the variation range of the variable on which a criterion is applied helps to build cost-efficient functions (otherwise, the use of the exponential function is required). A default polynomial function for each type of criterion is proposed and implemented in ESCHER. With these functions, the user only has to select the type of criterion (setpoint, threshold, or optimization), and to specify a value (for a setpoint or a threshold) and a direction (for a threshold or an optimization).
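As an illustration of what such default polynomial criticality functions might look like, consider the following sketch. The function shapes, names, and normalization to [0, 1] are hypothetical and not ESCHER's actual implementation; they only show how a variation range makes a cheap polynomial sufficient.

```python
def setpoint_criticality(value, setpoint, vrange):
    """Hypothetical default criticality for a setpoint criterion:
    0 at the setpoint, growing quadratically, capped at 1 when the
    deviation reaches the known variation range."""
    return min(1.0, ((value - setpoint) / vrange) ** 2)

def threshold_criticality(value, threshold, vrange, direction="below"):
    """Hypothetical criticality for a threshold criterion: zero on the
    allowed side of the threshold, polynomial growth on the violated
    side. `direction` states which side the value must stay on."""
    excess = (value - threshold) if direction == "below" else (threshold - value)
    if excess <= 0:
        return 0.0
    return min(1.0, (excess / vrange) ** 2)
```

Because the polynomial is normalized by the variation range, criticalities of heterogeneous variables (pressures, temperatures, emissions) become comparable, which is what lets the Criterion Agents negotiate on a common scale.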
In order to understand the stringy origin of more general higher derivative couplings in N = 2 theories, one needs to go beyond the purely gravitational couplings in ten dimensions. In the NS-NS sector of string theory, H 2 R 3 couplings are specified by a five-point function [ 13 ]. Direct amplitude calculations beyond this order are exceedingly difficult, but recent progress in the classification of string backgrounds using generalised complex geometry and T-duality covariance provides rather powerful constraints on the structure of the quantum corrections in the effective actions. A partial result for the six-point function, obtained recently, together with T-duality constraints and the heterotic/type II duality beyond leading order, allows one to recover the ten-dimensional perturbative action almost entirely [ 14 ] (the few yet unfixed terms mostly vanish in CY backgrounds and hence are not relevant for the current project). The eleven-dimensional lift of the modified coupling leads to the inclusion of the M-theory four-form field strength; the subsequent reduction on a non-trivial KK monopole background allows the full set of RR fields to be incorporated in the one-loop eight-derivative couplings. This knowledge will be crucially used for obtaining the relevant four-dimensional N = 2 couplings.
Xi Yin †
Jefferson Physical Laboratory, Harvard University, Cambridge, Massachusetts 02138, USA (Received 22 March 2015; published 31 August 2015)
We study supersymmetry constraints on higher derivative deformations of type IIB supergravity by consideration of superamplitudes. Combining constraints of on-shell supervertices and basic results from string perturbation theory, we give a simple argument for the nonrenormalization theorem of Green and Sethi, and some of its generalizations.
Since P. nivalis is used in traditional medicine to treat stomach ulcers, gastritis, internal pain, and a sensation of burning in the stomach, and since Marilone A, a phthalide derivative isolated from the sponge-derived fungus Stachylidium sp., and some chromenes were shown to have antiplasmodial activity against Plasmodium berghei (Oshimi et al. 2009; Almeida et al. 2011) and strong antiprotozoal activity (Harel et al. 2013), compounds 1–3 were evaluated for their anti-H. pylori and anti-P. falciparum activities.