We assume circuits are well-formed: they do not contain cycles, and the domain of each of a function's input pins includes at least the image of the output pin to which it is connected. Function circuits differ from imperative programming by the absence of explicit variables and loops, and in this sense are a closer parent to the functional programming paradigm. While not the purpose of this paper, it is relatively easy to convince oneself that, with a suitably defined set of primitive functions, circuits can represent a wide range of computations over various types of data. For example, the computation pipelines of the BeepBeep event stream processing engine [8], composed of a graph of independent units called "processors", can be modeled as function circuits. Similarly, such a model can accommodate a variety of other functions, such as Boolean connectives, quantifiers, path selectors in a tree structure, and so on. We can hence consider it suitably generic to encompass the explainability use cases described in the beginning.
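The evaluation semantics of such well-formed (cycle-free) circuits can be sketched in a few lines. The following is a hypothetical illustration, not BeepBeep's actual API: a circuit is a DAG of primitive functions evaluated in topological order.

```python
# Minimal sketch of a function circuit (hypothetical, not BeepBeep's API):
# a DAG of primitive functions evaluated in topological order.
from graphlib import TopologicalSorter

def evaluate_circuit(functions, wires, inputs):
    """functions: node name -> callable; wires: node name -> upstream
    node names feeding its input pins; inputs: values of primary inputs.
    Returns the computed value at every node."""
    deps = {n: wires.get(n, []) for n in functions}
    values = dict(inputs)
    for n in TopologicalSorter(deps).static_order():
        if n not in values:                      # skip primary inputs
            values[n] = functions[n](*(values[u] for u in wires[n]))
    return values

# A circuit computing (x + y) * 2:
functions = {"x": None, "y": None,
             "add": lambda a, b: a + b, "dbl": lambda v: 2 * v}
wires = {"add": ["x", "y"], "dbl": ["add"]}
out = evaluate_circuit(functions, wires, {"x": 3, "y": 4})
# out["add"] is 7 and out["dbl"] is 14
```

Because the circuit is acyclic, the topological order is always defined, which is exactly the well-formedness assumption above.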

The physical systems studied triggered the development of various proposals. For example, in molecular studies, methods were designed based on molecular dynamics [6, 3], on internal coordinates (dihedral angles) [9, 10], or on variants thereof [11, 4]. In a related vein, WL was also used to perform numerical integration: a multidimensional integral may be approximated by a discrete sum of function values multiplied by the measure of points achieving a given value [12]. (Note that the function value plays the role of the density of states of a physical system.) Such calculations are of special interest for studying convergence, since exact values (for the whole integral or the density of states) make it possible to scrutinize the convergence properties [13]. In this context, it was observed that the bin width introduces another kind of saturation error, which calls for a refined treatment of function values [13].
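The integration idea can be made concrete with a small sketch. This is a didactic stand-in, not Wang-Landau itself: the measure of points achieving each function value is estimated here by plain uniform sampling.

```python
import numpy as np

def integral_via_density(f, a, b, n_points=100_000, n_bins=50, seed=0):
    """Approximate the integral of f over [a, b] as a sum of binned
    function values times the measure of points achieving each value
    (the 'density of states' of f)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, n_points)
    v = f(x)
    hist, edges = np.histogram(v, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    measure = (b - a) * hist / n_points   # measure of x with f(x) in each bin
    return float(np.sum(centers * measure))

approx = integral_via_density(lambda x: x**2, 0.0, 1.0)  # exact value is 1/3
```

With very few (wide) bins the estimate saturates at the bin-center values, illustrating the bin-width error discussed in [13].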

has attracted a lot of attention in machine learning recently; see, e.g., [13, 14, 19, 25, 35, 42, 53] for incremental algorithms and [1, 26, 30, 33, 47, 55, 56] for accelerated variants.
Yet, as noted in [8], one is typically not interested in minimizing the empirical risk—that is, a finite sum of functions—with high precision, but should instead focus on the expected risk involving the true (unknown) data distribution. When one can draw an infinite number of samples from this distribution, the true risk (1) may be minimized by using appropriate stochastic optimization techniques. Unfortunately, fast methods designed for deterministic objectives do not apply to this setting; methods based on stochastic approximations indeed admit optimal "slow" rates that are typically O(1/√k) for convex functions and O(1/k) for strongly convex ones, depending on the exact assumptions made on the problem, where k is the number of noisy gradient evaluations [38]. Better understanding the gap between deterministic and stochastic optimization is one goal of this paper. Specifically, we are interested in Nesterov's acceleration of gradient-based approaches [39, 40]. In a nutshell, gradient descent or its proximal variant applied to a µ-strongly convex, L-smooth function achieves an exponential convergence rate O((1 − µ/L)^k) in the worst case in function
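The worst-case deterministic rate can be checked numerically on a small quadratic, a sketch under the stated µ-strong convexity and L-smoothness assumptions:

```python
import numpy as np

# Gradient descent with step 1/L on f(x) = 0.5 * (mu*x1^2 + L*x2^2),
# which is mu-strongly convex and L-smooth; the distance to the
# minimizer (the origin) contracts by at most (1 - mu/L) per step.
mu, L = 0.1, 1.0
hess = np.array([mu, L])                 # Hessian eigenvalues
x = np.array([1.0, 1.0])
dist0 = np.linalg.norm(x)
k = 50
for _ in range(k):
    x = x - (1.0 / L) * (hess * x)       # grad f(x) = (mu*x1, L*x2)
within_bound = np.linalg.norm(x) <= (1 - mu / L) ** k * dist0 + 1e-12
```

Stochastic gradients break this geometric contraction, which is precisely the deterministic/stochastic gap discussed above.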

Figure 4: Simulation tree with two levels: trajectories and tap times
The depth of the simulation tree can be increased by adding the "tap" function, which triggers special behavior in some types of birds. For each shooting angle, the agent is able to choose a time at which to perform the tap, which constitutes another array of possibilities, thereby adding another level to the simulation tree (Figure 4). This is done similarly to the first level, through simulation duplication. This method proves to be computationally inexpensive, as the initial object recognition and mapping are not redone; rather, their results are copied and simulated in another way (e.g., different angles or tap times, but with the same initial scene configuration). In the simulations, a special bird behavior triggers effects similar to those in the real game: for example, blue birds spawn new instances at different angles, yellow birds gain a speed boost, black birds explode, and white birds shoot projectiles downwards. This way, the model can be modified to better fit the game without changing the decision-making process.
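The duplication scheme can be sketched as follows (a toy illustration; `simulate` and the scene dictionary are hypothetical stand-ins for the recognized and mapped game scene):

```python
import copy

def build_simulation_tree(scene, angles, tap_times, simulate):
    """Two-level simulation tree: object recognition/mapping produced
    `scene` once; each (angle, tap time) branch re-simulates a cheap
    deep copy of it instead of redoing the expensive initial steps."""
    leaves = []
    for angle in angles:
        for tap in tap_times:
            branch = copy.deepcopy(scene)          # duplicate, don't rebuild
            leaves.append(((angle, tap), simulate(branch, angle, tap)))
    return leaves

scene = {"pigs": 2, "birds": 3}                    # stand-in mapped scene
toy_score = lambda s, angle, tap: s["pigs"] * angle - tap
tree = build_simulation_tree(scene, [10, 20], [0.5, 1.0], toy_score)
# 2 angles x 2 tap times -> 4 leaves
```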

Abstract—To design faster and more energy-efficient systems, numerous inexact arithmetic operators have been proposed, generally obtained by modifying the logic structure of conventional circuits. However, as the quality of service of an application has to be ensured, these operators need to be precisely characterized to be usable in commercial or real-life applications. The characterization of the error induced by inexact operators is commonly achieved with exhaustive or stochastic bit-accurate gate-level simulations. However, for high bit-widths, the time and memory required for such simulations become prohibitive. To overcome these limitations, a new characterization framework for inexact operators is proposed. The proposed framework characterizes the error induced by inexact operators in terms of mean error distance, error rate and maximum error distance, allowing the error probability mass function to be completely defined. By exploiting statistical properties of the approximation error, the number of simulations needed for precise characterization is minimized. From user-defined confidence requirements, the proposed method computes the minimal number of simulations needed to obtain the desired accuracy on the characterization of the error rate and mean error distance. The maximum error distance is then extracted from the simulated samples using extreme value theory. For 32-bit adders, the proposed method reduces the number of simulations needed to a few tens of thousands of points.
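As a rough illustration of sampling-based characterization (only the sampled metrics, without the paper's confidence-driven stopping rule or extreme value theory), consider a lower-part-OR approximate adder, a common inexact-operator construction, characterized by Monte Carlo simulation. The operator and bit-widths are illustrative assumptions:

```python
import random

def approx_add(a, b, k=4):
    """Toy inexact adder: the k low-order bits are OR-ed instead of
    added (lower-part-OR), so the error equals -(a & b & mask)."""
    mask = (1 << k) - 1
    return ((((a >> k) + (b >> k)) << k) | ((a & mask) | (b & mask)))

def characterize(n=50_000, bits=16, seed=1):
    """Estimate mean error distance, error rate and (empirical)
    maximum error distance from n random input pairs."""
    rng = random.Random(seed)
    errs = [abs(approx_add(a, b) - (a + b))
            for a, b in ((rng.getrandbits(bits), rng.getrandbits(bits))
                         for _ in range(n))]
    return (sum(errs) / n,                  # mean error distance (~3.75)
            sum(e != 0 for e in errs) / n,  # error rate (~1 - (3/4)^4)
            max(errs))                      # empirical maximum (<= 15)

mean_ed, error_rate, max_ed = characterize()
```

The empirical maximum is exactly the quantity the paper's framework refines with extreme value theory, since rare large errors are hard to capture by sampling alone.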

The recent enthusiasm for artificial intelligence (AI) is due principally to advances in deep learning. Deep learning methods are remarkably accurate, but also opaque, which limits their potential use in safety-critical applications. To achieve trust and accountability, designers and operators of machine learning algorithms must be able to explain the inner workings, the results and the causes of failures of algorithms to users, regulators, and citizens. The originality of this paper is to combine technical, legal and economic aspects of explainability to develop a framework for defining the "right" level of explainability in a given context. We propose three logical steps: First, define the main contextual factors, such as who the audience of the explanation is, the operational context, the level of harm that the system could cause, and the legal/regulatory framework. This step helps characterize the operational and legal needs for explanation, and the corresponding social benefits. Second, examine the technical tools available, including post hoc approaches (input perturbation, saliency maps...) and hybrid AI approaches. Third, as a function of the first two steps, choose the right levels of global and local explanation outputs, taking into account the costs involved. We identify seven kinds of costs and emphasize that explanations are socially useful only when total social benefits exceed costs.

convex hull of the lifted vertices (Figure 8).
Note that after Delaunay refinement the final mesh may have a non-uniform density of vertices, reflecting a non-uniform sizing field that is the pointwise minimum between a (possibly non-uniform) user-defined sizing field and the local feature size of the meshed domain. To preserve this non-uniform density throughout the optimization process, the Lloyd and ODT energy integrals are computed using a weighted version of the error, where the weights are locally estimated from the average length of edges incident to each vertex of the mesh after refinement. For both optimizers, at each optimization step, closed-form formulas provide the new location of the mesh vertices as a function of the current mesh vertices and connectivity [30, 1]. Each optimization step computes the new position of all mesh vertices, relocates them and updates the Delaunay triangulation as well as both restricted Delaunay triangulations.

These previous works focus on modeling the IEEE 802.15.4 standard, and therefore do not aim to provide generic analytical frameworks. To the best of our knowledge, only a few generic models have been proposed in the literature. Vuran et al. [20] proposed a theoretical framework that exploits the spatial correlation of observed events between sensor nodes at the MAC layer to reduce unnecessary data transmissions. In [21], the authors analyzed the duty cycle, energy efficiency and latency of a handful of MAC protocols in the context of low data-rate WSNs with respect to various network parameters such as the network density and the transceiver. While the proposed traffic and radio models are generic, the latency and energy models are specific to each MAC, making the proposed approach hard to extend to new protocols. Asudeh et al. [22] proposed a selection framework to choose the protocol that satisfies the requirements for a given context defined by a set of input parameters. Three categories of protocols (preamble sampling, common active period and scheduled) are defined, and it is assumed that protocols in the same category have similar performance characteristics. The authors defined a combined performance function that relates different metrics (delay, energy consumption...) into a single scalar measure by scaling each metric appropriately. The aim of this performance function is to quantify the performance of each protocol so as to choose the most appropriate one for a particular context and application requirements. However, the purpose of our work is not to provide a selection algorithm, but an analytical framework to evaluate different MAC schemes.
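The combined performance function of [22] is essentially a weighted scalarization; a plausible reading (the exact scaling in [22] may differ) is:

```python
def combined_performance(metrics, ranges, weights):
    """Scale each metric (lower is better: delay, energy, ...) into
    [0, 1] over its range, then take a weighted sum; the protocol with
    the smallest score wins. A generic sketch, not the exact function
    of [22]."""
    score = 0.0
    for name, value in metrics.items():
        lo, hi = ranges[name]
        scaled = (value - lo) / (hi - lo) if hi > lo else 0.0
        score += weights[name] * scaled
    return score

ranges = {"delay_ms": (1, 100), "energy_mj": (0.1, 5.0)}
weights = {"delay_ms": 0.5, "energy_mj": 0.5}
a = combined_performance({"delay_ms": 10, "energy_mj": 2.0}, ranges, weights)
b = combined_performance({"delay_ms": 80, "energy_mj": 0.5}, ranges, weights)
best = "A" if a < b else "B"
```

The choice of weights encodes the application requirements, which is what makes such selection frameworks context-dependent.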

Figure 1 – The meta-model for rule-based security formalisms
The three bottom classes (Policy, Rule and Element) in the diagram in Figure 1 allow defining actual security policies using a formalism defined with the three top classes. The class Policy is the root class to instantiate in order to create a security policy. Each policy must have a type (an instance of the class PolicyType discussed in the previous paragraph) and contains elements and rules. The type of a policy constrains the types of elements and rules it can contain. Each element has a type, which must belong to the element types of the policy type. If the hierarchy property of the element type is true, then the element can contain children of the same type as itself; this is used, for example, to define hierarchies of roles in OrBAC. Finally, rules are defined by instantiating the Rule class. Each rule has a type, which again must belong to the policy type. Each rule has a set of parameters whose types must match the types of the parameters of the rule's type.
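The conformance constraints described above can be sketched with two small classes (hypothetical names and structure, not the paper's actual implementation):

```python
class PolicyType:
    """A security formalism: the element and rule types it allows."""
    def __init__(self, element_types, rule_types):
        self.element_types = set(element_types)
        self.rule_types = set(rule_types)

class Policy:
    """A concrete policy; its type constrains what it may contain."""
    def __init__(self, ptype):
        self.ptype, self.elements, self.rules = ptype, [], []

    def add_element(self, etype, name):
        if etype not in self.ptype.element_types:
            raise ValueError(f"element type {etype!r} not allowed")
        self.elements.append((etype, name))

    def add_rule(self, rtype, *params):
        if rtype not in self.ptype.rule_types:
            raise ValueError(f"rule type {rtype!r} not allowed")
        self.rules.append((rtype, params))

# An OrBAC-like policy type with roles and a permission rule:
orbac = PolicyType({"role", "activity", "view"}, {"permission"})
policy = Policy(orbac)
policy.add_element("role", "physician")
policy.add_rule("permission", "physician", "consult", "medical-record")
```

Adding an element or rule whose type is not declared in the PolicyType raises an error, mirroring the meta-model's type-conformance constraint.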

VIII. CONCLUSION
Even if several research works have tackled the problem of software adaptation, now crucial due to the constant evolution of execution environments, very few consider the heterogeneous and distributed aspects of these environments, or the various types of possible adaptations. We propose a generic framework that, thanks to its fine-grained decomposition into functionalities, can manage different levels of adaptation (service, application, SOA, infrastructure) and cope with dynamically defined adaptation actions for parametric, functional, behavioural, structural and environmental adaptation. In particular, we have designed cooperation mechanisms to coordinate distributed analysis and decision, and on-the-fly planning of adaptation actions using abstract events and abstract actions. Examples of possible specialisations of our framework have been given and a first implementation for OSGi has been realized. Our current work concerns the implementation on heterogeneous service-oriented platforms and on top of a cloud infrastructure running an OS capable of resource virtualisation, such as XtreemOS [21].

* Correspondence: sebastien.lefevre@irisa.fr
Received: 30 July 2018; Accepted: 27 January 2019; Published: 30 January 2019
Abstract: The Geographic Object-Based Image Analysis (GEOBIA) paradigm relies strongly on the segmentation concept, i.e., partitioning of an image into regions or objects that are then further analyzed. Segmentation is a critical step, for which a wide range of methods, parameters and input data are available. To reduce the sensitivity of the GEOBIA process to the segmentation step, here we consider that a set of segmentation maps can be derived from remote sensing data. Inspired by the ensemble paradigm that combines multiple weak classifiers to build a strong one, we propose a novel framework for combining multiple segmentation maps. The combination leads to a fine-grained partition of segments (super-pixels) that is built by intersecting individual input partitions, and each segment is assigned a segmentation confidence score that relates directly to the local consensus between the different segmentation maps. Furthermore, each input segmentation can be assigned some local or global quality score based on expert assessment or automatic analysis. These scores are then taken into account when computing the confidence map that results from the combination of the segmentation processes. This means that the process is less affected by incorrect segmentation inputs, either at the local scale of a region or at the global scale of a map. In contrast to related works, the proposed framework is fully generic and does not rely on specific input data to drive the combination process. We assess its relevance through experiments conducted on the ISPRS 2D Semantic Labeling benchmark. Results show that the confidence map provides valuable information that can be produced when combining segmentations, and fusion at the object level is competitive w.r.t. fusion at the pixel or decision level.
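A minimal sketch of the combination step, assuming label maps of identical shape (the confidence definition below is a simplified proxy for the authors' consensus score, not their exact formulation):

```python
import numpy as np

def combine_segmentations(maps, quality=None):
    """Intersect several label maps into super-pixels; score each
    horizontal inter-pixel boundary by the (quality-weighted) fraction
    of maps that agree on merging vs. splitting there."""
    maps = np.stack(maps)                          # (n_maps, H, W)
    n, h, w = maps.shape
    quality = np.ones(n) if quality is None else np.asarray(quality, float)
    quality = quality / quality.sum()
    # Super-pixels: unique label tuples across all input maps.
    flat = maps.reshape(n, -1).T
    _, superpixels = np.unique(flat, axis=0, return_inverse=True)
    superpixels = superpixels.reshape(h, w)
    # Consensus on horizontal adjacencies: weighted vote for "merge".
    same = (maps[:, :, :-1] == maps[:, :, 1:]).astype(float)
    agree_merge = np.tensordot(quality, same, axes=1)
    consensus = np.maximum(agree_merge, 1.0 - agree_merge)
    return superpixels, consensus

m1 = np.array([[0, 0, 1], [0, 0, 1]])   # splits the third column off
m2 = np.array([[5, 5, 5], [5, 5, 5]])   # one single segment
sp, conf = combine_segmentations([m1, m2])
```

Here the intersection yields two super-pixels, and the boundary where the two inputs disagree receives the lowest consensus (0.5), exactly the behavior the confidence map is meant to expose.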

cement.
Plastic pan: See pitch pocket.
Plasticizer (Plastifiant): A plasticizer is a material, frequently "solvent-like", incorporated in plastic or rubber to increase its ease of workability, flexibility or extensibility. Plasticizers may be monomeric liquids (phthalate esters), low-molecular-weight liquid polymers (polyesters) or rubbery high polymers (EVA). Adding a plasticizer may lower the melt viscosity, the temperature of the second-order transition, or the elastic modulus of the polymer. The most important use of plasticizers is with PVC, where the choice of plasticizer dictates under what conditions the membrane may be used.

The upper ontology represents the generic level and describes the general characteristics of the context entities that are common to all business areas. Since our goal is to define a context model for BPM, we have identified a minimum set of context entities (e.g., environment) and context elements (e.g., is located at; see Figure 2) that we consider relevant to all business processes and business fields. We have identified the context elements related to the actor, the process, resources and the business environment that seem essential for representing context in BPM. The context entities and context elements that we suggest can be extended according to the business needs of the organization. Figure 2 shows the upper ontology, which defines the set of concepts currently used in business processes, including for instance the following context entities: Actor, Organization, Process, etc. Each of these context entities is associated with contextual relationships expressing its relationships with the other context entities.

able, then the fragment Σ2 of CML(L) is decidable.
The idea of the proof (see Appendix B) is to reduce the satisfiability problem of Σ2 to the satisfiability problem of Σ0 formulas (which are formulas in the color logic L). We proceed as follows: we first prove that the fragment Σ2 has the small model property, i.e., every satisfiable formula ϕ in Σ2 has a model of bounded size (where the size is the number of tokens in each place). This bound actually corresponds to the number of existentially quantified token variables in the formula. Notice that this fact does not directly lead to an enumerative decision procedure for the satisfiability problem, since the number of models of a bounded size is in general infinite (due to infinite color domains). Then, using the fact that over a finite model universal quantifications in ϕ can be transformed into finite conjunctions, we build a formula ϕ̂ in Σ1 which is satisfiable if and only if the original formula ϕ is satisfiable. Actually, ϕ̂ defines precisely the upward closure of the set of markings defined by ϕ (w.r.t. the inclusion ordering between sets of colored markings, extended to vectors of places). Finally, it can be shown that the Σ1 formula ϕ̂ is satisfiable if and only if the Σ0 formula obtained by transforming existential quantification over tokens into existential quantification over colors is satisfiable.

2 Model for SBP elasticity
We are interested in this paper in modelling the elasticity of SBPs. An SBP is a business process that consists in assembling a set of elementary IT-enabled services. These services realise the business activities of the considered SBP. Assembling services into an SBP can be done using any appropriate service composition specification (e.g. BPEL). Elasticity of an SBP is the ability to duplicate or consolidate as many instances of the process, or of some of its services, as needed to handle the dynamics of received requests. Indeed, we believe that handling elasticity should not only operate at the process level but also at the level of services: it is not necessary to duplicate all the services of a considered SBP when the bottleneck comes from only some of them.
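A toy per-service elasticity controller makes the point concrete (the thresholds and the utilisation model are illustrative assumptions, not the paper's model):

```python
def elasticity_step(instances, load, capacity, low=0.3, high=0.8):
    """Per-service elasticity: duplicate an instance of a service when
    its utilisation exceeds `high`, consolidate one when it drops below
    `low`. Only the bottleneck service changes, not the whole SBP."""
    new = dict(instances)
    for svc, n in instances.items():
        util = load[svc] / (n * capacity[svc])
        if util > high:
            new[svc] = n + 1                    # duplicate
        elif util < low and n > 1:
            new[svc] = n - 1                    # consolidate
    return new

instances = {"order": 1, "payment": 2, "shipping": 2}
load = {"order": 90, "payment": 30, "shipping": 20}
capacity = {"order": 100, "payment": 100, "shipping": 100}
after = elasticity_step(instances, load, capacity)
# only "order" (the bottleneck) is duplicated; the idle services shrink
```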

Theorem 5.12. The distribution C is rationally integrable: there exist τ independent rational first integrals for C.
We shall give two proofs of this result. The first is a consequence of general facts about algebraic actions of algebraic groups. The second is algorithmic, and furthermore proves that we can choose these first integrals in the ring C(a_1, a_2)[a_3, ..., a_{p−3}].

The large variety of data formats also increases the diversity of reconstruction methods, particularly in PET. The reduction in the size of individual detection elements, time-of-flight (TOF) measurements, as well as the increasing use of dynamic studies (for motion correction or tracer-kinetics-based analysis) all contribute to larger and sparser histogrammed data sets. As a consequence, the use of list-mode data (Snyder & Politte 1983, Parra & Barrett 1998) as a direct input to reconstruction algorithms has gained much interest in PET (Yan et al 2012). List-mode data provide access to the initial measurement precision in terms of spatial and temporal resolution, but they are not compatible with some reconstruction algorithms, such as fast analytical algorithms or algorithms with specific optimizations for sinogram data sets (Slambrouck et al 2015). The multitude of data formats and reconstruction methods makes the development of generic codes difficult and causes practical issues for assessing, disseminating, and comparing new techniques. Often, the use of a new reconstruction technique is restricted to a given imaging modality and data format, even if in principle it could be compatible with other modalities or data formats. When the same algorithm is implemented in several contexts, implementation details differ to a greater or lesser extent and produce results that are not strictly comparable. PET, SPECT and CT use similar components for tomographic reconstruction (e.g. projection operators, iterative optimizations, geometry descriptions). The wide use of iterative methods in PET and SPECT and the re-emergence of iterative reconstruction in CT (Beister et al 2012) suggest that reconstruction software for these three modalities can be efficiently integrated into a unified iterative reconstruction framework.
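For instance, the list-mode MLEM update underlying many list-mode PET reconstructions (Parra & Barrett 1998) fits in a few dense-matrix lines; the tiny system below is an illustrative assumption, not a realistic scanner geometry:

```python
import numpy as np

def listmode_mlem(A_events, sens, lam0, n_iter=20):
    """List-mode MLEM: each row of A_events is the system-matrix row of
    one detected event; sens[j] is the voxel sensitivity (sum of a_ij
    over all possible events). Multiplicative update:
    lam_j <- (lam_j / sens_j) * sum_e a_ej / (A_e . lam)."""
    lam = np.asarray(lam0, float).copy()
    for _ in range(n_iter):
        proj = A_events @ lam                  # expected rate per event
        lam *= (A_events.T @ (1.0 / proj)) / sens
    return lam

# Two voxels, three detected events (two see voxel 0, one sees voxel 1);
# uniform unit voxel sensitivity is assumed for simplicity.
A = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
lam = listmode_mlem(A, np.ones(2), np.ones(2))
# converges to [2, 1], the maximum-likelihood activities here
```

The same update structure applies regardless of modality once the projection operator is abstracted away, which is the motivation for a unified framework.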
