A generic explainability framework for function circuits

We assume circuits are well-formed: they do not contain cycles, and the domain of each of a function's input pins includes at least the image of the output pin to which it is connected. Function circuits differ from imperative programming by the absence of explicit variables and loops, and in this sense are closer relatives of the functional programming paradigm. While not the purpose of this paper, it is relatively easy to convince oneself that, with a suitably defined set of primitive functions, circuits can represent a wide range of computations over various types of data. For example, the computation pipelines of the BeepBeep event stream processing engine [8], composed of a graph of independent units called "processors", can be modeled as function circuits. Similarly, such a model can accommodate a variety of other functions, such as Boolean connectives, quantifiers, path selectors in a tree structure, and so on. We can hence consider it suitably generic to encompass the explainability use cases described at the beginning.
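For illustration only, here is a minimal sketch of how such a circuit could be represented and evaluated, assuming single-output functions and named input pins; the node structure and the evaluation routine below are ours, not the paper's:

```python
# Minimal sketch of a function circuit: a DAG of function nodes whose
# single output pin feeds named input pins of downstream nodes.
# Names and structure are illustrative, not taken from the paper.
from graphlib import TopologicalSorter

class Node:
    def __init__(self, name, fn, inputs=None):
        self.name = name            # identifier of the node
        self.fn = fn                # the primitive function it computes
        self.inputs = inputs or {}  # input pin name -> upstream Node

def evaluate(nodes):
    """Evaluate every node in topological order (the circuit is acyclic)."""
    order = TopologicalSorter(
        {n: set(n.inputs.values()) for n in nodes}
    ).static_order()
    values = {}
    for node in order:
        args = {pin: values[src] for pin, src in node.inputs.items()}
        values[node] = node.fn(**args)
    return values

# Example: (a AND b) feeding a NOT gate
a = Node("a", lambda: True)
b = Node("b", lambda: False)
g = Node("and", lambda x, y: x and y, {"x": a, "y": b})
h = Node("not", lambda x: not x, {"x": g})
print(evaluate([a, b, g, h])[h])   # True
```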

A generic software framework for Wang-Landau type algorithms

The physical systems studied triggered the development of various proposals. For example, in molecular studies, various proposals were designed, based on molecular dynamics [6, 3], on internal coordinates (dihedral angles) [9, 10], or on variants thereof [11, 4]. In a related vein, WL was also used to perform numerical integration. A multidimensional integral may be approximated by a discrete sum of function values multiplied by the measure of the points achieving a given value [12]. (Note that the function value plays the role of the density of states of a physical system.) Such calculations are of special interest for studying convergence, since exact values (for the whole integral or for the density of states) make it possible to scrutinize the convergence properties [13]. In this context, it was observed that the bin width introduces another kind of saturation error, which calls for a refined treatment of function values [13].
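A hedged sketch of the approximation alluded to above, in our own notation rather than the paper's: partition the range of f into bins and weight each representative value by the measure of the points mapping into that bin, which plays the role of the density of states:

```latex
% Sketch of the WL-style approximation of a multidimensional integral
% (notation is illustrative, not taken from the paper).
% The range of f is split into bins b_1, ..., b_m with representative
% values f_1, ..., f_m; g(b_i) is the measure of the points whose image
% falls in bin b_i -- the quantity estimated by the Wang-Landau iteration.
\[
  \int_{\Omega} f(x)\,\mathrm{d}x
  \;\approx\;
  \sum_{i=1}^{m} f_i \, g(b_i),
  \qquad
  g(b_i) = \mu\bigl(\{\, x \in \Omega : f(x) \in b_i \,\}\bigr).
\]
```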

A Generic Acceleration Framework for Stochastic Composite Optimization

has attracted a lot of attention in machine learning recently, see, e.g., [13, 14, 19, 25, 35, 42, 53] for incremental algorithms and [1, 26, 30, 33, 47, 55, 56] for accelerated variants. Yet, as noted in [8], one is typically not interested in the minimization of the empirical risk—that is, a finite sum of functions—with high precision, but instead, one should focus on the expected risk involving the true (unknown) data distribution. When one can draw an infinite number of samples from this distribution, the true risk (1) may be minimized by using appropriate stochastic optimization techniques. Unfortunately, fast methods designed for deterministic objectives would not apply to this setting; indeed, methods based on stochastic approximation admit optimal "slow" rates that are typically $O(1/\sqrt{k})$ for convex functions and $O(1/k)$ for strongly convex ones, depending on the exact assumptions made on the problem, where k is the number of noisy gradient evaluations [38]. Better understanding the gap between deterministic and stochastic optimization is one goal of this paper. Specifically, we are interested in Nesterov's acceleration of gradient-based approaches [39, 40]. In a nutshell, gradient descent or its proximal variant applied to a µ-strongly convex L-smooth function achieves an exponential convergence rate $O((1-\mu/L)^k)$ in the worst case in function
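For reference, the rates quoted in this excerpt can be written out as follows (k is the number of, possibly noisy, gradient evaluations; constants are omitted):

```latex
% Worst-case convergence rates quoted above.
\begin{align*}
  &\text{stochastic approximation, convex:}          && O\!\left(1/\sqrt{k}\right),\\
  &\text{stochastic approximation, strongly convex:} && O\!\left(1/k\right),\\
  &\text{(proximal) gradient descent, } \mu\text{-strongly convex, } L\text{-smooth:}
                                                     && O\!\left((1-\mu/L)^{k}\right).
\end{align*}
```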

Towards A Theory-Of-Mind-Inspired Generic Decision-Making Framework

Figure 4: Simulation tree with two levels: trajectories and tap times. The depth of the simulation tree can be increased by adding the "tap" function, which triggers special behavior in some types of birds. For each shooting angle, the agent is able to choose a time to perform the tap, which constitutes another array of possibilities, therefore adding another level to the simulation tree (Figure 4). This is done similarly to the first level, through simulation duplication. This method proves to be computationally inexpensive, as the initial object recognition and mapping are not redone; instead, their results are copied and simulated in another way (e.g., different angles or tap times, but with the same initial scene configuration). In the simulations, a special bird behavior triggers similar effects as in the real game: for example, blue birds spawn new instances at different angles, yellow birds gain a speed boost, black birds explode and white birds shoot projectiles downwards. This way, the model can be modified to better fit the game without changing the decision-making process.
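A toy sketch of the two-level duplication scheme described above; all names and structures here are hypothetical and only illustrate how one base simulation can be copied per angle and per tap time without redoing the scene analysis:

```python
# Illustrative sketch of a two-level simulation tree built by duplication:
# the base simulation (scene analysis done once) is copied per shooting
# angle, then per tap time. Hypothetical structures, not the paper's code.
import copy

def build_simulation_tree(base_sim, angles, tap_times):
    tree = []
    for angle in angles:
        level1 = copy.deepcopy(base_sim)    # same scene, different angle
        level1["angle"] = angle
        for tap in tap_times:
            level2 = copy.deepcopy(level1)  # same trajectory, different tap
            level2["tap_time"] = tap
            tree.append(level2)
    return tree

# 10 angles x 5 tap times = 50 leaf simulations sharing one scene analysis
sims = build_simulation_tree({"scene": "parsed_once"}, range(10), range(5))
print(len(sims))  # 50
```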

Adaptive Simulation-based Framework for Error Characterization of Inexact Circuits

Abstract—To design faster and more energy-efficient systems, numerous inexact arithmetic operators have been proposed, generally obtained by modifying the logic structure of conventional circuits. However, as the quality of service of an application has to be ensured, these operators need to be precisely characterized to be usable in commercial or real-life applications. The characterization of the error induced by inexact operators is commonly achieved with exhaustive or stochastic bit-accurate gate-level simulations. However, for high bit-widths, the time and memory required for such simulations become prohibitive. To overcome these limitations, a new characterization framework for inexact operators is proposed. The proposed framework characterizes the error induced by inexact operators in terms of mean error distance, error rate and maximum error distance, which makes it possible to completely define the error probability mass function. By exploiting statistical properties of the approximation error, the number of simulations needed for precise characterization is minimized. From user-defined confidence requirements, the proposed method computes the minimal number of simulations to obtain the desired accuracy on the characterization of the error rate and mean error distance. The maximum error distance value is then extracted from the simulated samples using extreme value theory. For 32-bit adders, the proposed method reduces the number of simulations needed to a few tens of thousands of points.
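As a hedged illustration of the confidence-driven idea (not the paper's exact derivation), a standard normal-approximation bound already shows how user-defined confidence requirements translate into a minimal number of simulations for the mean error distance:

```python
# Hedged sketch of a confidence-driven sample-size computation for the
# mean error distance of an inexact operator. This uses a textbook
# normal-approximation bound and is NOT the paper's exact method.
import math
from statistics import NormalDist

def min_samples_for_mean(sample_std, half_width, confidence=0.95):
    """Smallest n such that the confidence interval on the mean has at
    most the requested half-width, given a pilot estimate of the
    standard deviation of the error distance."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    return math.ceil((z * sample_std / half_width) ** 2)

# e.g. a pilot run gives std ~ 120 LSB of error distance; we want the
# mean known to within +/- 2 LSB at 99% confidence
print(min_samples_for_mean(120.0, 2.0, confidence=0.99))
```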

Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach

The recent enthusiasm for artificial intelligence (AI) is due principally to advances in deep learning. Deep learning methods are remarkably accurate, but also opaque, which limits their potential use in safety-critical applications. To achieve trust and accountability, designers and operators of machine learning algorithms must be able to explain the inner workings, the results and the causes of failures of algorithms to users, regulators, and citizens. The originality of this paper is to combine technical, legal and economic aspects of explainability to develop a framework for defining the "right" level of explainability in a given context. We propose three logical steps: First, define the main contextual factors, such as who the audience of the explanation is, the operational context, the level of harm that the system could cause, and the legal/regulatory framework. This step will help characterize the operational and legal needs for explanation, and the corresponding social benefits. Second, examine the technical tools available, including post hoc approaches (input perturbation, saliency maps...) and hybrid AI approaches. Third, as a function of the first two steps, choose the right levels of global and local explanation outputs, taking into account the costs involved. We identify seven kinds of costs and emphasize that explanations are socially useful only when total social benefits exceed costs.

CGALmesh: a Generic Framework for Delaunay Mesh Generation

convex hull of the lifted vertices (Figure 8). Note that after Delaunay refinement the final mesh may have a non-uniform density of vertices, reflecting a non-uniform sizing field that is the pointwise minimum between a (possibly non-uniform) user-defined sizing field and the local feature size of the meshed domain. To preserve this non-uniform density throughout the optimization process, the Lloyd and ODT energy integrals are computed using a weighted version of the error, where the weights are locally estimated from the average length of edges incident to each vertex of the mesh after refinement. For both optimizers, at each optimization step, closed-form formulas provide the new location of the mesh vertices as a function of the current mesh vertices and connectivity [30, 1]. Each optimization step computes the new position of all mesh vertices, relocates them and updates the Delaunay triangulation as well as both restricted Delaunay triangulations.
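For reference, the classical unweighted Lloyd relocation that these closed-form updates build upon moves each vertex to the centroid of its restricted Voronoi cell; this is the textbook formula, with the weighted variant and the ODT counterpart given in [30, 1]:

```latex
% Classical Lloyd relocation (unweighted textbook form, for reference only):
% each vertex x_i is moved to the centroid of its Voronoi cell V_i
% restricted to the domain, with rho a density (a sizing-derived weight
% in the weighted case used by CGALmesh).
\[
  x_i \;\longleftarrow\;
  \frac{\displaystyle\int_{V_i} y \,\rho(y)\,\mathrm{d}y}
       {\displaystyle\int_{V_i} \rho(y)\,\mathrm{d}y}.
\]
```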

Towards a Generic Framework for Black-box Explanation Methods (Extended Version)

Explainability has generated increased interest during the last decade because the most accurate ML techniques often lead to opaque ADS, and opacity is a major source of mistrust. Indeed, even if explanations are not a panacea, they can play a key role, not only to enhance trust in the system, but also to allow its users to better understand its outputs and therefore to make better use of it. In addition, they are necessary to make it possible to challenge the decisions resulting from an ADS. Explanations can take different forms, they can target different types of users (hereafter "explainees"), and different types of methods can be used to produce them. In this paper, we focus on a category of methods, called "black-box", that do not make any assumption about the availability of the code of the ADS or its implementation techniques. The only assumption is that input data can be provided to the ADS and its output data can be observed. Explainability is a fast-growing research area and many papers have been published on this topic in recent years. These papers define methods to produce different types of explanations in different ways, but they also share a number of features. The main goal of this paper is to bring to light a common structure for Black-box Explanation Methods (BEM) and to define a generic framework allowing us to compare and classify different approaches. This framework consists of three components, called respectively Sampling, Generation and Interaction. The need to conceive an explanation as an interactive process rather than a static object has been argued in a very compelling way by several authors [18, 19, 20]. It must be acknowledged, however, that many contributions in the XAI community do not emphasize this aspect. Therefore, in view of space limitations, we do not discuss the precise form that explanations and interactions with the explainee can take, and focus on the Sampling and Generation components. We characterize these components formally and use them to build a taxonomy of explanation methods. We come back to the link with the Interaction component in the conclusion. Beyond its interest as a systematic presentation of the state of the art, we believe that this framework can also provide new insights for the design of new explanation systems. For example, it may suggest new combinations of Sampling and Generation components, or criteria to choose the most appropriate combination to produce a given type of explanation.
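A minimal sketch of how the Sampling and Generation components could compose around a black box; the interfaces below are illustrative Python, whereas the paper characterizes these components formally:

```python
# Hedged sketch of a black-box explanation pipeline built from the two
# components discussed above (Sampling and Generation). Interfaces are
# illustrative and not taken from the paper.
from typing import Any, Callable, Iterable

BlackBox = Callable[[Any], Any]   # the ADS: input -> observed output

def explain(model: BlackBox,
            instance: Any,
            sample: Callable[[Any], Iterable[Any]],
            generate: Callable[[list], Any]) -> Any:
    """Query the black box on sampled inputs, then generate an explanation
    from the observed input/output pairs (only I/O access is assumed)."""
    queries = sample(instance)                       # Sampling component
    observations = [(x, model(x)) for x in queries]  # black-box queries
    return generate(observations)                    # Generation component
```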

A Generic Framework for Modeling MAC Protocols in Wireless Sensor Networks

These previous works focus on modeling the IEEE 802.15.4 standard, and therefore do not aim to provide generic analytical frameworks. To the best of our knowledge, only a few generic models have been proposed in the literature. Vuran et al. [20] proposed a theoretical framework that exploits the spatial correlation of observed events between sensor nodes at the MAC layer to reduce unnecessary data transmissions. In [21], the authors analyzed the duty cycle, energy efficiency and latency of a handful of MAC protocols in the context of low data-rate WSNs with respect to various network parameters such as the network density and the transceiver. While the proposed traffic and radio models are generic, the latency and energy models are specific to each MAC, making the proposed approach hard to extend to new protocols. Asudeh et al. [22] proposed a selection framework to choose the appropriate protocol that satisfies the requirements for a given context defined by a set of input parameters. Three categories of protocols (preamble sampling, common active period and scheduled) are defined, and it is assumed that protocols in the same category have similar performance characteristics. The authors defined a combined performance function that relates different metrics (delay, energy consumption...) into a single scalar measure by appropriately scaling each metric. The aim of this performance function is to quantify the performance of each protocol so as to choose the most appropriate one for a particular context and application requirements. However, the purpose of our work is not to provide a selection algorithm, but an analytical framework to evaluate different MAC schemes.

CGALmesh: a Generic Framework for Delaunay Mesh Generation

$\|f^{\mathrm{primal}}_{\mathrm{PWL}} - f\|_{L^1}$, measures the volume enclosed between the unit paraboloid of $\mathbb{R}^4$ and the lower boundary of the convex hull of the lifted vertices (Figure 8). Note that after Delaunay refinement the final mesh may have a non-uniform density of vertices, reflecting a non-uniform sizing field that is the pointwise minimum between a (possibly non-uniform) user-defined sizing field and the local feature size of the meshed domain. To preserve this non-uniform density throughout the optimization process, the Lloyd and ODT energy integrals are computed using a weighted version of the error, where the weights are locally estimated from the average length of edges incident to each vertex of the mesh after refinement. For both optimizers, at each optimization step, closed-form formulas provide the new location of the mesh vertices as a function of the current mesh vertices and connectivity [Du and Wang 2003; Alliez et al. 2005]. Each optimization step computes the new position of all mesh vertices, relocates them and updates the Delaunay triangulation as well as both restricted Delaunay triangulations.

A Generic Metamodel For Security Policies Mutation

Figure 1 – The meta-model for rule-based security formalisms. The three bottom classes (Policy, Rule and Element) in the diagram of Figure 1 allow defining actual security policies using a formalism defined with the three top classes. The class Policy is the root class to instantiate in order to create a security policy. Each policy must have a type (which is an instance of the class PolicyType discussed in the previous paragraph) and contains elements and rules. The type of a policy constrains the types of elements and rules it can contain. Each element has a type which must belong to the element types of the policy type. If the hierarchy property of the element type is true, then the element can contain children of the same type as itself. This is used, for example, to define hierarchies of roles in OrBAC. Finally, rules can be defined by instantiating the Rule class. Each rule has a type which must again belong to the policy type. Each rule has a set of parameters whose types must match the types of the parameters of the rule's type.
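Purely as an illustration (the actual metamodel is given as a class diagram in Figure 1, not as code), the containment and typing constraints described above could be sketched as follows; attribute names mirror the description but the code itself is hypothetical:

```python
# Hypothetical sketch of the rule-based security metamodel described
# above: a Policy has a PolicyType that constrains the types of the
# Elements and Rules it contains.
from dataclasses import dataclass, field

@dataclass
class ElementType:
    name: str
    hierarchy: bool = False      # True: elements of this type may nest

@dataclass
class RuleType:
    name: str
    parameter_types: list        # ordered list of ElementType

@dataclass
class PolicyType:
    name: str
    element_types: list
    rule_types: list

@dataclass
class Element:
    type: ElementType
    name: str
    children: list = field(default_factory=list)  # only if hierarchy=True

@dataclass
class Rule:
    type: RuleType
    parameters: list             # Elements whose types must match

@dataclass
class Policy:
    type: PolicyType
    elements: list = field(default_factory=list)
    rules: list = field(default_factory=list)

    def add_rule(self, rule: Rule) -> None:
        # the rule type must belong to the policy type, and the parameter
        # types must match the declared parameter types of the rule type
        assert rule.type in self.type.rule_types
        assert [p.type for p in rule.parameters] == rule.type.parameter_types
        self.rules.append(rule)
```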

Towards a Generic Context-Aware Framework for Self-Adaptation of Service-Oriented Architectures

VIII. CONCLUSION. Even if several research works have tackled the problem of software adaptation, now crucial due to the constant evolution of execution environments, very few consider the heterogeneous and distributed aspects of these environments, as well as the various types of possible adaptations. We propose a generic framework that, thanks to its fine-grained decomposition into functionalities, can manage different levels of adaptation (service, application, SOA, infrastructure) and cope with dynamically defined adaptation actions for parametric, functional, behavioural, structural and environmental adaptation. In particular, we have designed cooperation mechanisms to coordinate distributed analysis and decision, and on-the-fly planning of adaptation actions, using abstract events and abstract actions. Examples of possible specialisations of our framework have been given and a first implementation for OSGi has been realized. Our current work concerns the implementation on heterogeneous service-oriented platforms and on top of a cloud infrastructure running an OS capable of virtualisation of resources, such as XtreemOS [21].

A Generic Framework for Combining Multiple Segmentations in Geographic Object-Based Image Analysis

Correspondence: sebastien.lefevre@irisa.fr. Received: 30 July 2018; Accepted: 27 January 2019; Published: 30 January 2019. Abstract: The Geographic Object-Based Image Analysis (GEOBIA) paradigm relies strongly on the segmentation concept, i.e., partitioning of an image into regions or objects that are then further analyzed. Segmentation is a critical step, for which a wide range of methods, parameters and input data are available. To reduce the sensitivity of the GEOBIA process to the segmentation step, here we consider that a set of segmentation maps can be derived from remote sensing data. Inspired by the ensemble paradigm that combines multiple weak classifiers to build a strong one, we propose a novel framework for combining multiple segmentation maps. The combination leads to a fine-grained partition of segments (super-pixels) that is built by intersecting individual input partitions, and each segment is assigned a segmentation confidence score that relates directly to the local consensus between the different segmentation maps. Furthermore, each input segmentation can be assigned some local or global quality score based on expert assessment or automatic analysis. These scores are then taken into account when computing the confidence map that results from the combination of the segmentation processes. This means the process is less affected by incorrect segmentation inputs, either at the local scale of a region or at the global scale of a map. In contrast to related works, the proposed framework is fully generic and does not rely on specific input data to drive the combination process. We assess its relevance through experiments conducted on the ISPRS 2D Semantic Labeling dataset. Results show that the confidence map provides valuable information that can be produced when combining segmentations, and that fusion at the object level is competitive w.r.t. fusion at the pixel or decision level.
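A minimal numpy sketch of the core combination step described above, namely intersecting input label maps into the fine-grained partition of super-pixels; this is an illustration under our own conventions, not the authors' implementation (the confidence scoring is omitted):

```python
# Hedged sketch: intersect two segmentation maps into a fine-grained
# partition, where each (label_a, label_b) pair defines one segment of
# the intersection. Illustrative only.
import numpy as np

def combine(seg_a, seg_b):
    """seg_a, seg_b: integer label maps of identical shape."""
    pairs = seg_a.astype(np.int64) * (seg_b.max() + 1) + seg_b
    _, fine = np.unique(pairs, return_inverse=True)
    return fine.reshape(seg_a.shape)

seg_a = np.array([[0, 0, 1],
                  [0, 1, 1]])
seg_b = np.array([[0, 1, 1],
                  [0, 1, 1]])
print(combine(seg_a, seg_b))
# [[0 1 2]
#  [0 2 2]]
```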

A Generic Environment for Constructing Diagnostic Hierarchies

cement. Plastic pan: See pitch pocket. Plasticizer (Plastifiant): A plasticizer is a material, frequently "solvent-like", incorporated in plastic or rubber to increase its ease of workability, flexibility or extensibility. May be monomeric liquids (phthalate esters), low molecular weight liquid polymers (polyesters) or rubbery high polymers (EVA). Adding the plasticizer may lower the melt viscosity, the temperature of the second order transition, or the elastic modulus of the polymer. The most important use of plasticizers is with PVC where the choice of plasticizer will dictate under what conditions the membrane may be used.

Towards a Generic Context Model for BPM

The upper ontology represents the generic level and describes the general characteristics of the context entities that are common to all business areas. Since our goal is to define a context model for BPM, we have identified a minimum set of context entities (e.g., Environment) and context elements (e.g., "is located at"; see Figure 2) that we consider relevant to all business processes and business fields. We have identified the context elements related to the actor, the process, the resources and the business environment that seem essential for the representation of context in BPM. The context entities and context elements that we suggest can be extended according to the business needs of the organization. Figure 2 shows the upper ontology, which defines the set of concepts currently used in business processes, including for instance the following context entities: Actor, Organization, Process, etc. Each of these context entities is associated with contextual relationships that express its relationships with the other context entities.

A Generic Framework for Reasoning about Dynamic Networks of Infinite-State Processes

able, then the fragment Σ2 of CML(L) is decidable. The idea of the proof (see Appendix B) is to reduce the satisfiability problem of Σ2 to the satisfiability problem of Σ0 formulas (which are formulas in the color logic L). We proceed as follows: we first prove that the fragment Σ2 has the small model property, i.e., every satisfiable formula ϕ in Σ2 has a model of bounded size (where the size is the number of tokens in each place). This bound corresponds to the number of existentially quantified token variables in the formula. Notice that this fact does not directly lead to an enumerative decision procedure for the satisfiability problem, since the number of models of a bounded size is in general infinite (due to infinite color domains). Then, we use the fact that over a finite model, universal quantifications in ϕ can be transformed into finite conjunctions, in order to build a formula ϕ̂ in Σ1 which is satisfiable if and only if the original formula ϕ is satisfiable. Actually, ϕ̂ defines precisely the upward closure of the set of markings defined by ϕ (w.r.t. the inclusion ordering between sets of colored markings, extended to vectors of places). Finally, it can be shown that the Σ1 formula ϕ̂ is satisfiable if and only if the Σ0 formula obtained by transforming existential quantification over tokens into existential quantification over colors is satisfiable.

A generic framework for service-based business process elasticity in the cloud

2 Model for SBP elasticity. In this paper we are interested in modelling the elasticity of SBPs. An SBP is a business process that consists in assembling a set of elementary IT-enabled services. These services realise the business activities of the considered SBP. Assembling services into an SBP can be ensured using any appropriate service composition specification (e.g. BPEL). Elasticity of an SBP is the ability to duplicate or consolidate as many instances of the process, or of some of its services, as needed to handle the dynamics of the received requests. Indeed, we believe that handling elasticity should not only operate at the process level but also at the level of individual services. It is not necessary to duplicate all the services of a considered SBP when the bottleneck comes from only some of them.
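A toy sketch of the duplicate/consolidate principle applied at the service level; the thresholds, data structures and names are hypothetical and only illustrate that a single bottleneck service can be scaled without duplicating the whole process:

```python
# Toy sketch of service-level elasticity: duplicate a service when its
# load per instance is too high, consolidate when it is too low.
# Thresholds and structures are hypothetical, not from the paper.
def elasticity_step(service, high=0.8, low=0.2):
    load_per_instance = service["load"] / service["instances"]
    if load_per_instance > high:
        service["instances"] += 1            # duplicate
    elif load_per_instance < low and service["instances"] > 1:
        service["instances"] -= 1            # consolidate
    return service

# only the bottleneck service is duplicated, not the whole process
sbp = {"order":   {"load": 0.5, "instances": 1},
       "payment": {"load": 2.7, "instances": 2}}   # bottleneck
for svc in sbp.values():
    elasticity_step(svc)
print(sbp["payment"]["instances"])   # 3
```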

A Unified Framework for Multimodal Structure-function Mapping Based on Eigenmodes

a single fiber response function was computed for the white matter, gray matter, and cerebrospinal fluid using MRtrix3 (Tournier et al., 2019). These response functions were then used to compute a fiber orientation distribution function for each voxel using constrained spherical deconvolution (Tournier et al., 2007). Two million streamlines were generated using anatomically constrained probabilistic tractography with a step size of 0.3 mm, a maximum length of 300 mm, and backtracking (Smith et al., 2012). In an effort to make connectomes more quantitative and to reduce the impact of false positive connections (Maier-Hein et al., 2017) by re-establishing the biological interpretability of streamline-based structural connections, the tractograms were filtered using SIFT2 (Smith et al., 2015), assigning to each streamline a weight representing its cross-sectional area. The cortical surface extracted with FreeSurfer was parcellated using the Desikan–Killiany atlas into N = 68 regions. Given the parcellation and the streamlines, a first structural connectome was built by counting the number of streamlines connecting each pair of cortical regions while ignoring self connections. A second connectome was built by summing the SIFT2 weights of the streamlines connecting two cortical regions, and a third by using the reciprocal of the average length of the streamlines connecting two regions. These three connectomes are identified as the count, SIFT2, and length connectomes, respectively. In all cases, the connectomes were symmetrized by summing the (i, j) and (j, i) entries of the connectome. Finally, the structural connectomes were normalized by dividing by the sum of the off-diagonal entries. While some authors remove weak connections, we followed the recommendations of Civier et al. (2019) and omitted pruning, thus keeping weak connections. The rationale is that weak connections have little impact on values derived from connectivity matrices. It is therefore preferable to simplify the processing pipeline and reduce the number of arbitrary parameters by omitting this step.
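For concreteness, a small numpy sketch of the last two steps described above (symmetrization and normalization of a connectome matrix); the code assumes a matrix C of, e.g., streamline counts between the N = 68 regions and is not the authors' pipeline:

```python
# Sketch of the symmetrization and normalization described above:
# C[i, j] holds e.g. the streamline count between regions i and j
# (self connections are ignored, so the diagonal is zeroed).
import numpy as np

def symmetrize_and_normalize(C):
    C = C + C.T                   # sum the (i, j) and (j, i) entries
    np.fill_diagonal(C, 0.0)      # ignore self connections
    return C / C.sum()            # divide by the sum of off-diagonal entries

rng = np.random.default_rng(0)
C = rng.integers(0, 100, size=(68, 68)).astype(float)
W = symmetrize_and_normalize(C)
print(W.sum())   # 1.0 (up to floating point)
```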

Normal forms of foliations and curves defined by a function with a generic tangent cone.

Theorem 5.12. The distribution C is rationally integrable: there exist τ independent rational first integrals for C. We shall give two proofs of this result. The first one is a consequence of general facts about algebraic actions of algebraic groups. The second one is algorithmic, and furthermore proves that we can choose these first integrals in the ring $\mathbb{C}(a_1, a_2)[a_3, \cdots, a_{p-3}]$.


CASToR: a generic data organization and processing code framework for multi-modal and multi-dimensional tomographic reconstruction

The large variety of data formats also increases the diversity of reconstruction methods, particularly in PET. The reduction of the size of individual detection elements, Time of Flight (TOF) measurements, as well as the increasing use of dynamic studies (for motion correction or tracer-kinetics-based analysis) all contribute to larger and sparser histogrammed data sets. As a consequence, the use of list-mode data (Snyder & Politte 1983, Parra & Barrett 1998) as a direct input to reconstruction algorithms has gained much interest in PET (Yan et al 2012). List-mode data provide access to the initial measurement precision in terms of spatial and temporal resolution, but they are not compatible with some reconstruction algorithms, such as fast analytical algorithms or algorithms with specific optimizations for sinogram data sets (Slambrouck et al 2015). The multitude of data formats and reconstruction methods makes the development of generic codes difficult and causes practical issues for assessing, disseminating, and comparing new techniques. Often, the use of a new reconstruction technique is restricted to a given imaging modality and data format, even if in principle it could be compatible with other modalities or data formats. When the same algorithm is implemented in several contexts, implementation details differ to varying degrees and produce results that are not strictly comparable. PET, SPECT and CT use similar components for tomographic reconstruction (e.g. projection operators, iterative optimizations, geometry descriptions). The wide use of iterative methods in PET and SPECT and the re-emergence of iterative reconstruction in CT (Beister et al 2012) suggest that reconstruction software for these three modalities can be efficiently integrated into a unified iterative reconstruction framework.
