The multidisciplinary open archive HAL is intended for the deposit and dissemination of research-level scientific documents, whether published or not, originating from teaching and research institutions.

their preferred programming style, gradually introducing constraints to enjoy the benefits of stronger type checking and avail themselves of its richer functional programming features.
3.20. Future work

"The derivative, as this notion appears in the elementary differential calculus, is a familiar mathematical example of a function for which both [the domain and the range] consist of functions." –Alonzo Church [1941], The Calculi of Lambda Conversion

The derivative, as commonly used, is usually associated with the calculus of infinitesimals. But the same rules for symbolic differentiation introduced by Leibniz and Newton over three centuries ago have reappeared in strange and marvelous places. In Brzozowski [1964], we encounter an example of symbolic differentiation in a discrete setting, namely regular expressions. Brzozowski's work has important and far-reaching applications in automata theory [Berry and Sethi, 1986, Caron et al., 2011, Champarnaud et al., 1999] and incremental parsing [Might et al., 2011, Moss, 2017]. Later, Thayse [1981] first introduced the Boolean differential calculus, a branch of Boolean algebra which has important applications in switching theory [Thayse and Davio, 1973] and the synthesis of digital circuits [Steinbach and Posthoff, 2017]. Symbolic differentiation has useful applications in other mathematical settings as well, including λ-calculus [Ehrhard and Regnier, 2003, Cai et al., 2014, Kelly et al., 2016, Brunel et al., 2020], incremental computation [Alvarez-Picallo et al., 2018, Alvarez-Picallo and Ong, 2019], type theory [McBride, 2001, 2008, Chen et al., 2012], category theory [Blute et al., 2006, 2009], domain theory [Edalat and Lieutier, 2002], probability theory [Kac, 1951] and linear logic [Ehrhard, 2016, Clift and Murfet, 2018].
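Brzozowski's construction is simple enough to sketch directly. The following toy matcher is an illustrative sketch, not taken from the cited works (the tuple encoding of regular expressions is our own): it decides membership by repeatedly differentiating the expression with respect to each input character and finally testing nullability.

```python
# Regular expressions as nested tuples; the Brzozowski derivative D_c(r)
# accepts exactly the words w such that r accepts c followed by w.
EMPTY, EPS = ("empty",), ("eps",)

def nullable(r):
    """Does r accept the empty word?"""
    t = r[0]
    if t == "empty": return False
    if t == "eps":   return True
    if t == "chr":   return False
    if t == "alt":   return nullable(r[1]) or nullable(r[2])
    if t == "seq":   return nullable(r[1]) and nullable(r[2])
    if t == "star":  return True

def deriv(r, c):
    """Symbolic derivative of r with respect to the character c."""
    t = r[0]
    if t in ("empty", "eps"): return EMPTY
    if t == "chr":  return EPS if r[1] == c else EMPTY
    if t == "alt":  return ("alt", deriv(r[1], c), deriv(r[2], c))
    if t == "seq":
        d = ("seq", deriv(r[1], c), r[2])
        # If the first factor is nullable, c may also start the second one.
        return ("alt", d, deriv(r[2], c)) if nullable(r[1]) else d
    if t == "star": return ("seq", deriv(r[1], c), r)

def matches(r, word):
    for c in word:
        r = deriv(r, c)
    return nullable(r)

# (ab)* matches "abab" but not "aba"
r = ("star", ("seq", ("chr", "a"), ("chr", "b")))
assert matches(r, "abab")
assert not matches(r, "aba")
```

The same derivative, taken symbolically over the alphabet, is what yields the automata constructions cited above.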


6 Related work
Task-graph programming was previously used on grids and clouds, and has also been adapted from distributed systems to supercomputers. Some of the main programming models based on graphs of tasks are listed in this section. DAGMan (Directed Acyclic Graph Manager) is designed to run complex sequences of long-running jobs on the Condor [6] middleware. The DAGMan language makes it possible to describe control dependencies between tasks, but there is no high-level statement to describe a graph easily. Moreover, data dependencies are not explicit, so no optimization of the data communications is possible. Legion [7] is a data-centric parallel programming model. It aims to make the programming system aware of the structure of the data in the program. Legion provides explicit declaration of data properties and their implementation via logical regions, which are used to describe data and their locality. A Legion program executes as a tree of tasks spawning sub-tasks recursively. Each task specifies the logical regions it will access. With this understanding of the data and their use, Legion can extract parallelism and derive the data movement implied by the specified data properties. Legion also provides a mapping interface to control the mapping of tasks and data onto the processors.
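The contrast drawn above, control-only edges in DAGMan versus explicit data dependencies in Legion, can be illustrated with a minimal sketch (hypothetical task and data names; Python's standard `graphlib` stands in for a real task runtime): declaring what each task reads and writes lets the system derive the graph itself, and with it the available parallelism.

```python
from graphlib import TopologicalSorter

# Each task declares the data it reads and writes; the dependency graph
# is derived from these declarations rather than stated as raw edges.
tasks = {
    "load":  {"reads": [],       "writes": ["raw"]},
    "clean": {"reads": ["raw"],  "writes": ["tidy"]},
    "stats": {"reads": ["tidy"], "writes": ["report"]},
    "plot":  {"reads": ["tidy"], "writes": ["figure"]},
}

# Map each piece of data to the task that produces it.
producers = {d: name for name, t in tasks.items() for d in t["writes"]}
# A task depends on the producers of everything it reads.
graph = {name: {producers[d] for d in t["reads"]} for name, t in tasks.items()}

order = list(TopologicalSorter(graph).static_order())
# "stats" and "plot" both depend only on "clean": they could run in parallel.
assert order.index("load") < order.index("clean") < order.index("stats")
```

Because the data dependencies are explicit, a runtime could also schedule the communication of `tidy` to wherever `stats` and `plot` execute, which is exactly the optimization DAGMan's control-only edges preclude.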

with all kinds of intricacies due to binders and metavariables. It is therefore hard for external programmers to contribute to the code base, for example to implement new domain-specific tactics. As a result, these systems often implement in user space a second programming language, exposed to the user for writing tactics, that takes care of binding, metavariables, backtracking and its control. For example, LTac (Delahaye 2000) is such a programming language for Coq, which also supports several other mini-languages letting the user customize the behavior of the system (e.g. to declare canonical structures (Mahboubi and Tassi 2013), used to provide unification hints (Asperti et al. 2009)). Not only does the system become more complex because of the need to provide and interpret a new programming language on top, but its semantics is problematic: the behavior of the system becomes the combination of pieces of code written in multiple languages and interacting in non-trivial ways. Static analysis of this kind of code is out of reach.

2.2 Systems of set constraints
A related line of work is program analysis systems [7, 1, 2], among others. They handle a larger class of sets (infinite sets) than Conjunto, {log} or CLPS. Set variables are introduced to model a program. The different resolution algorithms are based on transformation algorithms. These transformations preserve consistency either by computing a least model [7], which does not preserve all solutions, or by computing a finite set of systems in solved form [1]. [2] demonstrated that the latter problem is solvable in non-deterministic exponential time.

Intuitively, given a program written in some programming language (e.g., Fortran, Java, C), a DaS M is the collection of all variables and constants appearing in the program, together with the range of values for the variables. Notice that, while most programming languages are typed, here we consider an untyped formalism. We do so to keep the presentation more easily accessible to a general audience. The present formalism can be extended quite seamlessly to typed languages, even though the exercise is time- and space-consuming. Hereafter we distinguish between general data-aware systems and DaS in the technical sense of Def. 1.

Formally, we chose nonmonotonic logic because its study has been put forward by A.I. researchers as a way to handle the kind of defeasible generalisations that pervade much of our commonsense reasoning, and that are poorly captured by classical logic systems [11]. The term covers a family of formal frameworks devised to capture the kind of inference in which conclusions stay open to revision in the light of new information. We routinely draw conclusions from bodies of data that must be dropped when faced with new data. For example, we will hold that a certain bird can fly, until we learn that it is a penguin. This kind of default-based reasoning is significantly present in ethical reasoning: we may judge the moral value of an action, for example theft, differently depending on surrounding information. Factors such as the presence of alternative options, indirect consequences, or extenuating circumstances might overturn our ethical judgement. Accordingly, nonmonotonic goal specification languages are particularly well suited to modelling ethical reasoning.
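The bird/penguin default above can be made concrete with a deliberately minimal sketch (our own toy representation, not any of the formal frameworks cited): a conclusion licensed by a default is withdrawn as soon as new information blocks the default, which is precisely what makes the consequence relation nonmonotonic.

```python
# Defeasible rule: birds fly by default, unless known to be penguins.
def conclusions(facts):
    concl = set(facts)
    for x in {s for (p, s) in facts if p == "bird"}:
        if ("penguin", x) not in facts:  # the default applies only if not blocked
            concl.add(("flies", x))
    return concl

kb = {("bird", "tweety")}
assert ("flies", "tweety") in conclusions(kb)

# Adding information defeats the earlier conclusion: nonmonotonicity.
kb2 = kb | {("penguin", "tweety")}
assert ("flies", "tweety") not in conclusions(kb2)
```

A monotonic logic could never exhibit the second behavior: enlarging the premises can only enlarge the set of classical consequences.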

For microfluidic analysis, experiments were performed using a mother-machine microfluidic device consisting of arrays of parallel chambers (1 µm × 1 µm × 25 µm) connected to a large channel. Chambers were fabricated using electron-beam lithography on SU-8 photoresist (MicroChem), while the channel was fabricated using soft lithography. From the resulting master wafer, microfluidic chips were molded in polydimethylsiloxane (PDMS) and bonded to a glass slide using plasma activation. Cells, grown overnight in LB supplemented with Cam and Kan, were then loaded into the chambers by centrifugation on a spin coater using a dedicated 3D-printed device. The LB media flowed through the mother machine were supplemented with Cam and Kan, and also with 5 g l⁻¹ F-127 pluronic to passivate the PDMS surfaces and prevent cell adhesion. The medium diffuses into the chambers, providing nutrients and chemicals of interest to the cells. Chemical inducers (aTc at 200 ng/mL and arabinose at 1%) were added to the media as required using solenoid valves (The Lee Company). A peristaltic pump was used to flow the various media through the device at a flow rate of 90 µL/min. Both the microfluidic device and the medium were constantly held at 37 °C. Images were obtained using an inverted Olympus IX83 microscope with a ×60 objective. Fluorescence levels were measured within a small rectangular region of interest located at the top of each chamber, where a single cell is trapped.

ground terms. Typically a proof term is incrementally built by the elaborator: starting from a metavariable that has the type of the conjecture, the proof commands make progress by instantiating it with partial terms. Once no unresolved metavariables are left, the ground term is checked, again and in its totality, by the kernel. Extensibility of the elaborator. Finally, the elaborator is the brain of the system, but it is oblivious to the pragmatic ways of using the knowledge in the prover library, e.g. to automatically fill in gaps (Gonthier et al., 2013; Asperti et al., 2009), to coerce data from one type to another (Luo, 1996), or to enrich data to resolve mathematical abuse of notation (Sacerdoti Coen and Tassi, 2009). Therefore systems provide ad-hoc extension points to increase the capabilities of the elaborator. The languages used to write this code are typically high-level and declarative, and try to hide the intricacies of bound variables, metavariables, etc. from the user. The global algorithm is therefore split across multiple languages, defeating any hope of static analysis and documentation of the elaborator.

─ domain-specific properties verification [16,17,18,26] consists of “finding undesirable properties, such as redundant or contradictory information”;
In this paper, we are interested in the conformance checking of Product Line Models (PLMs). As many works show [1,10,24,26], product line engineering is a specific topic of Systems Engineering that requires adequate models, meta-models, methods, and tools. We are particularly interested in a kind of consistency verification called conformance checking, where “it is checked that a model satisfies the constraints captured in the meta-model, i.e., that the model is indeed a valid instance of the meta-model” [32]. The problem in the context of product lines is that verification cannot be achieved at the level of products, because product models are not instantiated from their meta-models but obtained by configuration of PLMs. The expectation is that conformance checking is achieved at the PLM level, under the assumption that any product model that can be configured from a correct PLM is itself correct. On the semantic level, a product line model is defined as the collection of all the product models that can be derived from it. Therefore, checking the conformance of the product line model is equivalent to checking the conformance of all the product models in staged configuration [47]. However, we would like to avoid verifying all the product models because their number can simply be too high [17]. The naïve approach, which carries out product model verification by checking their conformance with the product line meta-model late, is also not scalable to real-world constraints. We believe that scalable methods, techniques, and tools are needed to deal with this important issue [32], which, to the best of our knowledge, is not properly handled by existing tools. Our literature study revealed that (a) conformance checking approaches that check all the product models of the PLM do not scale to real-size models [6], and (b) the checking of larger models is sometimes even unrealizable due to the impossibility of configuring all products [16,17].

The typing system of λ`, in its non-linear variant, has been very well studied: computationally, it was famously interpreted by Parigot [30] as the λµ-calculus; proof-theoretically, it was thoroughly investigated by [12]. Although proof-nets have sometimes been dubbed “the natural deduction of linear logic”, our typing system is closer to a natural deduction. It is not exactly natural deduction [33], since it is multiple-conclusion; hence it is natural for building and typing parallel programs, but not so much for modeling human deduction. However, our system is based on natural-deduction normalization rather than sequent-calculus cut-elimination, and linear implication (⊸) is a primitive connective, with the standard introduction and elimination rules. Like proof-nets (see [17], [19]), λ` abstracts away from those inessential permutations in the order of rules that plague multiple-conclusion logical systems. As a result, λ` avoids commuting conversions, which have never been convincing from the computational point of view. In fact, λ` is not only a concurrent λ-calculus; it also looks like a natural deduction version of proof nets. Finally, it enjoys all the good properties that a well-behaved functional programming language should have: subject reduction, progress, strong normalization, and confluence. It is a step forward in the direction of that elusive concurrent λ-calculus which Milner attempted to find before creating CCS out of the failure.

Hol in the example. Instead, destruction of terms is allowed everywhere.
In contrast, in LP languages the actions of inspecting data and building data are conflated into unification, which is a cornerstone of the relational semantics of logic programming. Still, many LP systems let the user annotate predicates with their intended modes. As shown in [2, Section 3], in the execution of a well-moded program, unification is equivalent to matching if the following condition holds: (*) the query arguments in input position are ground. This observation suggests a way to introduce private types in LP based on the well-understood mechanisms of modality and typing.

with respect to M a priori as well. For M = {p}, ¬¬p would directly hold, and so rule (1) would just behave as a fact for p, making it true.
In this paper, we provide a general definition of well-supportedness for programs with a head atom and a Boolean formula in the body. This definition is parametrized in two ways: (1) the type of formulas that can be used as “assumptions,” that is, whose truth is fixed with respect to some model M; and (2) the monotonic logic that defines satisfaction of a rule body before applying the rule to derive a new conclusion. For (1), we study three cases: negated atoms, negated literals, and negated arbitrary formulas. For (2), we analyse the whole range of intermediate logics, from intuitionistic to classical logic, both included. In the paper, we prove that a group of variants collapses either into Equilibrium Logic or into Clark’s completion. To compare the different alternatives, we analyse one more property, which we call atom definability. This property asserts that if we replace occurrences of a formula ϕ in one or more rule bodies by a new auxiliary atom

5. When are the postulates satisfied?
In a previous section, we defined five rationality postulates for argumentation-based reasoning models. An important question now is: are there argumentation systems that satisfy them? If so, what are the characteristics of those systems? These questions are very ambitious, since an argumentation system has three main parameters: an underlying monotonic logic (L, CN), an attack relation R, and a semantics. In this paper, the three parameters are left unspecified, so getting a complete answer is a real challenge. In this section, we identify one family of argumentation systems that satisfies closure under the consequence operator, three broad families of argumentation systems that satisfy closure under sub-arguments, and a broad family of systems that satisfies consistency. We also show that free precedence is satisfied under most existing semantics. The results are general in the sense that they hold under any Tarskian logic, any attack relation that fulfills the mandatory properties discussed in the previous section, and any of the reviewed semantics. Some of the results are even true for any semantics that is based on conflict-freeness, i.e., that defines conflict-free extensions.
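Conflict-freeness, the weakest requirement mentioned at the end, is easy to state operationally. A minimal sketch (toy framework with hypothetical arguments and attacks, independent of any particular logic (L, CN)): an extension is conflict-free iff no member attacks another member.

```python
from itertools import combinations

def conflict_free(ext, attacks):
    """ext is conflict-free iff no argument in ext attacks one in ext."""
    return not any((a, b) in attacks for a in ext for b in ext)

# Toy framework: a attacks b, b attacks c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}

cf = [set(s) for r in range(len(args) + 1)
      for s in combinations(sorted(args), r)
      if conflict_free(set(s), attacks)]

assert {"a", "c"} in cf        # a and c do not attack each other
assert {"a", "b"} not in cf    # a attacks b
```

Every reviewed semantics (admissible, complete, preferred, stable, ...) selects its extensions among exactly these conflict-free sets, which is why results proved from conflict-freeness alone carry over to all of them.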

Contrary to system margin maximization, BER minimization requires knowledge of the constellation and coding schemes, and it is based on accurate expressions of the BER functions. In this paper, the constellations used are QAM, and the optimization is performed without a channel coding scheme. When dealing with practical coded systems, the ultimate measure is the coded BER rather than the uncoded BER. However, the coded BER is strongly related to the uncoded BER, so it is generally sufficient to focus on the uncoded BER when optimizing the uncoded part of a communication system [26].
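The shape of the uncoded BER expressions involved can be sketched with the textbook approximation for Gray-mapped square M-QAM over an AWGN channel (a standard formula; whether [26] or this paper uses this exact expression is an assumption on our part):

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def qam_ber(snr_linear, M):
    """Standard approximation of the uncoded BER of square M-QAM with
    Gray mapping over AWGN, with snr_linear the symbol SNR Es/N0."""
    k = math.log2(M)  # bits per symbol
    return (4 / k) * (1 - 1 / math.sqrt(M)) * qfunc(
        math.sqrt(3 * snr_linear / (M - 1)))

# BER decreases with SNR and increases with constellation size:
assert qam_ber(10 ** (15 / 10), 16) < qam_ber(10 ** (10 / 10), 16)
assert qam_ber(10 ** (15 / 10), 64) > qam_ber(10 ** (15 / 10), 16)
```

It is precisely this closed dependence of BER on SNR and M that a BER-minimizing loading algorithm exploits, and that margin maximization does not need.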


Answer Set Programming (ASP), other approaches have been proposed, trying to overcome some features on which no agreement seems to have been reached so far. For instance, one of those properties pursued by some authors is that the stable models of a program should be minimal with respect to the set of their true atoms. Although this holds for disjunctive logic programs in all ASP semantics, the first proposals for negation in the head (or double negation in the body) [3] already violated minimality, this being also the case for Equilibrium Logic, which is a conservative extension. For instance, a common way to represent a choice rule in Equilibrium Logic is the expression:


Conclusion
In this study, we used a specific approach to study and understand the heterogeneous gene expression profiles of approximately 600 multiple myeloma (MM) patients. Our primary goal was to provide mechanistic scenarios by identifying protein activity states of molecules that may be central to the diversity of gene expression. Our approach relies heavily on reasoning over graphs and over changes in gene expression, in the form of logic programs that combine these two types of information. The method proposed here can be summarized in the following steps. First, we obtained a directed graph allowing us to connect significantly up-/down-regulated genes to upstream MM-related cellular receptors. Second, we confronted this graph with transcriptomic data using IGGY, a tool that reasons on the logic of the graph and on shifts of expression in the data so as to predict (node, sign) assignments representing the specific states of biological entities. Using two classification approaches, we were able to identify specific assignments for MC datasets compared to NPC datasets. Finally, taking advantage of our modeling framework, we studied the effect of performing single in silico perturbations.

It is important to note that parallel TS algorithms exist for other problem classes. De Falco et al. [27] proposed a simple parallel variant of TS based on exchanging best solutions. Al-Yamani et al. [28] developed a heterogeneous parallel TS algorithm for the VLSI placement problem, which integrates the first and third schemes sketched previously. Bortfeldt et al. [29] designed a distributed parallel TS metaheuristic for the container loading problem, in which each process periodically adopts the solution of its predecessor and restarts from it. Attanasio et al. [30] also proposed a distributed parallel TS metaheuristic for the dynamic multi-vehicle dial-a-ride problem, in which two different cooperation strategies are investigated. Banos et al. [31] presented a parallel metaheuristic for the graph partitioning problem which hybridizes Simulated Annealing (SA) and TS. Blazewicz et al. [32] proposed a master-slave parallel TS algorithm for the two-dimensional cutting problem. Le Bouthillier and Crainic [33] proposed a cooperative parallel metaheuristic for the vehicle routing problem with time windows, in which TS processes and EA processes are executed in parallel. Talbi and Bachelet [34] proposed a parallel metaheuristic for the quadratic assignment problem, which uses TS as its main search agent. Maischberger [35] proposed a synchronous distributed parallel metaheuristic for vehicle routing problems, in which each process executes ILS extended with TS. It is worth noticing that the parallel algorithm proposed in this paper differs from the aforementioned parallel algorithms, essentially because it uses a novel cooperation strategy and, more importantly, a finely tuned UBQP-dedicated bit-flip perturbation operator.
