
Directional types for logic programs and the annotation method

The open multidisciplinary archive HAL is intended for the deposit and dissemination of scientific documents of research level, published or not, emanating from teaching and research institutions.

Learning any semantics for dynamical systems represented by logic programs

a single trajectory, and thus our setting can be considered as a generalized apperception task. Another major difference is that they only consider deterministic inputs, while we also capture non-deterministic behaviors. Given the same kind of single trajectory and a DMVLP (or CDMVLP), it should be possible to produce candidate past states or to try to fill in missing values. In practice, however, building such a DMVLP with GULA would require many other transitions, while the Apperception Engine can perform the task with only the given single trajectory. This system can also produce a set of constraints as well as rules. The constraints of a CDMVLP can prevent some combinations of atoms from appearing, but only in next states, while in [10, 11], constraints can prevent some states from existing anywhere in the sequence, and ensure the conservation of atoms. From Theorem 8, the conservation can also be reproduced by a CDMVLP through the right combination of optimal rules and constraints. In [25] the authors propose a general framework named ILASP for learning answer set programs. ILASP is able to learn choice rules, constraints and preferences over answer sets. Our problem setting is related to what are called “context-dependent” tasks in ILASP. Our input can be straightforwardly represented using ILASP when variables are Boolean, but the learned program does not respect our notion of optimality, and thus our learning goals differ, i.e., we guarantee to miss no potential dynamical influence. [19] proposes an incremental method to learn and revise event-based knowledge in the form of Event Calculus programs using XHAIL [34], a system that jointly abduces ground atoms and induces first-order normal logic programs. XHAIL needs to be provided with a set of mode declarations to limit the search space of possible induced rules, while our method does not require background knowledge. Still, it is possible to exploit background knowledge with GULA: for example, one could add heuristics inside the algorithm to discard rules with “too many” conditions; influences among variables, if known, could also be exploited to reduce possible bodies. Finally, XHAIL does not model constraints, and thus is not able to prevent some combinations of atoms from appearing in transitions, which can be achieved using our Synchronizer.

Quantitative Separation Logic and Programs with Lists

Regarding program analysis, the use of abstract domains (including integers and memory addresses) with quantifiers of the form ∃∗∀∗ has been considered in the work of Gulwani et al. [13, 12]. Unlike our approach, their work is based on using abstractions that prove to be sufficient, in general, for checking correctness of a large body of programs. Some of our examples, such as InsertSort, are also verified using the method of [12]. Recently, Magill et al. [17] reported on a program analysis technique that uses Separation Logic [21] extended with first-order arithmetic. However, the main emphasis of [17] is a program analysis based on counterexample-driven abstraction refinement, whereas our work focuses on distinguishing the decidable from the undecidable when combining Separation Logic with first-order arithmetic. As a matter of fact, [17] claims, without giving the proof, that validity of entailments in the purely existential fragment of Separation Logic with the ls_k(x, y) predicate and linear constraints is decidable,

Enrichment of French Biomedical Ontologies with UMLS Concepts and Semantic Types for Biomedical Named Entity Recognition Through Ontological Semantic Annotation

Medical terminologies and ontologies are a crucial resource for semantic annotation of biomedical text. In French, there are considerably fewer resources and tools to use them than in English. Some terminologies from the Unified Medical Language System have been translated, but the identifiers used in the UMLS Metathesaurus, which constitute its huge integrated value, have often been ‘lost’ during the process. In this work, we present our method and results in enriching seven French versions of UMLS sources with UMLS Concept Unique Identifiers and Semantic Types, based on information extracted from class labels, multilingual translation mappings and codes. We then measure the impact of the enrichment through the application of the SIFR Annotator, a service to identify ontology concepts in free text deployed within the SIFR BioPortal, a repository for French biomedical ontologies and terminologies. We use the Quaero Corpus to evaluate.

Linearizing some recursive logic programs

1 Introduction. We apply fixpoint techniques together with language-theory tools to derive simple algorithms for answering some queries on recursive logic programs. We give sufficient conditions on the query and the logic program which enable us to find an iterative program, i.e. a program containing only right-linear recursions, computing exactly the relevant facts needed to answer the query. The method consists of first characterizing the semantics of the logic program using fixpoint-theory tools, via algebraic or denotational methods. We compute syntactically the least fixpoint of the logic program in a Herbrand model, then interpret this least fixpoint in the actual domains. The syntactic expression of the least fixpoint can be expressed in language-theory terms, as a language L(P) depending on the syntax of P. Then, using language-theory tools, we give sufficient conditions on P which ensure that L(P) will be a rational (or regular) language. Hence, we can find an equivalent iterative (or right-linear) program P′ such that L(P) = L(P′), which will thus give the same answers to queries as P. This program P′ provides us with an efficient and easy algorithm to answer queries on P. The present method applies to a popular class of programs called chain Horn clauses; it can also be extended to programs allowing for the use of aggregate functions, provided that they are stratified and that the evaluation algorithms preserve stratification. Linearization of recursive logic programs has been extensively studied in recent deductive database research; we survey in the
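The bottom-up evaluation of a right-linear chain program can be illustrated on a tiny example. The sketch below is not the paper's algorithm; it is a naive fixpoint iteration (in Python, with an invented path/arc program) for the right-linear rules path(X,Y) ← arc(X,Y) and path(X,Y) ← arc(X,Z), path(Z,Y), whose associated language is the rational language arc⁺:

```python
# Naive bottom-up evaluation of a right-linear chain program:
#   path(X,Y) :- arc(X,Y).
#   path(X,Y) :- arc(X,Z), path(Z,Y).
# The least fixpoint is the transitive closure of the arc relation.
def least_fixpoint_path(arcs):
    """Iterate the immediate-consequence step until no new fact is derived."""
    path = set(arcs)                       # base case: a single arc
    changed = True
    while changed:
        changed = False
        for (x, z) in arcs:                # right-linear step: one arc, then path
            for (z2, y) in list(path):
                if z == z2 and (x, y) not in path:
                    path.add((x, y))
                    changed = True
    return path

arcs = {(1, 2), (2, 3), (3, 4)}
print(sorted(least_fixpoint_path(arcs)))   # the transitive closure of arcs
```

Because the recursion is right-linear, each new fact extends a derivation on the right only, which is what makes an iterative (loop-based) evaluation like this one possible.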

The Tableau Method for Temporal Logic: An Overview

Synthesis of Concurrent Programs. A direct use of the decision procedure we described in this paper has been the synthesis of the synchronization part of concurrent programs. If one assumes that the various parts of a concurrent program only interact through a finite number of signals, then their interaction can be specified in propositional temporal logic. Now, if one applies the tableau decision procedure to this specification, one obtains a graph that can be viewed as a program satisfying those specifications. Indeed, all executions of the program (paths through the graph) satisfy the specification (if one ensures that eventualities are satisfied). This approach was developed in [Wo82] and [MW84] using a linear-time temporal logic and in [CE81] using a branching-time temporal logic. A more informal approach to synthesis from temporal logic specifications appears in [RK80].

Logic and Linear programs to understand cancer response

The perfect coloring approach and the classifier were applied to the data of the Multiple Myeloma DREAM challenge. The objective of this challenge was to classify the MM patients labeled as high risk. The organizers provided the methodological community with large MM patient cohorts (25000 patients) where patient gene expression profiles and risk information were measured by different US laboratories. We tested our method with 2 sets of gene expression profiles: HOVON (GSE19784, 274 GEPs) and UAMS (GSE24080, 558 GEPs). The graph was a gene regulatory network generated with the Trrust database by querying the significantly expressed genes in the intersection of both datasets. The graph, of 447 nodes and 600 edges, was reduced to 30 components with the perfect coloring approach. After this, we applied XGBoost to learn a classifier from the HOVON dataset to predict the UAMS dataset and vice versa, and obtained precision rates of 0.75 and 0.71 respectively. Our precision rate was not satisfactory when compared to the one obtained by the other teams participating in the DREAM challenge using gene expression profiles provided by research institutes other than HOVON and UAMS. We believe our method is very sensitive to the initial graph; it is important that this graph contains all the significantly expressed genes across all GEPs provided by all the research centers. We were unable to verify this since, for this DREAM challenge in particular, the testing data is not made available to the community. Finally, this approach can be used to study divergences among the datasets provided by different experimental platforms, or in this case by different research laboratories. Such a study is crucial to check whether multiple datasets can be merged in order to create a larger one. A larger set would provide more training examples for the perfect coloring model, and this would certainly improve its accuracy.

For this, we calculated the expected value as well as the standard deviation of the distributions of similarity scores for each of the 30 components across both sets of profiles (HOVON vs. UAMS). We observe that 7 out of 30 distributions have an expected value of the similarity score at a distance equal to or greater than 0.07, such as component 7 for example (see Fig. 3). This means that we can identify regulatory mechanisms within the network pointing to regions where the experimental data provided diverges. Note that in this analysis, we supposed that the similarity scores of each component are normally distributed, so that we are able to plot their distributions and compare them. Similarity scores are linear combinations of gene expression levels, and they will be normally distributed if and only if all gene expression levels can be modeled as independent, normally distributed random variables.
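The per-component comparison described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the cohort names and the 0.07 threshold come from the text, but the component ids and score values below are made up.

```python
# For each component, compare the mean similarity score between two cohorts
# and flag components whose means differ by at least a given threshold.
from statistics import mean

def diverging_components(scores_a, scores_b, threshold=0.07):
    """scores_a / scores_b map component id -> list of similarity scores."""
    flagged = []
    for comp in scores_a:
        if abs(mean(scores_a[comp]) - mean(scores_b[comp])) >= threshold:
            flagged.append(comp)
    return flagged

# Invented example data for two cohorts and two components:
hovon = {7: [0.60, 0.62, 0.61], 8: [0.50, 0.51, 0.49]}
uams  = {7: [0.70, 0.69, 0.71], 8: [0.50, 0.52, 0.48]}
print(diverging_components(hovon, uams))   # component 7 diverges (|0.61 - 0.70| >= 0.07)
```

Note that comparing means this way is only meaningful under the normality assumption stated in the text; otherwise a distribution-free test would be needed.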

A Decidable Subtyping Logic for Intersection and Union Types (full version)

the pure lambda-term M is a realizer (also read as “M is a method to assess σ”) for either of the formulas r_A[M] and r_B[M]. Inspired by this, Barbanera and Martini tried to answer the question of realizing other “proof-functional” connectives, like strong implication, Lopez-Escobar's strong equivalence, or the provable type isomorphism of Bruce, Di Cosmo and Longo [BL85, BCL92]. Recently, [DdLS16] extended the logical interpretation with union types as another proof-functional operator, the strong union ∪. Paraphrasing Pottinger's point of view, we could say that the intuitive meaning of ∪ is that if we have a reason to assert A (or B), then the same reason will also assert A ∪ B. This interpretation makes inhabitants of (A ∪ B) ⊃ C be uniform evidence for both A ⊃ C and B ⊃ C. Symmetrically to intersection, and extending Mints' logical interpretation, the logical predicate r_{A∪B}[M] succeeds

Combining Linear Logic and Size Types for Implicit Complexity

Univ Lyon, CNRS, ENS de Lyon, Université Claude Bernard Lyon 1, LIP, F-69342, Lyon Cedex 07, France; ENS Paris-Saclay, France. Abstract. Several type systems have been proposed to statically control the time complexity of lambda-calculus programs and characterize complexity classes such as FPTIME or FEXPTIME. A first line of research stems from linear logic and defines type systems based on restricted versions of the “!” modality controlling duplication. An instance of this is light linear logic for polynomial-time computation [Girard98]. A second perspective relies on the idea of tracking the size increase between input and output and, together with a restricted use of recursion, deducing time complexity bounds from that. This second approach is illustrated for instance by non-size-increasing types [Hofmann99]. However, both approaches suffer from limitations. The first one, that of linear logic, has a limited intensional expressivity, that is to say, some natural polynomial-time programs are not typable. As to the second approach, it is essentially linear; more precisely, it does not allow for a non-linear use of functional arguments. In the present work we address the problem of incorporating both approaches into a common type system. The source language we consider is a lambda-calculus with data-types and iteration, that is to say, a variant of Gödel's system T. Our goal is to design a system for this language allowing both to handle non-linear functional arguments and to keep a good intensional expressivity. We illustrate our methodology by choosing the system of elementary linear logic (ELL) and combining it with a system of linear size types. We discuss the expressivity of this new type system and prove that it gives a characterization of the complexity classes FPTIME and 2k-FEXPTIME, for k ≥ 0.

Translating types and effects with state monads and linear logic

Multithreading: what is lacking the most with respect to other proposals of calculi (or type systems) is multithreading and concurrency. Indeed, the starting objective of this work was to combine the call-by-value translation of λ-calculus with the communication zones which were employed in [10] for a bisimulation between (a fragment of) π-calculus and differential nets. Indeed, by slightly generalizing the translation presented in this work to differential nets and non-determinism, and combining it with elements of [10], one gets a translation of a multithreaded version of the calculus. However, the target nets are very easily cyclic. For example, set(r, get(r)) | set(r, get(r)), which may in general be any two threads cooperatively updating a shared variable, is (it seems) necessarily cyclic. No particular computational property can therefore be entailed, save for simulation. The problem seems to be linked to how logic in general, and proof nets in particular, handle dependency. In proof nets, dependency (which may be tracked with switching paths) can never be created. In particular, in set(r, get(r)) | set(r, get(r)) there is a potential dependency of each of the get's on the other set, so that from the logical point of view there is a circular dependency which is somewhat hidden by prefixing. Indeed, also in the π-calculus translation, a simple process like c(x).c̄⟨x⟩ | c(x).c̄⟨x⟩ is mapped to a cyclic net. It seems then that the only direction for truly using linear logic with concurrency is either to restrict programs in order to fall within LL's scope (such as forbidding processes like the one pointed out above), or rather to find a new meaning of correctness to account for such concurrent behaviours.

Computer programs for solving notch problems using Nisitani's body force method

computer programs that were used to determine the stress intensity factors or stress concentration factors for the five types of notch and crack problems described in that report


Behavioural Types for Memory and Method Safety in a Core Object-Oriented Language

Our type system is a sound approximation of the reduction relation (cf. § 4) and rejects programs that “may go wrong” [26]. We return to this type safety result as Theorem 2 in § 5. The main intentions behind our type system are to ensure that every object will follow its specified protocol, that no null-pointer exceptions are raised, and that no object reference is lost before its protocol is completed. The system lets us type classes separately and independently of a main method or of client code. Following Gay et al. [15], when we type a class, we type its methods according to the order in which they appear in the class usage. This approach to type checking is crucial. For suppose we call a method m of an object o that also has a field containing another object o′. This call will not only change the typestate of o (assuming the method was called at a moment allowed by the usage). The call can also change the state of o′, since the code of method m may contain calls to methods found in o′. With the type-checking system we present herein, we take an important step further: by giving a special type to null and controlling it carefully in the typing rules, we manage to prevent memory leaks and detect previously allowed null-pointer exceptions.
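The idea of an object following a specified protocol can be illustrated with a toy dynamic check. This is not the paper's type system (which works statically, at compile time); it is a sketch where a class usage is an invented finite automaton and we check whether a call sequence both stays within the protocol and completes it (so that no reference is dropped mid-protocol):

```python
# A toy usage protocol for a file-like object, given as a finite automaton.
# States, method names and transitions are invented for illustration.
usage = {                       # state -> {allowed method: next state}
    "init":   {"open": "opened"},
    "opened": {"read": "opened", "close": "done"},
    "done":   {},
}

def follows_protocol(calls, start="init", final="done"):
    """True iff the call sequence is allowed and completes the protocol."""
    state = start
    for m in calls:
        if m not in usage[state]:
            return False        # method not allowed in the current typestate
        state = usage[state][m]
    return state == final       # protocol must be completed, not abandoned

print(follows_protocol(["open", "read", "read", "close"]))  # True
print(follows_protocol(["open", "close", "read"]))          # False: read after close
print(follows_protocol(["open", "read"]))                   # False: protocol not completed
```

The last case is the analogue of losing an object reference before its protocol is completed, which the static system rejects at type-checking time rather than at run time.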

A Refinement-Based Validation Method for Programmable Logic Controllers

I. Introduction. Programmable logic controllers (PLCs) are widely used for safety-critical applications in various industrial fields. The correctness and reliability of PLC programs are of great importance. Two distinguishing features of PLC programs are the use of timers and their cyclic behavior. In our previous work [1], we formally modeled and proved the correctness of a PLC program with timer control. The behavior of the program is represented by a set of sequences of system states. Timers are explicitly modeled by a set of axioms over sequences. The model is at the scan-cycle level, which means that at the beginning of each scan cycle, a system state is sampled to form the sequence. When compared to the rung-level model (i.e., one where a system state is sampled before the execution of each rung to form a sequence), the scan-cycle level model is an abstract model. In [1], five assumptions and constraints are proposed to ensure the correctness of the abstract model (the scan-cycle level model), but this correctness is not formally proved. In this paper, we continue our work: the correctness of the abstract model is formally proved in the theorem proving system Coq, based on translation validation.

A new method for evaluating the impacts of semantic similarity measures on the annotation of gene sets

whether similar results would be found. We chose to investigate nine semantic similarity measures that rely on an exhaustive panel of GO terms' features. Notably, we did not consider some recent measures that use relations other than is_a. In particular, Wang et al. proposed a semantic similarity measure that considers part_of relations [43]. We believe that using all types of relations (i.e., hierarchical and transversal) is an interesting approach and that axioms should also be considered, as described by Ferreira et al. [44]. Axioms can be used to express the meaning of concepts and relations between concepts within ontologies [45]. Thus, if the meaning of the GO terms were fully described (with a logical definition based on axioms), the GO terms could be better distinguished from their siblings (or other related terms). Some efforts have recently been made to enrich GO with such axioms [46, 47], opening up perspectives for proposing semantic similarity measures relying on their richness.

Par Means Parallel: Multiplicative Linear Logic Proofs as Concurrent Functional Programs

11 Related Work and Conclusions. λ` and Linear Session-Typed π-Calculi. An established way to interpret linear logic proofs as concurrent programs is by interpreting sequent calculi for linear logic into π-calculi with session types; see [10, 37, 21]. In those calculi, a session type is a logical expression containing information about the whole sequence of exchanges that occur between two processes along one channel. Session types are attached only to communication channels; therefore they only describe the channels' input/output behaviour, not the processes using them. In λ`, as in the tradition of the Curry-Howard correspondence, types are instead attached to processes, are read as process specifications, and are employed to formally guarantee that processes will behave according to the specifications expressed by their type. In other words, while the goal of session types is to formally describe process interactions, hence the dynamics of communication, the types of λ` formally describe the result of the computation. Session types focus on how computation proceeds; the types of λ` focus on what computation accomplishes. Nevertheless, since the nature of π-calculus processes is determined by their channels' behaviour, specifying channels also means specifying processes, hence the difference with λ` is not as significant as it may appear.

Splitting Epistemic Logic Programs

appointment(X) ← K interview(X)  (8). The two answer sets of program {(1)–(5)} contain interview(mike), and so appointment(mike) can be added to both answer sets incrementally. This method of analysing a program by division into independent parts shows a strong resemblance to the splitting theorem [9], well known in standard ASP. Splitting is applicable when the program can be divided into two parts, the bottom and the top, in such a way that the bottom never refers to head atoms in the top. When this happens, we can first compute the stable models of the bottom and then, for each one, simplify the top accordingly, getting new stable models that complete the information. We could think about different ways of extending this method to the case of epistemic logic programs, depending on how restrictive we want to be on the programs where it will be applicable. However, we will choose a very conservative case, looking for a wider agreement on the proposed behaviour. The condition we will impose is that our top program can only refer to atoms in the bottom through epistemic operators. In this way, the top is seen as a set of rules that derive facts from epistemic queries on the bottom. Thus, each world view W of the bottom will be used to replace the subjective literals in the top by their truth value with respect to W. For the sake of completeness, we recall next the basic definitions of ASP and splitting, to proceed with a formalization of epistemic splitting afterwards.
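The evaluation scheme described above can be sketched mechanically. The following is a hedged illustration, not the paper's formalization: a world view of the bottom is represented as a list of answer sets, a subjective literal "K a" is taken to hold when a is in every answer set, and the top (inspired by rule (8), with invented atom strings) derives facts from such epistemic queries:

```python
# Sketch of the splitting idea: evaluate subjective literals "K a" against a
# world view of the bottom, then fire ground top rules of the form
#   head <- K body_atom
def holds_K(atom, world_view):
    """K atom holds iff atom belongs to every answer set of the world view."""
    return all(atom in answer_set for answer_set in world_view)

def apply_top(world_view, top_rules):
    """top_rules: list of (head, body_atom); add each head whose K-query holds."""
    derived = {head for (head, body) in top_rules if holds_K(body, world_view)}
    return [answer_set | derived for answer_set in world_view]

# Both answer sets of the bottom contain interview(mike), so K interview(mike)
# holds and appointment(mike) is added to both, as in the example above.
wv = [{"interview(mike)", "a"}, {"interview(mike)", "b"}]
top = [("appointment(mike)", "interview(mike)")]
print(apply_top(wv, top))
```

This matches the conservative condition stated in the text: the top only looks at the bottom through the epistemic operator, so each world view of the bottom can be extended independently.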


Separation Logic for Sequential Programs

Krishnaswami et al. [2007] formalized the subject-observer pattern with a strong form of information hiding between the subject and the client. This work illustrated how higher-order Separation Logic supports data abstraction. Birkedal et al. [2005, 2006] tackled the generalization of Separation Logic to higher-order languages, where functions may take functions as arguments. To avoid complications with mutable variables, the authors considered a version of Algol with immutable variables and first-order heaps, in which heap cells can only store integer values. Specifications are presented using dependent types: a triple {H} t {Q} is expressed by the fact that the term t admits the type “{H} · {Q}”. One key idea from this work is to bake the frame rule into the interpretation of triples, that is, to quantify over a heap predicate describing the rest of the state, as in Definition 5.2. The technique of the baked-in frame rule later proved successful in mechanized proofs. For example, it appears in the HOL4 formalization by Myreen and Gordon [2007] (see §3.2, as well as §2.4 of Myreen's PhD thesis [2008]) and in the Coq formalization by Appel and Blazy [2007] (see Definition 9).
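The baked-in frame rule can be written out as a sketch. The notation below is ours, not necessarily the cited Definition 5.2: H and Q are heap predicates, ⋆ is separating conjunction, and t/s ⇓ v/s′ is a big-step evaluation judgment over states; the point is that the triple's interpretation quantifies over an arbitrary predicate H′ describing the rest of the state:

```latex
% Sketch: a triple with the frame rule baked into its interpretation.
% H', universally quantified, stands for the untouched rest of the heap,
% so the frame rule holds by construction rather than as a separate rule.
\[
\{H\}\; t\; \{Q\} \;\triangleq\;
\forall H'.\; \forall s.\; (H \star H')\, s \;\Longrightarrow\;
\exists v.\, \exists s'.\; t/s \Downarrow v/s' \;\wedge\; (Q\, v \star H')\, s'
\]
```

With this definition, framing a predicate onto a proved triple requires no extra reasoning step, which is why the technique works well in mechanized proofs.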

A limited set of transcriptional programs define major cell types

Methods. RNA isolation, library construction, and sequencing. For each cell type to be made into a library, we obtained cell pellets that were stored in RNAlater (Thermo Fisher Scientific) as catalog items from PromoCell (https://www.promocell.com) and ScienCell (https://www.sciencellonline.com/) (for a list of primary cells, see Supplemental Table S1). In short, the RNA was isolated from cells sorted based on cell morphology and cell-surface markers. Each cell type was passaged to expand the cell numbers for 24–48 h (1–2 doublings) before total RNA extraction and shipping. Thus, this protocol represents a minimum of exposure to non-native conditions. The cell morphologies are checked at this time. Although it is clear that the molecular context (influence of external cytokines and neighboring cells) of these cells has changed, they cluster in a very similar fashion to profiles shown by single-cell isolates of the corresponding types. Thus, the limited passaging is unlikely to affect the gene expression program. We rely on the providers' standards for quality assurance. Quality sheets are available through the ENCODE portal (https://www.encodeproject.org/search/?type=Biosample&organism.scientific_name=Homo+sapiens&biosample_ontology.classification=primary+cell&lab.title=Thomas+Gingeras%2C+CSHL&source.title=PromoCell&award.rfa=ENCODE3). We ordered three vials per cell type per donor for a total of 3 million cells. The three vials were combined, and we isolated total RNA from them using the Ambion mirVana miRNA Isolation kit (AM1561). The rRNA was removed using the RiboZero Gold protocol (RZG1224). The libraries were made using a homebrew “dUTP” protocol (Parkhomchuk et al. 2009), which generates stranded libraries. They were sequenced on the Illumina platform in mate-pair fashion and processed through the data processing pipeline at the ENCODE DCC. Additional information about each of these steps, metadata, and files can be found at https://www.encodeproject.org/.

Extending INET Framework for Directional and Asymmetrical Wireless Communications

5.1.2 Omni-directional vs. Directional Communications. The goal of the second simulation is to evaluate our DirectionalRadio by comparing the network throughput when using omni-directional and directional antennas. For this purpose, we build a mesh network simulation with 10 nodes containing 2 bridged radio interfaces, and 2 client hosts with a single radio interface. In this scenario, the radio parameters for the hosts are chosen such that directional antennas are able to “hear” up to 5 antennas placed in their same direction of orientation, even when they are supposed to communicate only with their immediate neighbors. We determined that using a network with 10 hosts is enough to create strong interference between antennas and to clearly observe the behavior of communications. All antennas in the simulation have the same maximum transmission range, and the radio interfaces are configured as shown in Fig. 5, using the same channel. In this scenario, a TCP stream is transmitted through the mesh network from client1 to client2, first using omni-directional antennas (1st case) and then a second time using directional antennas (2nd case); network throughput was monitored during the experiment.
