Abstract. Legal texts are the foundational resource from which to discover the rules and norms that feed into concrete (often XML-based) Web applications. Legislative documents provide general norms and specific procedural rules for eGovernment and eCommerce environments, contracts specify the conditions of services and business rules (e.g. service level agreements for cloud computing), and judgments provide information about legal argumentation and the interpretation of norms in concrete case law. Such legal knowledge is an important source that should be detected, properly modeled, and expressively represented in order to capture all the particularities of the domain. This paper provides an extension of RuleML, called LegalRuleML, that fosters the characteristics of legal knowledge and permits its full use in legal reasoning and in the business rule domain. LegalRuleML encourages the effective exchange and sharing of such semantic information between legal documents, business rules, and software applications.
Given that this sort of argument consistently appeared at two crucial stages in the development of EUF, we may wonder why it has been mostly ignored so far. This can be explained in part by the little appreciation that economists and philosophers alike showed for experiments until very recently. On the economists' side, experiments on EUT brought the normative dimension of the theory back to the fore [*], though without much debate as to the justification of the prescriptive rules derived from it. As for philosophers, at least in the analytic tradition, experiments are only now emerging as a source of relevant evidence for testing moral theories [*]. We may expect, then, that a more naturalist approach to normativity will bring more attention to the arguments presented by our three authors. Yet, even on this basis, it may be argued that they constitute only a very preliminary attempt at the empirical justification of decision rules. First, because identifying the population of experts (be they merchants, insurers, or statisticians) whose decision criteria should be made explicit is itself problematic. It is a broad social category from which we should draw a sample of exemplary decision-makers, and whom we take as exemplary is itself a matter of normative controversy that should be settled before any experiment takes place. In addition, since the normative justification we are looking for is consequential, we need to show that these experts owe their practical success, at least in part, to the systematic application of the decision rules under study, and not merely to chance. Otherwise, imitating the experts would offer no prospect of consequential interest. Still, those who want to explore this experimental approach to normative issues can claim the authority of several founding fathers of decision theory to vindicate their enterprise.
2 Prioritised abstract normative system
In this section, we introduce the notion of a prioritised abstract normative system (PANS) and three different approaches to computing which normative conclusions hold (referred to as an extension). A PANS captures the context of a system and the normative rules in force in that system, together with a set of permissive norms that identify exceptions under which the normative rules should not apply. There is an element of the universe called ⊤, contained in every context, and in this paper we consider only a finite universe. A PANS also encodes a ranking function over the normative rules to allow for the resolution of conflicts. Tosatto et al.  introduce a graph-based reasoning framework to classify and organize theories of normative reasoning. Roughly, an abstract normative system (ANS) is a directed graph, and a context is a set of nodes of the graph containing ⊤. In a context, an abstract normative system generates or produces an obligation set, a subset of the universe, reflecting the obligatory elements of the universe.
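The graph-based reading above can be sketched in code. The following is a hypothetical, simplified implementation, not the framework of Tosatto et al.: norms are directed edges (a, x) read as "if a holds, x is obligatory", a context is a set of elements containing ⊤, permissive norms block matching obligations, and an integer ranking resolves conflicts between a norm and a permission over the same conclusion. All names and the toy data are illustrative assumptions.

```python
# Hedged sketch of obligation-set computation in a prioritised abstract
# normative system (illustrative, not the authors' formal definitions).

TOP = "⊤"  # the universal element, contained in every context

def obligation_set(norms, permissions, rank, context):
    """Return the elements made obligatory in `context`.

    norms, permissions : sets of (antecedent, consequent) pairs
    rank               : dict mapping each pair to an integer priority
                         (higher = stronger)
    context            : set of elements, always containing TOP
    """
    assert TOP in context
    obligations = set()
    for (a, x) in norms:
        if a not in context:
            continue  # norm not triggered in this context
        # A permission for the same conclusion defeats the norm only if
        # it is triggered and at least as highly ranked.
        defeated = any(
            pa in context and rank.get((pa, px), 0) >= rank.get((a, x), 0)
            for (pa, px) in permissions
            if px == x
        )
        if not defeated:
            obligations.add(x)
    return obligations

# Toy example: working is obligatory by default, but a higher-ranked
# permission suspends the obligation in a "holiday" context.
norms = {(TOP, "work")}
permissions = {("holiday", "work")}
rank = {(TOP, "work"): 1, ("holiday", "work"): 2}

print(obligation_set(norms, permissions, rank, {TOP, "weekday"}))  # {'work'}
print(obligation_set(norms, permissions, rank, {TOP, "holiday"}))  # set()
```

The ranking step is where the three extension-computation approaches mentioned above would differ; this sketch implements only the simplest "stronger permission wins" policy.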
when g* cannot be obtained explicitly from its definition (1.3). Namely, in a number of cases, and in contrast with g*, the function φ_p(λ; y), obtained by using the Laplace approximation of the conjugate function g* by L_p-norms, can be computed explicitly in closed form. Examples of such situations are briefly discussed. In the general case, φ_p is of the form h_1(λ; y) + h_2(λ; p) where: for every fixed λ ∈ ri D, h_2(λ; p) → 0 as p → ∞, and for each fixed p, the function λ ↦ h_2(λ; p) is a barrier for the domain D. This leads to considering the optimization problem
numbers µ_norm and κ (for complex and real problems, respectively) and prove that the main properties of µ_norm and κ (those allowing them to feature in condition-based cost estimates) hold for M and K as well.
We conclude the paper, in Section 6, with a minor digression. Because a natural habitat for functional norms is a space of continuous functions, we consider extensions of the real condition number κ to the space C^1[q] := C^1(S^n, R^q) and prove (somewhat unexpectedly) Condition Number Theorems for these extensions. We do not analyze algorithms here. We nonetheless point out that a substantial literature on algorithms over spaces of continuous functions exists [51, 44, 42] where these theorems might be useful.
Neuropsychological assessment plays a key role in a variety of clinical contexts. Indeed, neuropsychological examination is essential in the diagnosis of dementia, as well as in most neurological disorders such as stroke, traumatic brain injury, or multiple sclerosis. Identifying low performance relies on appropriate normative scores, which allow clinicians to determine whether a score is in the expected range. Such norms are available for a wide panel of tests across different age ranges. Nevertheless, the middle-aged group is rarely represented. Yet dementia does not exclusively affect people over the age of 65. Early-onset dementia refers to people diagnosed with dementia before 65. A review relying on 11 studies estimated that between 6.9% and 45.3% of all patients diagnosed with dementia had early-onset dementia . In particular, frontotemporal dementia is diagnosed in 10% of cases before 45 years of age, and in 60% of cases between 45 and 64 . Furthermore, such normative data are very important for documenting the cognitive decline that may be present long before dementia onset . Beyond dementia, other neurological disorders may affect middle-aged adults. For instance, persistent cognitive deficits have been reported in over 20% of adults one year post-brain-injury  and in 11% to 30% of adults one year post-stroke . Despite this, most studies that have computed norms for neuropsychological tests did not include persons below 65 .
de Bordeaux 21 (2009), 735-742
Absolute norms of p-primary units
by Supriya PISOLKAR
Abstract. We prove a local analogue of a theorem of J. Martinet on the absolute norm of the relative discriminant of an extension of number fields. This result can be viewed as a statement about 2-primary units. We also prove a similar result for the absolute norm of p-primary units, for every prime p.
SMALL BALL ESTIMATES FOR QUASI-NORMS
OMER FRIEDLAND, OHAD GILADI, AND OLIVIER GUÉDON
Abstract. This note contains two types of small ball estimates for random vectors in finite dimensional spaces equipped with a quasi-norm. In the first part, we obtain bounds for the small ball probability of random vectors under some smoothness assumptions on their density functions. In the second part, we obtain Littlewood-Offord type estimates for quasi-norms. This generalizes results previously obtained in [FS07, RV09].
Proof-theoretic semantics, a flourishing and thriving domain of research (see Francez (2015); Schroeder-Heister (2018)), is built on the (Wittgensteinian) thesis that use determines meaning, and that the meaning of logical connectives is therefore determined by their (logical) use in inference rules. In particular, there exists a branch of proof-theoretic semantics, mainly developed by Došen (2019); Došen and Petrić (2011) and recently taken up by Restall (2019), which aims at identifying in a precise mathematical manner those formulas of a certain logic L that have the same meaning according to this conception: that is, those formulas that behave identically in the inference rules of L. Such formulas are called isomorphic formulas of L.
amount of formal care received by their parents with a larger provision of informal care. This paper deals with the gender gap in care provision and analyzes the choices within the family that contribute to its emergence. Our explanation is based on two factors. First, sons and daughters have unequal job market opportunities, which determine their opportunity cost of providing care. Second, there is a social norm according to which society expects daughters to be the main caregivers of their parents, and which imposes a utility cost on daughters who deviate from this norm. Gender differences in wages are well documented and continue to exist in all OECD countries, where women with a median wage earn on average 15 percent less than their male counterparts (see, for instance, O'Neill, 2003; Fortin, 2005; Blau and Kahn, 2006). The role of social norms is empirically more difficult to assess; they represent by their very nature a less tangible concept than opportunity costs. Haberkern and Szydlik (2010) find that the extent to which providing informal care to a needy family member is considered a moral obligation varies between countries. They show that the majority of those aged 65+ interviewed by SHARE in the northern countries believed that the state should bear the primary responsibility for LTC, while in Mediterranean countries the majority believed that the family should mainly be responsible. Further, they show that where the consensus that care is a family matter is strongest, the share of informal care provided by daughters is also largest. On the other hand, institutions and social norms did not influence care relationships with sons. In a similar vein, Kotsadam (2011) finds that there is a link between gendered norms and informal care provision by women, and that the strength of this link varies within European countries and is strongest for Germany and Southern European countries.
Our model explains how these documented facts affect families' LTC arrangements, shows that they reflect an inefficient equilibrium, and studies potentially welfare-improving policies.
Burden of proof guidelines apply to large classes of cases, irrespective of the detailed information available only at the court level. Hence, guidelines will not always ensure coordination on the efficient equilibrium. This leads us to ask whether a modified court procedure can eliminate the need for guidelines. Up to now, our stylized court involved a purely "passive" adjudicator whose only role is to decide at the close of the proceedings. The modified procedure, as in the more "inquisitorial" trials of civil law countries, allows the adjudicator to intervene during the proceedings by interrogating the parties directly and purposely shifting the burden of proof. Specifically, the adjudicator announces how he will rule should no additional evidence be forthcoming (both binding and non-binding announcements are considered). We show that the optimal liability assignment then obtains as the unique equilibrium if the active adjudicator abides by the preponderance standard and common law exclusionary rules. The interpretation is that, with a more
the contradiction of a belief as errors in data.
Based on the proposition of , in the most recent approach to unexpected association rule mining, presented by , a belief is represented as a rule of the form X → Y, and a rule A → B is unexpected with respect to the belief X → Y if: (a) B and Y logically contradict each other, denoted by B ∧ Y ⊨ FALSE; (b) the rule A ∪ X → B satisfies given support/confidence threshold values; (c) the rule A ∪ X → Y does not satisfy the given support/confidence threshold values. The mining process is carried out by Apriori-based algorithms that find the minimal set of unexpected association rules with respect to a set of user-defined beliefs.  proposed a framework based on domain knowledge and beliefs for finding unexpected sequence rules from frequent sequences. The author first introduced the generalized sequence g_1 ∗ g_2 ∗ . . . ∗ g_n, the so-called "g-sequence", where g_1, g_2, . . . , g_n are elements of the sequence and ∗ is a wildcard. The author then defined a sequence rule by splitting a g-sequence into two adjacent parts, a premise part LHS and a conclusion part RHS, denoted LHS ↦ RHS. A belief over a g-sequence is a tuple ⟨LHS, RHS, CL, C⟩ where CL is a conjunction of constraints on the statistical frequency of LHS and C is a conjunction of constraints involving elements of LHS and RHS. For example, as introduced by , let ⟨a ∗ b, c, CL, C⟩ be a belief with CL = (support(a ∗ b) ≥ 0.4 ∧ confidence(a, b) ≥ 0.8) and C = (confidence(a ∗ b, c) ≥ 0.9). This belief states that the LHS of the sequence rule a ∗ b ↦ c should appear in at least 40% of sequences, the confidence of b given a should be at least 0.8, and the RHS confidence should be at least 0.9. A sequence rule is thus expected if it conforms to a belief in terms of content statistics. Finally, the unexpected rules are grouped by the semantics of their unexpectedness and can be used to create new rules.
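The three conditions (a)-(c) for unexpectedness can be made concrete with a small sketch. The following is an illustrative implementation under simplifying assumptions, not the cited algorithms: transactions are itemsets, rules and beliefs are (antecedent, consequent) pairs, and logical contradiction between consequents is supplied as a domain predicate. All names and the toy data are hypothetical.

```python
# Hedged sketch of the unexpectedness test for association rules:
# rule A -> B is unexpected w.r.t. belief X -> Y iff
# (a) B and Y contradict, (b) A∪X -> B meets the thresholds,
# (c) A∪X -> Y does not.

def support(itemset, transactions):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Conditional frequency of `consequent` given `antecedent`."""
    s_a = support(antecedent, transactions)
    return support(antecedent | consequent, transactions) / s_a if s_a else 0.0

def is_unexpected(rule, belief, transactions, contradicts,
                  min_sup=0.1, min_conf=0.5):
    (A, B), (X, Y) = rule, belief
    # (a) the consequents logically contradict each other
    if not contradicts(B, Y):
        return False
    AX = A | X
    # (b) A ∪ X -> B satisfies the thresholds
    holds_b = (support(AX | B, transactions) >= min_sup
               and confidence(AX, B, transactions) >= min_conf)
    # (c) A ∪ X -> Y does NOT satisfy the thresholds
    holds_y = (support(AX | Y, transactions) >= min_sup
               and confidence(AX, Y, transactions) >= min_conf)
    return holds_b and not holds_y

# Toy data: belief "bread -> butter"; candidate "vegan -> margarine",
# where margarine is declared to contradict butter.
txns = [frozenset(t) for t in
        [{"bread", "butter"}, {"bread", "vegan", "margarine"},
         {"bread", "vegan", "margarine"}, {"bread", "butter"}]]
contra = lambda b, y: {b, y} == {frozenset({"butter"}), frozenset({"margarine"})}
rule = (frozenset({"vegan"}), frozenset({"margarine"}))
belief = (frozenset({"bread"}), frozenset({"butter"}))
print(is_unexpected(rule, belief, txns, contra))  # True
```

Note that condition (c) is a negative requirement, which is why a naive candidate generator must re-check the belief's consequent against the augmented antecedent rather than reuse the belief's own statistics.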
Keywords: Gradual rules, Fuzzy interpolation, Level 2 fuzzy sets
The main purpose of this paper is to further investigate the interest of gradual rules  for modelling interpolative reasoning. What is supposed to be known, in a precise or in an imprecise way, is the behaviour of a system at some points or in some areas, the problem being to interpolate between these regions. The proposed rule-based approach is an alternative to works based on fuzzy polynomial  or fuzzy spline interpolation , , which rely on fuzzy-valued functions
In this paper, we took some steps towards a better understanding of the interaction between transitivity and decidable classes of existential rules. We obtained an undecidability result for aGRD+trans, hence for fes+trans and fus+trans. More positively, we established decidability (with the lowest possible complexity) of atomic CQ entailment over linear+trans KBs, and of general CQ entailment for safe linear+trans rule sets. The safety condition was introduced to ensure termination of the rewriting mechanism when predicates of arity greater than two are considered (rule sets that use only unary and binary predicates are trivially safe). We believe the condition can be removed with a much more involved termination proof.
2.4.1. Main experiment with both allocentric (visual) and egocentric (oculomotor) spatial cues
Subjects were tested individually in a memory-based laser-pointing task, in the presence of the experimenter. The sequence of events constituting a single trial is illustrated in Fig. 1C. At the beginning of each trial, the subjects were asked to fixate a small (0.5° diameter) red dot appearing on the screen. The position of this fixation dot was randomly selected within an imaginary rectangle (30° × 15°) centered on the middle of the screen. The fixation dot was displayed for a random time between 1.4 and 2 s. After that period, the fixation dot disappeared and immediately reappeared 25° away along a randomly selected direction. Subjects were instructed to execute a saccade toward this new location and to maintain fixation until the disappearance of the fixation dot, which occurred after a random duration between 1 and 1.5 s. The subjects then had to indicate the initial location of the fixation dot by pointing at the screen with a laser pointer. They were asked to keep the pointer at this location until the experimenter, located far behind them, clicked with an optical mouse at the indicated location in order to record the pointing response. One of the 230 pictures was displayed on the screen during both the initial fixation and the pointing period. During the initial fixation, the picture was presented briefly, for 200 ms, just before the saccade occurred. A mask, whose diameter was randomly selected among four possible values (0°, 5°, 10° and 20°), surrounded the fixation dot, occluding a variable portion of the picture around the center of gaze (Fig. 1B). Because the mask was always centered on the target (the fixation dot), its size determined the target's distance from the closest potential landmarks within the pictures (0°, 2.5°, 5° and 10°), independently of possible fluctuations in fixation quality between trials. Importantly, even the biggest
3 Related work
Several approaches exist for connecting documents to source code. Witte et al. [6, 7] propose an approach that extracts semantic information from source code and documents and populates an ontology with the results. They use natural language processing to analyze technical documentation and identify code elements in the documentation. To connect the documents together, they use a fuzzy set theory-based coreference resolution system for grouping entities, and rule-based relation detection to relate the detected entities. The text mining approach achieves 90% precision in named entity detection; however, after adding the source code analysis, the precision drops to 67%.
• Ŵ ⊕_i Y h R |∼_ε R,
• Ŵ ⊕_i Y h R ̸|∼_ε R′, for all R′ ∈ Y.
10. Conclusions and future work
The contribution of this paper is at least twofold. First, a unified framework has been presented that allows both monotonic knowledge and defeasible rules to be represented and reasoned about in a uniform way. Derivation tools have been defined that allow one to reason with and infer both kinds of knowledge indifferently. The next step will be to address the algorithmic aspects of X-derivations and the associated inference within the propositional setting. Also, the X-derivation concept implements the possibility of stating defeasible rules as extra assumptions, which come in addition to the defeasible character of rules with exceptions. We believe that this two-level form of hypothetical reasoning could be further explored and refined. Moreover, a whole family of forms of implicants could be devised for defeasible rules, depending on the actual form of reasoning that is modeled and on the intended epistemological roles of the exceptions, premises, and conclusion involved. Second, this framework has been exploited to solve a specific problem in knowledge representation and reasoning that has not received much attention so far: namely, how can new information override the relevant subsuming information that is currently available? We claim that such an issue should not be taken for granted. Indeed, in real life we often get new knowledge that is logically weaker but appears more informative than the previously recorded knowledge, and should therefore be preferred.
if there exists an endogenously given priority structure such that this rule chooses the same allocations that the deferred acceptance algorithm finds using that priority structure. Our main result is that a rule satisfies efficiency, strategy-proofness, and reallocation-consistency if and only if it is an efficient priority rule. In other words, any rule satisfying our combination of axioms is a best rule for an endogenously given acyclical priority structure. Here the third property is a stability condition requiring that when a set of agents leaves with their allotments, their assignments should remain unchanged when the same rule is applied to the reallocation problem consisting of these agents and their allotments. For instance, in Germany medical students are assigned to universities through a centralized rule each year. Some students may wonder whether they can improve their assignments by reallocating among themselves. However, if their positions are reallocated among themselves using the same rule, then by reallocation-consistency the assignments would not change.
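For readers unfamiliar with the mechanism mentioned above, the following is a minimal sketch of (proposer-optimal) deferred acceptance for single-seat positions with strict priorities, showing how a priority structure pins down the allocation that a priority rule selects. It is an illustrative toy, not the centralized German assignment procedure; all names and data are hypothetical.

```python
# Hedged sketch of student-proposing deferred acceptance with
# one seat per position and strict priority orderings.

def deferred_acceptance(student_prefs, priority):
    """student_prefs: {student: [positions in preference order]}
    priority: {position: [students in priority order]}
    Returns a matching {position: student}."""
    free = list(student_prefs)                    # students still proposing
    next_choice = {s: 0 for s in student_prefs}   # index of next position to try
    match = {}                                    # position -> tentatively held student
    while free:
        s = free.pop(0)
        prefs = student_prefs[s]
        if next_choice[s] >= len(prefs):
            continue                              # s has exhausted its list, stays unmatched
        p = prefs[next_choice[s]]
        next_choice[s] += 1
        held = match.get(p)
        if held is None:
            match[p] = s                          # p tentatively accepts s
        elif priority[p].index(s) < priority[p].index(held):
            match[p] = s                          # s displaces the lower-priority holder
            free.append(held)
        else:
            free.append(s)                        # p rejects s; s proposes again later
    return match

students = {"ann": ["berlin", "munich"], "bob": ["berlin", "munich"]}
prios = {"berlin": ["bob", "ann"], "munich": ["ann", "bob"]}
print(deferred_acceptance(students, prios))  # {'berlin': 'bob', 'munich': 'ann'}
```

In the toy run, both students prefer "berlin", but its priority ordering breaks the tie; reallocation-consistency then says that re-running the rule on any departing group and its allotments reproduces the same assignments.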