
Progressive Semantic Query Answering

Giorgos Stamou, Despoina Trivela, and Alexandros Chortaras

School of Electrical and Computer Engineering, National Technical University of Athens, Zographou Campus, 15780, Athens, Greece

{gstam,achort}@cs.ntua.gr despoina@image.ntua.gr

Abstract. Ontology-based semantic query answering algorithms suffer from high computational complexity and become impractical in most cases where OWL is used as a framework for data access in the Semantic Web. For this reason, most semantic query answering systems prefer to lose some possible correct answers to user queries rather than become unresponsive. Here, we present a method that follows an alternative direction, which we call progressive semantic query answering. The idea is to start giving the most relevant correct answers to the user query as soon as possible and to continue giving additional answers with decreasing relevance until all the correct answers have been found. We first present a systematic analysis that formalises the notion of answer relevance and that of query answering sequences, which we call strides, providing a formal framework for progressive semantic query answering. Then, we describe a practical algorithm performing sound and complete progressive query answering for the W3C's OWL 2 QL Profile.

Keywords: scalable query answering, tractable reasoning, approximate query answering, semantic queries over relational data, OWL 2 Profiles

1 Introduction

The use of ontologies in data access is based on semantic query answering, in particular on answering user queries expressed in terms of a terminology (that is connected to the data) and usually represented in the form of conjunctive queries (CQs) [6, 1]. The main restriction on the applicability of this approach is that the problem of answering CQs in terms of ontologies represented in description logics (the underlying framework of the W3C's Web Ontology Language, OWL) has been proved to be difficult, suffering from very high worst-case complexity (higher than other standard reasoning problems) that is not relaxed in practice [6]. This is the reason that methods and techniques targeting the development of practical systems mainly follow two distinct directions. The first suggests reducing the expressivity of the ontology language used to represent the CQ vocabulary, while the second sacrifices the completeness of the CQ answering process, providing as much expressivity as possible. Methods of the first approach mainly focus on decoupling CQ answering into (a) the use of the terminological knowledge provided by the ontology (the reasoning part of query answering) and (b) the actual query execution (the data retrieval), thus splitting the problem into two independent steps [1, 4, 10]. During the first step (usually called query rewriting) the CQ is analysed with the aid of the ontology and expanded into a (hopefully finite) set of conjunctive queries, using all the constraints provided by the ontology. Then, the CQs of the above set are processed with traditional query answering methods over databases or triple stores, since terminological knowledge is no longer necessary. The main objective is to reduce the expressivity of the ontology language to the point where the procedure guarantees completeness. Recent research in the area introduced the DL-Lite family of description logics, underpinning W3C's OWL 2 QL Profile [4, 3], in which the CQ answering problem is in AC0 w.r.t. the data. Despite the obvious advantage of using the mature technology (more than 40 years of research) of relational databases, there are also other major technological advantages of this approach, most of them connected with the use of first-order resolution-based reasoning algorithms [7] [5, Ch. 2]. The main restriction is that in the presence of large terminologies the algorithm becomes rather impractical, since the exponential behaviour (caused by the exponential query complexity) affects the efficiency of the system.

The attempts to provide scalable semantic query answering over ontologies expressed in larger fragments of OWL introduced the notion of approximation. Approximate reasoning usually implies unsoundness and/or incompleteness; however, in the case of semantic query answering most systems are sound.

Typical examples of incomplete query answering systems are the well-known triple stores (Sesame, OWLIM, Virtuoso, AllegroGraph, Mulgara, etc.). The two main characteristics distinguishing incomplete semantic CQ answering systems are how efficient and how incomplete they are. The efficiency of semantic query answering is usually tested with the aid of real data from a specific application or using standard benchmarks [8]. Recently, a systematic approach to the study of the incompleteness of semantic query answering systems has also been presented [9].

A major issue here is that the user should be aware of the type of correct answers that are lost, i.e. there should be a general deterministic criterion expressing a type of relevance, indicating how crucial the loss of each correct answer is.

Within the above framework, we present herein an alternative direction for scalably solving the problem of semantic query answering, ensuring a safe approximation process that ideally converges to a complete solution. The idea is to provide the user with correct answers as soon as they are derived and to continue until all the correct answers are found, ensuring that the most relevant correct answers are given first. For example, in the case of the query rewriting approach this means that, instead of strictly separating the steps of query rewriting and query processing, whenever a new rewriting is found it can be evaluated against the database and the results can be presented to the user. In order for this idea to be successfully applied, several intuitive requirements should be fulfilled: the first correct answers should be given very fast; an important amount of correct answers should be found in a small initial percentage of the execution time; completeness should ideally be reached (or at least approximated); correct answers should be given following a degree of importance, according to a semantic preference criterion; and the results should not be widely replicated.

In the present paper, we provide a systematic approach formalising the above idea. We introduce progressive semantic query answering based on the notion of CQ answering strides, which are flows of correct answers with specific properties that formalise the intuitive meaning of the above criteria. We then provide a practical progressive semantic CQ answering algorithm that has some nice properties and is complete for OWL 2 QL, and we present the results of its implementation and evaluation (we call the implemented system ProgRes). The algorithm is based on a resolution-based query rewriting procedure that computes a sequence of rewritings, the elements of which have decreasing importance according to a query similarity criterion (measuring the similarity of each rewriting with the user CQ). The order is proved to be ensured under a specific resolution rule application strategy that ProgRes follows. It is interesting that although the results are ordered (ranked) and given during the execution, the efficiency of the algorithm is not worse than that of similar algorithms (like the one implemented in Requiem [7]).

2 Preliminaries

The OWL 2 QL Profile, the most relevant to the problem of query answering, is based on DL-LiteR [1, 4]. A DL-LiteR ontology is a tuple ⟨T, A⟩, where T is the terminology (usually called the TBox), representing the entities of the domain, and A is the assertional knowledge (usually called the ABox), describing the objects of the world in terms of the above entities. Formally, T is a set of terminological axioms of the form C1 ⊑ C2 or R1 ⊑ R2, where C1, C2 are concept descriptions and R1, R2 are role descriptions, employing atomic concepts, atomic roles and individuals that are elements of the denumerable, disjoint sets C, R, I, respectively. The ABox A is a finite set of assertions of the form A(a) or R(a, b), where a, b ∈ I, A ∈ C and R ∈ R. A DL-LiteR-concept can be either an atomic concept or ∃R.⊤. Negations of DL-LiteR-concepts can be used only in the right part of subsumption axioms. A DL-LiteR-role is either an atomic role R ∈ R or its inverse R⁻. A conjunctive query (CQ) has the form q : Q(x) ← C1(x; y) ∧ · · · ∧ Cn(x; y), where Q(x) is the answering predicate (the head of the CQ), employing a finite set of variables, called distinguished, and the conjuncts Ci(x; y) (forming the body of the CQ) are predicates possibly employing non-distinguished variables.

We say that a CQ q is posed over an ontology O iff all the conjuncts of its body are concept or role names occurring in the ontology. A tuple of constants a is a certain answer of a conjunctive query q posed over the ontology O = ⟨T, A⟩ iff O ∪ q |= Q(a), considering q as a universally quantified implication under the usual FOL semantics. The set containing all the answers of the query q over the ontology O is denoted by cert(q, O). A union of CQs q′ is a rewriting of q over a TBox T iff cert(q′, ⟨∅, A⟩) = cert(q, O), with O = ⟨T, A⟩ and A any ABox. We denote the set of all CQs in the rewriting of q over the TBox T by rewr(q, T). With a slight abuse of notation we write rewr(q, O), meaning rewr(q, T) (T being the TBox of O).
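To make the notation concrete, the following minimal Python sketch (all names hypothetical, not the paper's implementation) represents an ABox as a set of ground atoms and a CQ as head variables plus body atoms, and evaluates the CQ over the ABox alone, i.e. it computes cert(q, ⟨∅, A⟩):

from itertools import product

# ABox: concept assertions as ("ResDirector", ("John",)), role assertions as
# ("advise", ("Bill", "Mary")).
ABOX = {
    ("ResCoordinator", ("Bill",)), ("ResDirector", ("John",)),
    ("advise", ("Bill", "Mary")), ("advise", ("John", "Bill")),
}

# CQ: distinguished variables and body atoms (predicate, argument tuple);
# arguments starting with '?' are variables, anything else is a constant.
CQ = {"head": ("?x",), "body": [("advise", ("?x", "?y")), ("advise", ("?y", "?z"))]}

def matches(atom, fact, binding):
    """Try to unify a body atom with an ABox fact under a partial binding."""
    pred, args = atom
    fpred, fargs = fact
    if pred != fpred or len(args) != len(fargs):
        return None
    new = dict(binding)
    for a, f in zip(args, fargs):
        if a.startswith("?"):
            if new.setdefault(a, f) != f:
                return None
        elif a != f:
            return None
    return new

def evaluate(cq, abox):
    """Answers of a CQ over the ABox alone (no TBox reasoning)."""
    bindings = [{}]
    for atom in cq["body"]:
        bindings = [b2 for b in bindings for fact in abox
                    if (b2 := matches(atom, fact, b)) is not None]
    return {tuple(b[v] for v in cq["head"]) for b in bindings}

print(evaluate(CQ, ABOX))   # {('John',)} -- the only answer obtainable without reasoning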

3 Semantic query answering strides

Semantic CQ answering systems are based on sophisticated algorithms that try to find as many certain answers of CQs as possible. Formally, any procedure A(q, O) that computes a set of tuples a for a CQ q posed over an ontology O is a CQ answering algorithm (CQA algorithm). A(q, O) is sound iff res(A(q, O)) ⊆ cert(q, O) and complete iff cert(q, O) ⊆ res(A(q, O)), where res(·) denotes the result of an algorithm. Any procedure R(q, T) computing a set of rewritings of q over a TBox T is a CQ rewriting algorithm. R(q, T) can form the basis of a CQ answering algorithm A(q, O) with the aid of a procedure E(q, A) that evaluates the query and retrieves the data from the database. In this case, we write A(q, O) = [R|E](q, O). Obviously, res([R|E](q, O)) = res(E(res(R(q, T)), A)). With a slight abuse of notation, we freely write A(U, O), R(U, T) and E(U, A) for procedures computing answers to sets U of CQs. A natural question arising in cases where scalability is a major requirement is how to split the execution of a CQ answering algorithm into parts enabling a progressive extraction of certain answers, and also how to control the progress of the algorithm, ensuring that the certain answers extracted up to any specific time point have desirable characteristics. Let us now proceed to the definitions that form the framework of progressive CQ answering, covering the above intuition.
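The composition [R|E] can be read directly as function composition. The sketch below (hypothetical names, Python) treats the rewriting procedure R and the evaluation procedure E as callables and computes res([R|E](q, O)) as the union of the evaluations of all rewritings over the ABox:

from typing import Callable, Iterable, Set, Tuple

Answer = Tuple[str, ...]

def answer(q, tbox, abox,
           rewrite: Callable[[object, object], Iterable[object]],
           evaluate: Callable[[object, object], Set[Answer]]) -> Set[Answer]:
    """res([R|E](q, O)) = res(E(res(R(q, T)), A)): rewrite first, then
    evaluate every rewriting over the ABox alone and take the union."""
    answers: Set[Answer] = set()
    for rewriting in rewrite(q, tbox):        # R(q, T)
        answers |= evaluate(rewriting, abox)  # E(., A)
    return answers

A progressive variant would hand answers to the user as each rewriting is evaluated, instead of returning the union at the end; this is exactly the direction taken in the rest of the paper.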

Definition 1. Progressive CQ answering (PCQA)

Any sequence P(q, O; i) = {Aj}i, i ∈ N, i > 1, where the Aj are CQA algorithms for a query q posed over an ontology O = ⟨T, A⟩, is a progressive CQ answering algorithm. The elements of P are its components. If P is finite, we write P(q, O) = [A1; A2; · · · ; An](q, O).

It is important to notice that the components of PCQA algorithms can freely exchange inputs and outputs in a forward manner, allowing the sequence P(i) to have a recursive character. The restriction is that any component should itself be a CQA algorithm, meaning that, at any time point, we can ask for the result of any subset of the components that is able to compute it (possibly using the output of the previous components). We say that P(q, O) is sound iff res(P) ⊆ cert(q, O) and complete iff cert(q, O) ⊆ res(P). Since our intention is to provide users with correct answers during the execution of P, we need to refer to the result sets of subsets of the components of P, stratifying the desired answer flow.

Definition 2. PCQA Strides

Let P(q, O) be a PCQA algorithm. A stride s(P; i:j), i, j ∈ N, i ≤ j, of P (if P is infinite, j can be equal to ∞) is the result of the execution of its components Ai to Aj, i.e. s(P; i:j) = ⋃k∈[i,j] res(Ak).
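As a small illustration (hypothetical code, not from the paper), a stride is simply the union of the results of a contiguous block of components:

def stride(component_results, i, j):
    """s(P; i:j) = union of res(A_k) for k in [i, j] (1-based, inclusive)."""
    out = set()
    for res_k in component_results[i - 1:j]:
        out |= res_k
    return out

# e.g. with res(A1) = {("John",)} and res(A2) = {("Alan",)}:
print(stride([{("John",)}, {("Alan",)}], 1, 2))  # {('John',), ('Alan',)}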

Let Σ(P) denote the set of all strides of P. Obviously, res(A) ∈ Σ(P) for any component A of P, and also res(P) ∈ Σ(P). We refer to the former as a single stride and to the latter as the total stride. A stratification of P is a sequence s1, s2, ..., sk of strides of P. We now turn our attention to the study of PCQA algorithms whose strides provide answers with decreasing relevance to the user query. The first step is to rank the elements of the strides according to the query posed by the user. Let σ(O; a, q) be a relevance measure expressing the importance of a tuple a ∈ s, with s ∈ Σ(P), i.e. the degree to which the specific tuple ideally (and with the least risk) fits the user query.

The nature of the relevance measure may differ from one application to another. In the present paper we consider the case of a measure representing the possibility that a specific answer is correct. Similar measures play an important role in information retrieval systems, in the process of ranking the results of query answering (see for example [12, 14-16]). Intuitively, we consider here that some axioms of the TBox are not always valid for the specific data, i.e. some exceptions may exist in the large data set, making the specific axiom occasionally inconsistent. The effect is that some answers derived by the query answering process are not correct. This could be the result of untrusted knowledge or of the nature of the specific axiom, which is only usually valid. For example, although we know that all professors teach some courses, it could be the case that some professors (for some reason) are not teaching. The problem could be solved if we had some additional information about the correctness of the TBox axioms (the provenance of the specific axiom implying a degree of trust, etc.). If we do not have any additional information, the only rule that we can follow is that an answer is safer if less knowledge is used to derive it, i.e. the less reasoning, the more relevance. We will further clarify this later, when we focus on query rewriting PCQAs.

Definition 3. Stride Ordering

Let P(q, O) be a PCQA algorithm and Σ(P) the set of its strides after the execution of the query q over the ontology O. Let also σ(a, q) be a relevance measure over the elements of the total stride and the CQs. We say that a stride s1(P; i:j) σ-precedes a stride s2(P; i′:j′) for the query q, and we write s1 ≼qσ s2, iff for all a1 ∈ s1, a2 ∈ s2 we have σ(a1, q) ≥ σ(a2, q). If σ(a1, q) > σ(a2, q), we say that s1 strictly σ-precedes s2 and we write s1 ≺qσ s2. If neither s1 ≼qσ s2 nor s2 ≼qσ s1, we write s1 ∼qσ s2 (they are σ-irrelevant for q).

It is not difficult to see that Σ(P) forms a lattice under any ≺qσ operator, where ∅ is its infimum and cert(q, O) is its supremum (supposing that P is sound and complete). Now that we have an ordering of strides, we are ready to proceed to the definition of progressive algorithms whose strides are ordered. Algorithms of this type ensure that the answer blocks are ordered according to a specific relevance measure. Therefore, they can be considered as approximation algorithms with deterministically controllable behaviour.

Definition 4. Sorted PCQA Algorithms

Let P(q, O) be a PCQA algorithm, S = s1, s2, ..., sk a stratification of P and ≺qσ an ordering relation over Σ(P). We say that P is σ-sorted under S iff for any successive strides si, sj of S we have si ≼qσ sj. It is strictly σ-sorted iff si ≺qσ sj; otherwise it is σ-unsorted for q.
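A stratification can be checked for σ-sortedness directly from Definition 4; the following sketch (hypothetical code) compares every pair of successive strides:

def is_sigma_sorted(stratification, sigma, q):
    """True iff every answer of stride s_i is at least as relevant (w.r.t.
    sigma and the query q) as every answer of the next stride s_{i+1}."""
    for s_cur, s_next in zip(stratification, stratification[1:]):
        if any(sigma(a1, q) < sigma(a2, q) for a1 in s_cur for a2 in s_next):
            return False
    return True

# e.g. with sigma returning a precomputed relevance score:
scores = {"John": 1.0, "Alan": 0.952, "Sofia": 0.4}
print(is_sigma_sorted([{"John"}, {"Alan"}, {"Sofia"}],
                      lambda a, q: scores[a], q=None))  # True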


Example 1. Consider the simple DL-LiteR ontology O = ⟨T, A⟩, with

T = { PhDStudent ⊑ Researcher, Professor ⊑ ResDirector,
      SeniorResearcher ⊑ ResCoordinator, ResCoordinator ⊑ ∃advise.Researcher,
      ResDirector ⊑ ∃advise.SeniorResearcher, supervise ⊑ advise }

A = { Mary : Researcher, Bill : ResCoordinator, John : ResDirector,
      Alan : Researcher, George : PhDStudent, Peter : SeniorResearcher,
      Sofia : Professor, Ema : Professor, (Bill, Mary) : advise, (John, Bill) : advise,
      (Peter, George) : supervise, (Alan, Peter) : advise }

A represents a materialisation of a relational database or a triple store. Let q(x) ← advise(x, y) ∧ advise(y, z). It is not difficult to see that cert(q, O) = {Alan, John, Sofia, Ema}. John is a direct result from the ABox, since it is explicitly given that he advises Bill, who advises Mary. It is a bit more difficult to derive Alan, since some of the knowledge has to be employed. Specifically, we should consider that Alan advises Peter, who supervises George (who is a PhDStudent and thus a Researcher), which means that Peter advises George. More complex deductions lead to the conclusion that Sofia and Ema are also answers of the query.

PCQA algorithms ensure that the results are given in a specific order that the user knows before the query answering process, according to a specific relevance criterion. For example, we could develop P = A1; A2; A3; A4; A5 with a stratification s1; s2; s3, where s1(P; 1:1) = res(A1) = {John}, s2(P; 2:3) = res(A2) ∪ res(A3) = {John, Alan} and s3(P; 4:5) = res(A4) ∪ res(A5) = {John, Sofia, Ema}. Here, we consider that the data is stronger than the knowledge, meaning that there could be exceptions to some restrictions expressed as TBox axioms. Thus, an obvious solution is that John should be the first answer (explicitly given), Alan involves a bit more risk (some reasoning is needed), while Sofia and Ema are the riskiest answers (for example, it could be the case that Sofia does not actually advise anyone). It is not difficult to see that P is sorted according to the above intuitive measure.

4 Practical progressive query answering

The problem of progressive query answering is more difficult than the non-progressive one, since the extracted results should be ranked according to a specific criterion and the ranking should be ensured during the query answering process (without knowledge of all the results). Obviously, the difficulty strongly depends on the expressivity of the ontology language and the specific relevance measure. Here, we focus on query rewriting PCQA algorithms for DL-LiteR, i.e. on cases where the components of P are based on query rewriting algorithms.

We follow a resolution-based FO rewriting strategy (more details can be found in [7]). Intuitively, algorithms of this category are based on the translation of the terminology into a set of FOL axioms and the repeated application of a set of resolution rules employing the query and the FOL axioms until no rule can be applied. It is ensured that when the algorithm stops it has produced all the rewritings of the user query. The idea is to stratify the application of the resolution rules in order to ensure that the rewritings are extracted in a specific order, following a similarity criterion with respect to the user query. Thus, we can evaluate the fresh queries (rewritings) against the database as they are derived and provide the user with a set of fresh answers. The proposed algorithm remains sound and complete and, although it solves a more difficult problem, in several cases it is more efficient than the respective non-progressive algorithm proposed in [7].

We will use a graph representation of the CQs that is more convenient for the definition of similarity measures. Let q : Q(x) ← C1(x; y) ∧ · · · ∧ Cn(x; y) be posed over an ontology O. Since the conjuncts of q are entities of the terminology, they can only employ one (if they are concepts) or two (if they are roles) variables (distinguished or not). A non-distinguished variable that appears only once in the query body is called unbound, otherwise it is bound. We represent a CQ as an undirected graph Gq(Vq, Eq, LVq, LEq), where Vq is the set of nodes representing the variables of the query, Eq is the set of edges, LVq is the set of node labels (representing the set of concept conjuncts employing each variable) and LEq = ⟨L+Eq, L-Eq⟩ is the tuple of sets of edge labels (representing the set of role conjuncts employing each variable pair in L+Eq and its inverse in L-Eq). The nodes corresponding to unbound variables are called blank nodes. Moreover, let Vq(b) (Vq(u)) denote the set of bound (unbound) variable nodes and Eq(b) (Eq(u)) the set of edges employing two bound (at least one unbound) variable node(s). Obviously, LVq(x) = ∅ and |L+Eq(x, y) ∪ L-Eq(x, y)| = 1 for each x ∈ Vq(u), y ∈ Vq. We can easily extend the above notation to represent general terms instead of simple variables. The only issue that needs more attention is that in this case a node is blank only if it represents a term employing functions and variables that do not appear in any other term. For simplicity, we can delete the blank nodes of the graph and add the role name that appears in L+Eq(x, y) (or its inverse if it appears in L-Eq(x, y)) to the label of the node x that is connected with the specific blank node. In this case, the node label sets can also include role names (or inverse role names). We will make use of this simplifying convention in the sequel, in the description of our algorithm. There are several graph similarity measures (relations between graphs that are reflexive and symmetric) proposed in the literature, especially in the framework of ontology matching [11, 13]. Here, we introduce a new measure that captures the intuitive meaning of query similarity that is useful in query rewriting PCQA algorithms.
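The following sketch (hypothetical code, assuming the simplified convention above) builds the graph representation of a CQ: it identifies bound and unbound variables, collects node and edge labels, and folds the role of each blank node into the label of the bound node it is attached to:

from collections import Counter, defaultdict

def query_graph(head_vars, body):
    """Graph representation of a CQ.

    body: list of atoms; concept atoms are (C, (x,)) and role atoms (R, (x, y)).
    Returns (bound_nodes, edges, node_labels, edge_labels); blank nodes
    (unbound variables) are removed and their role folded into the label of
    the adjacent bound node, possibly as an inverse role written 'R-'.
    """
    occurrences = Counter(v for _, args in body for v in args)

    def bound(v):
        return v in head_vars or occurrences[v] > 1

    node_labels = defaultdict(set)   # LV (plus folded role names)
    edge_labels = defaultdict(set)   # roles R with R(x, y): stored under the pair (x, y)
    bound_nodes, edges = set(), set()

    for pred, args in body:
        if len(args) == 1:                      # concept atom C(x)
            node_labels[args[0]].add(pred)
            bound_nodes.add(args[0])
        else:                                   # role atom R(x, y)
            x, y = args
            if bound(x) and bound(y):
                bound_nodes.update((x, y))
                edges.add(frozenset((x, y)))
                edge_labels[(x, y)].add(pred)
            elif bound(x):                      # y is a blank node: fold R into LV(x)
                bound_nodes.add(x)
                node_labels[x].add(pred)
            else:                               # x is a blank node: fold R- into LV(y)
                bound_nodes.add(y)
                node_labels[y].add(pred + "-")
    return bound_nodes, edges, dict(node_labels), dict(edge_labels)

# q(x) <- advise(x, y) /\ advise(y, z): z is unbound, so it is folded into LV(y)
print(query_graph({"x"}, [("advise", ("x", "y")), ("advise", ("y", "z"))]))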

Definition 5. Query Similarity

Let q1 and q2 be two non-empty conjunctive queries and Gq1(Vq1, Eq1, LVq1, LEq1), Gq2(Vq2, Eq2, LVq2, LEq2) their graph representations. Their similarity can be defined as follows (up to variable renaming):

    σ(q1, q2) = 1 − [ (λv + λε)/(vmin + εmin) + (δv + δε) ] / ( |Vq1(b)| + |Vq2(b)| + |Eq1(b)| + |Eq2(b)| + 1 )    (1)

where vmin = min(|Vq1(b)|, |Vq2(b)|), εmin = min(|Eq1(b)|, |Eq2(b)|), δv = |Vq1(b) △ Vq2(b)|, δε = |Eq1(b) △ Eq2(b)| (△ computes the non-common elements of the two sets), λv = |{x : x ∈ Vq1(b) ∧ x ∈ Vq2(b) ∧ (LVq1(x) ≠ LVq2(x) ∨ Lq1(u)(x, y) ≠ Lq2(u)(x, y))}| and λε = |{(x, y) : (x, y) ∈ Eq1(b) ∧ (x, y) ∈ Eq2(b) ∧ LEq1(x, y) ≠ LEq2(x, y)}|.

The first term of the numerator of the similarity measure (Eq. 1) captures the node and edge labelling differences, the second term the structural difference, while the denominator normalises the values between 0 and 1. The main intuition behind Eq. 1 is that the labelling differences should be counted as a secondary cause of dissimilarity, while the primary one should be the difference in structure. Thus, the maximum value of the first term of the fraction should not exceed the value of the second term (it cannot exceed the value 1, which is the lowest structural difference). The computation of the differences ignores all the unbound variables (the blank nodes of the graph), which are only involved in the computation of λv (summarising the node label differences). The intuition behind this is that blank nodes are introduced only by unqualified existentials (∃R.⊤) and thus the specific unbound variable can be discarded without any problem if we just remember the role of the existential. It is important to notice that in the presence of role inverses, the bound variable can be either in the subject or in the filler part of the role; this is the reason why we consider undirected graphs and why L(u)q (considering both L(u)+q and L(u)-q) is involved in the computation of λv. Finally, we should notice that, with a slight abuse of notation, we introduced a similarity between two queries and not between queries and answers (as required in the previous section), implicitly meaning the similarity between the first query and the answers of the second one. In this way, we significantly simplify the notation in the case of query rewriting based PCQA algorithms.
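The following sketch (hypothetical code) computes the reconstructed Eq. 1 over graph representations like the one built above; with the graphs of q(x) ← advise(x, y) ∧ advise(y, z) and Q(x) ← supervise(x, y) ∧ advise(y, z) it reproduces the 0.952 value of Table 2:

def similarity(g1, g2):
    """Eq. 1, assuming the two graphs already share variable names
    (true for rewritings of the same user query)."""
    nodes1, edges1, nlab1, elab1 = g1
    nodes2, edges2, nlab2, elab2 = g2

    def edge_label(elab, e):
        # compare the role labels of both directions of the (undirected) edge
        x, y = tuple(e)
        return (frozenset(elab.get((x, y), set())),
                frozenset(elab.get((y, x), set())))

    v_min, e_min = min(len(nodes1), len(nodes2)), min(len(edges1), len(edges2))
    delta_v = len(nodes1 ^ nodes2)           # structural node difference
    delta_e = len(edges1 ^ edges2)           # structural edge difference
    lam_v = sum(1 for x in nodes1 & nodes2
                if nlab1.get(x, set()) != nlab2.get(x, set()))
    lam_e = sum(1 for e in edges1 & edges2
                if edge_label(elab1, e) != edge_label(elab2, e))

    num = (lam_v + lam_e) / (v_min + e_min) + (delta_v + delta_e)
    den = len(nodes1) + len(nodes2) + len(edges1) + len(edges2) + 1
    return 1 - num / den

# q1: Q(x) <- advise(x,y) /\ advise(y,z)   (z unbound, folded into LV(y))
g1 = ({"x", "y"}, {frozenset(("x", "y"))}, {"y": {"advise"}}, {("x", "y"): {"advise"}})
# q2: Q(x) <- supervise(x,y) /\ advise(y,z)
g2 = ({"x", "y"}, {frozenset(("x", "y"))}, {"y": {"advise"}}, {("x", "y"): {"supervise"}})
print(round(similarity(g1, g2), 3))   # 0.952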

Table 1. Translation of DL-LiteR axioms into clauses of Ξ(O). (Note: a different function f must be used for each axiom involving an existential quantifier.)

Type | Axiom               | Clause
(1)  | A ⊑ B               | B(x) ← A(x)
(2)  | P ⊑ S,  P⁻ ⊑ S⁻     | S(x, y) ← P(x, y)
(2)  | P ⊑ S⁻, P⁻ ⊑ S      | S(x, y) ← P(y, x)
(3)  | ∃P ⊑ A              | A(x) ← P(x, y)
(3)  | ∃P⁻ ⊑ A             | A(x) ← P(y, x)
(4)  | A ⊑ ∃P,  A ⊑ ∃P.B   | P(x, f(x)) ← A(x)
(4)  | A ⊑ ∃P⁻, A ⊑ ∃P⁻.B  | P(f(x), x) ← A(x)
(5)  | A ⊑ ∃P.B, A ⊑ ∃P⁻.B | B(f(x)) ← A(x)
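A clausification step along the lines of Table 1 can be sketched as follows (hypothetical code; axioms are tuples such as ("isa", "A", "B"), ("role_isa", "P", "S"), ("exists_isa", "P", "A") for ∃P ⊑ A and ("isa_exists", "A", "P", "B") for A ⊑ ∃P.B, with inverse roles written "P-"):

from itertools import count

_fresh = count()

def clausify(axiom):
    """Translate one DL-LiteR axiom into clauses in the spirit of Table 1.
    Clauses are returned as (head_atom, body_atom) pairs; a fresh Skolem
    function name is generated per existential axiom."""
    kind = axiom[0]
    if kind == "isa":                       # A sub B  =>  B(x) <- A(x)        (type 1)
        _, a, b = axiom
        return [((b, ("x",)), (a, ("x",)))]
    if kind == "role_isa":                  # P (or P-) sub S (or S-)          (type 2)
        _, p, s = axiom
        p_args = ("x", "y") if not p.endswith("-") else ("y", "x")
        s_args = ("x", "y") if not s.endswith("-") else ("y", "x")
        return [((s.rstrip("-"), s_args), (p.rstrip("-"), p_args))]
    if kind == "exists_isa":                # exists P (or P-) sub A           (type 3)
        _, p, a = axiom
        p_args = ("x", "y") if not p.endswith("-") else ("y", "x")
        return [((a, ("x",)), (p.rstrip("-"), p_args))]
    if kind == "isa_exists":                # A sub exists P (or P-) [.B]      (types 4, 5)
        _, a, p, b = axiom if len(axiom) == 4 else (*axiom, None)
        f = f"f{next(_fresh)}(x)"
        p_args = ("x", f) if not p.endswith("-") else (f, "x")
        clauses = [((p.rstrip("-"), p_args), (a, ("x",)))]        # type 4
        if b is not None:
            clauses.append(((b, (f,)), (a, ("x",))))              # type 5
        return clauses
    raise ValueError(f"unknown axiom form: {axiom}")

# e.g. ResDirector sub exists advise.SeniorResearcher:
print(clausify(("isa_exists", "ResDirector", "advise", "SeniorResearcher")))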

We are now ready to introduce a PCQA algorithm, which we call ProgResAns and which is sorted according to σ(q1, q2). In order to compute the query rewritings of a user query q, the algorithm employs a set of resolution rules. The main premise is always a query (q or a subsequently computed query rewriting) and the side premise a clause of Ξ(O). Ξ(O) is obtained from the ontology O as described in Table 1 [7]. The idea of the algorithm is the following: start from the user query; apply the resolution rule using side premises that preserve the structure of the query (we call this step structure preserving resolution or sp-resolution); apply the resolution rule using side premises that minimally change the structure of the query (we call this step structure reducing resolution or sr-resolution); apply anew sp-resolution to the query rewritings produced by sr-resolution, and so on; at each step, evaluate the queries against the database and provide the user with the results. sp- and sr-resolution are performed by the procedures sp-Resolve and sr-Resolve, respectively. sp-Resolve takes as input a query q and an element s of Gq (i.e. either a node or an edge), and computes all possible rewritings of q by iteratively applying the resolution rule using as side premises the clauses of Ξ(O) that are of type (1)-(4) (see Table 1). Initially, the main premise is q and the resolution rule is applied to all atoms in q that correspond to s. The same process is iteratively applied to all the resulting rewritings, until no more rewritings can be obtained. The clauses of type (4) are used only if the skolemized term f(x) unifies with an unbound variable of the main premise. sp-Resolve preserves the query structure, since resolution with clauses of type (1) or (2) affects only the sets LV and LE, respectively. Clauses of type (3) and (4) introduce and eliminate blank nodes.

sr-Resolve takes as input a query q and an element s of Gq (i.e. either a node or an edge) and computes all possible rewritings of q by iteratively applying the resolution rule using as side premises the clauses of Ξ(O) that are of type (4) and (5). Clauses of type (4) are used only if f(x) unifies with a bound variable of the query. In terms of graphs, sr-Resolve deletes a node together with the edges that connect it to the rest of the graph. ProgResAns applies sr-Resolve only on selected atoms of q (the atoms that correspond to the elements s of Gq mentioned above), called the sink node atoms of q. Sink nodes s correspond to non-distinguished terms and are determined by the following condition on their labels: L+E(s, y) = ∅ and |L-E(s, y)| ≥ 1. The selective application of the resolution rule only to the sink node atoms is proved to suffice for the production of eventually all the possible query rewritings that can be evaluated against the database (i.e. query rewritings that do not contain functional terms).
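Under this (reconstructed) reading of the sink-node condition, i.e. no outgoing role label and at least one incoming one on a non-distinguished node, sink nodes can be detected directly on the graph representation; the sketch below is hypothetical and not the paper's code:

def sink_nodes(bound_nodes, edge_labels, distinguished):
    """Sink nodes under the reconstructed condition: a non-distinguished node s
    with no outgoing role label (L+E(s, .) empty) and at least one incoming
    role label (L-E(s, .) non-empty). edge_labels maps directed pairs (x, y)
    to the set of roles R with R(x, y) in the query body."""
    sinks = set()
    for s in bound_nodes:
        if s in distinguished:
            continue
        outgoing = any(lab for (x, _), lab in edge_labels.items() if x == s)
        incoming = any(lab for (_, y), lab in edge_labels.items() if y == s)
        if not outgoing and incoming:
            sinks.add(s)
    return sinks

# q(x) <- advise(x, y) /\ Professor(y): y is non-distinguished and only the
# target of advise, hence a sink node.
print(sink_nodes({"x", "y"}, {("x", "y"): {"advise"}}, distinguished={"x"}))  # {'y'}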

We now provide the full definition of ProgResAns. Its components are the query rewriting procedures (sp-Rewrite and sr-Rewrite) and the procedure Eval, which evaluates a set of rewritings against the database:

    ProgResAns = Eval; { {[sp-Rewrite | Eval]}_{i=1}^{n_j} ; [sr-Rewrite | Eval] }_{j=1}^{m}

sp-Rewrite and sr-Rewrite are defined by Algorithms 1 and 2. sp-Rewrite employs sp-Resolve to perform exhaustive sp-resolution for all queries in the input set Q^sp_in. Therefore, the queries in res(sp-Resolve) have the same structure (up to blank nodes) as the queries in Q^sp_in, but each one of them is the result of the exhaustive application of sp-resolution on a single node or edge. The node or edge whose label is modified is annotated, so that by recursively applying sp-Resolve we can compute all possible rewritings that have the same number of different labels. In particular, the queries computed by the k-th recursive application of sp-Resolve differ in k labels w.r.t. the queries Q^sp_in of the first application. Finally, sr-Rewrite uses sr-Resolve to compute the rewritings that have a single structural change w.r.t. Q^sr_in.

Data: Set of annotated conjunctive queries Q^sp_in, ontology O
Result: Set of annotated query rewritings
    Q := {};
    foreach query q in Q^sp_in do
        foreach s in Vq ∪ Eq that is not annotated do
            Q := Q ∪ sp-Resolve(q, s, Ξ(O));
        end
    end
    return Q;
Algorithm 1: Procedure sp-Rewrite

Data: Set of conjunctive queries Q^sr_in, ontology O
Result: Set of query rewritings
    Q^sr_in := filter(Q^sr_in);
    Q := {};
    foreach query q in Q^sr_in do
        X := sink-nodes(q);
        foreach x in X do
            if L(x) = {A1, . . . , An} (L(x) is a set of concepts) then
                forall the concepts C such that ∀i = 1 . . . n : Ξ(O) |= Ai ← C do
                    Q := Q ∪ sr-Resolve(q, C(x), Ξ(O));
                end
            else if R ∈ L±(x, y) then
                Q := Q ∪ sr-Resolve(q, R(x, y), Ξ(O));
            end
        end
    end
    return Q;
Algorithm 2: Procedure sr-Rewrite

ProgResAns, after first evaluating the user query, enters an (outer) loop, part of which is the (inner) loop [sp-Rewrite | Eval]. The outer loop is executed (say m times) until sr-Rewrite can make no further structural change to the query. The j-th time the outer loop is executed, the inner loop is executed nj times, where nj is the number of nodes and edges of the queries in Q^sp_inj (all of which have the same structure, and m ≤ n1), and Q^sp_inj is the input of the first application of sp-Rewrite at the j-th iteration of the outer loop. Q^sp_inj contains only the user query when j = 1; otherwise it is equal to res(sp-Rewrite) obtained at iteration j − 1. The input Q^sr_inj of sr-Rewrite contains the rewritings that are computed by the execution of sp-Rewrite. Before entering its main body, sr-Rewrite calls the procedure filter on Q^sr_inj, which keeps only the rewritings in which sp-Rewrite has changed only sink node atoms, so that, as mentioned before, sr-resolution is applied only on these atoms. As soon as sp-Rewrite or sr-Rewrite returns, Eval computes cert(res(sp-Rewrite), O) and cert(res(sr-Rewrite), O), respectively. The following theorem can be proved:

Theorem 1. ProgResAns terminates and is sound, complete and σ-sorted.

Proof. The proof is omitted due to restricted space.
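The overall control flow can be sketched as the following driver loop (hypothetical code; sp_rewrite, sr_rewrite, filter_sinks and evaluate stand for the procedures described above and are assumed to take and return sets of queries):

def prog_res_ans(user_query, ontology, sp_rewrite, sr_rewrite, filter_sinks, evaluate):
    """Driver-loop sketch of the progressive scheme: evaluate the user query
    first, then alternate exhaustive sp-rewriting with one sr-rewriting step,
    evaluating each freshly produced batch of rewritings immediately."""
    yield evaluate({user_query}, ontology)            # first single stride

    current = {user_query}
    while current:                                    # outer loop (j = 1..m)
        produced = set(current)
        batch = set(current)
        while batch:                                  # inner loop: sp-resolution
            batch = set(sp_rewrite(batch, ontology)) - produced
            if batch:
                yield evaluate(batch, ontology)
                produced |= batch
        # one structural change on the sink-node rewritings, then the next round
        current = set(sr_rewrite(filter_sinks(produced), ontology)) - produced
        if current:
            yield evaluate(current, ontology)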

Example 2 (continued). Table 2 summarises the results of applying ProgResAns to the input of Example 1 (all the single strides are shown).

Table 2. Results of ProgRes on the data of Example 1

Stride | Query rewriting                              | Similarity | Answers
1      | Q(x) ← advise(x, y) ∧ advise(y, z)           | 1.000      | John
2      | Q(x) ← supervise(x, y) ∧ advise(y, z)        | 0.952      |
       | Q(x) ← advise(x, y) ∧ ResCoordinator(y)      | 0.952      | John
       | Q(x) ← advise(x, y) ∧ ResDirector(y)         | 0.952      |
       | Q(x) ← advise(x, y) ∧ supervise(y, z)        | 0.952      | Alan
       | Q(x) ← advise(x, y) ∧ SeniorResearcher(y)    | 0.952      | Alan
       | Q(x) ← advise(x, y) ∧ Professor(y)           | 0.952      |
3      | Q(x) ← supervise(x, y) ∧ ResCoordinator(y)   | 0.905      |
       | Q(x) ← supervise(x, y) ∧ ResDirector(y)      | 0.905      |
       | Q(x) ← supervise(x, y) ∧ supervise(y, z)     | 0.905      |
       | Q(x) ← supervise(x, y) ∧ SeniorResearcher(y) | 0.905      |
       | Q(x) ← supervise(x, y) ∧ Professor(y)        | 0.905      |
4      | Q(x) ← ResDirector(x)                        | 0.400      | John
5      | Q(x) ← Professor(x)                          | 0.400      | Sofia, Ema

5 System evaluation

We now present an empirical evaluation of ProgRes, which implements the ProgResAns algorithm, assuming that ABoxes are stored in a relational database. Our goal is to evaluate the performance of ProgRes and to investigate whether the ranked computation of the answers introduces a significant time overhead. Given the progressive nature of ProgRes, meaning that each rewriting should be evaluated against the database upon its computation, our implementation follows a producer/consumer approach: one thread computes the rewritings and adds them to an execution queue, while another thread, implementing the Eval procedure, retrieves the rewritings from the queue, translates them into SQL, dispatches them to the database, and collects the answers. We compare ProgRes with Requiem (the implementation of the algorithm in [7]), which is the most similar non-progressive DL-LiteR query answering system. We should point out, however, that a fair comparison is not possible, since we are comparing a ranking, progressive algorithm with a non-ranking, non-progressive one. For an as fair as possible comparison, we reimplemented the greedy unfolding strategy of Requiem within our programming framework. This was mainly done so that the comparison of time performance would be on equal terms, and it did not alter the Requiem algorithm in any way (although, as we shall see in the sequel, in one case our implementation performed significantly better than the original one). Nevertheless, we have to note that the results should be interpreted only as indicative of the relative performance of the two systems.

An important issue our system had to face is the fact that, due to the real-time orientation of ProgRes, many redundant rewritings (i.e. rewritings that are structurally subsumed by subsequent, not yet computed rewritings) may be obtained. If all of them are dispatched to the database, performance may be significantly compromised. For this reason we slightly delay the addition of new rewritings to the execution queue: we first compute all the rewritings of a stride, check for redundancies and add only the non-redundant ones to the queue. The structure of the Requiem algorithm does not allow for a directly comparable optimisation strategy. In fact, the last step of Requiem is precisely a final query redundancy check step, in which any redundant queries are discarded all together. Therefore, in our evaluation of Requiem, following the original implementation, all the queries are added to the queue after the entire algorithm has finished and the final redundancy check has been performed.
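The producer/consumer setup described above can be sketched as follows (hypothetical code using Python threads; rewriting_strides, is_redundant and run_sql stand for the components described in the text):

import threading
import queue

def progres_pipeline(rewriting_strides, is_redundant, run_sql):
    """Producer computes strides of rewritings, filters out the redundant ones
    and queues the rest; consumer translates each queued rewriting to SQL,
    executes it and collects the answers as they become available."""
    work = queue.Queue()
    answers = []

    def producer():
        for stride in rewriting_strides():           # one stride at a time
            kept = [rw for rw in stride if not is_redundant(rw, stride)]
            for rw in kept:
                work.put(rw)
        work.put(None)                               # end-of-stream marker

    def consumer():
        while True:
            rewriting = work.get()
            if rewriting is None:
                break
            answers.extend(run_sql(rewriting))       # Eval: SQL translation + execution

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return answers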

We present the results for the A and S ontologies [7], as well as for P5S, a modified version of P5 in which we have included some subconcepts for the PathX concepts of P5. Briefly, ontology A models information about abilities, disabilities and related devices, and was developed for the South African National Accessibility Portal. Ontology S models information about European Union financial institutions. Ontology P5 is a synthetic ontology developed by the authors of [7] that models information about graphs (nodes and vertices) and contains classes that represent paths of length 1-5.

We evaluate ontology A with queries AQ4 and AQ5 (of size 3 and 5, respectively), ontology S with query SQ5 (of size 7) and P5S with queries PQ4 and PQ5 (of size 4 and 5, respectively). Due to restricted space we do not present the results for all the ontologies and queries of [7]; we chose these particular combinations as the most representative of several diverse cases. We populated the ABoxes according to [9]. The results are shown in Fig. 1. In each row, the lhs graphs present the total number of retrieved answers vs. time and the rhs graphs the evolution of similarity as new answers are retrieved. Execution time is total time, including both query rewriting computation and answer retrieval time. The small vertical bars at the two horizontal dotted lines at the top of the lhs graphs indicate the time points at which one or more new rewritings become available for execution; the upper line corresponds to ProgRes, the lower to Requiem. Table 3 presents the number of rewritings and inferences. Within parentheses are the number of inferences during sp-resolution (which corresponds roughly to the unfolding of Requiem) and the number of inferences during sr-resolution (which corresponds roughly to the saturation step of Requiem).

[Fig. 1 appears here: five rows of plots, one per ontology-query pair; in each row the left panel plots the number of answers against time (msec; sec for P5S-PQ5) and the right panel plots similarity against answer number.]

Fig. 1. Execution time and similarity. From top to bottom the graphs in each row correspond to the ontology-query pairs A-AQ4, A-AQ5, S-SQ5, P5S-PQ4 and P5S-PQ5. The thick and thin lines correspond to ProgRes and Requiem, respectively.

Note that some of the inferences that ProgRes performs during sp-resolution are performed by Requiem during saturation. The last part of Table 3 presents the time required for the computation of the rewritings (not including query execution) for ProgRes and Requiem. For Requiem two times are given: the first refers to our implementation of Requiem (which we used in all other parts of the evaluation), and the second, within parentheses, to the original implementation (obtained from http://www.comlab.ox.ac.uk/projects/requiem). We note that in one case, in particular for ontology S, our implementation performs significantly better. This is due to the fact that it handles more effectively, and removes at an earlier stage, atoms that happen to be redundant within a single rewriting; as we shall also mention in the sequel, SQ5 happens to be a query that produces a lot of rewritings containing several redundant atoms.

Table 3. Number of rewritings, number of inferences and query rewriting time for ProgRes and Requiem.

ontology/query | no. of rewritings   | no. of inferences               | rewriting time (ms)
               | ProgRes | Requiem   | ProgRes       | Requiem         | ProgRes | Requiem
A AQ4          | 320     | 224       | 532 (507/25)  | 540 (507/25)    | 188     | 78 (156)
A AQ5          | 624     | 624       | 1326 (1324/2) | 1017 (811/206)  | 750     | 484 (625)
S SQ5          | 128     | 8         | 892 (891/1)   | 1457 (1433/24)  | 156     | 5250 (21062)
P5 PQ5         | 16      | 16        | 25 (5/20)     | 11350 (0/11350) | 31      | 657 (687)
P5S PQ5        | 76      | 76        | 95 (75/20)    | 11424 (74/11350)| 47      | 17015 (16860)

Ontology A is to a large extent taxonomic. Both queries AQ4 and AQ5 produce many non-redundant rewritings, after a long sp-resolution/unfolding phase. In AQ4, ProgRes produces more queries (some of them redundant) and in AQ5 it performs more inferences than Requiem. This is due to the structured pattern followed in the computation of the rewritings, which, as mentioned before, may introduce more redundancies. This is more obvious in ontology S, where ProgRes produces a lot of redundant queries. This is however an extreme, degenerate case: SQ5 is a bad query, as half of its atoms are essentially redundant. Back to ontology A, we note that although ProgRes needs slightly more time to compute the rewritings, it terminates faster. This is due to the progressive nature of ProgRes. While the producer thread computes the query rewritings, the consumer thread executes the already available ones, so there is no significant idle time for the consumer thread. In contrast, in Requiem the rewritings are obtained at the end, all together. This demonstrates the significant benefit of progressively computing and evaluating the query rewritings.

The situation is quite different in ontology P5S, in which the inference procedure consists mainly of a chain of sr-resolution/saturation steps. Here, the strategy of ProgRes is much more efficient and scalable. It avoids a very large number of inference sequences that are guaranteed to give no new queries, by applying sr-resolution only on sink atoms. As a result, it finishes almost instantly, while for query PQ5 Requiem needs significantly more time and inferences. It is important to note that in this case the performance of Requiem is not scalable in the size of the query, as is easy to see by comparing the results for queries PQ4 and PQ5 (in PQ5 we search for all graph paths of length 5, while in PQ4 for paths of length 4). In contrast to Requiem, almost the entire execution time of ProgRes is answer retrieval time. The synthetic nature of P5S allows us to also comment on the evolution of the similarity of the answers. In (the original) ontology P5, only sr-resolution/saturation steps are involved, hence we expect ProgRes and Requiem to compute the rewritings in the same order (though not equally fast). In ontology P5S, however, for each rewriting produced at an sr-resolution step, ProgRes immediately computes all its sp-resolutions, and so the progressive decrease of similarity is achieved. Requiem computes first all the rewritings resulting from the saturation step and then all the unfoldings, hence no ranking of the answers is possible. The situation is similar for query AQ4. In the case of AQ5, all the rewritings have the same graph structure, so the slight fluctuations in the similarity graph for Requiem are due to the non-ordered computation of the unfoldings. A steep decrease in the similarity measure occurs only when the structure of a rewriting w.r.t. the original query changes.

6 Conclusions and future work

We have presented a systematic approach to the problem of stratifying the execution of CQA algorithms over time. We introduced the notions of progressive CQA algorithms (sequences of CQAs, their components), of strides (result sets of subsets of the components), of stride orderings measuring the relevance of the answers to the user query, and of sorted progressive CQAs ensuring a controllable approximation of the correct answer set. We have also presented a practical algorithm for progressive CQA in DL-LiteR that implements the above ideas, and we showed that it is possible to develop progressive algorithms that are efficient. In fact, it turned out that in the presence of large queries and of TBoxes that are not simple taxonomies of atomic concepts, the proposed algorithm can be much more efficient than similar non-progressive ones (overcoming a strong limitation of some DL-LiteR CQA systems). An obvious advantage of this approach is the ability to remain responsive even over huge databases, under a predetermined approximation strategy. A second advantage is that, in case of inconsistency, and considering the data as stronger than the theory (which is reasonable in information retrieval), PCQAs ensure a decreased possibility of incorrect answers. Finally, PCQAs have an ideal structure for parallel processing. The main disadvantage is that it is difficult to reduce the redundancies, since the complete set of answers is not available before the output. The present work can be extended in several directions. We could take advantage of the ideas presented in [10] and dramatically improve the performance of DL-LiteR CQAs. We could also develop PCQA algorithms and systems for more expressive DLs. Finally, we could try to stratify smaller strides in ProgRes, avoiding wide replications by applying sophisticated redundancy checking.


Acknowledgements

This work has been funded by ECP 2008 DILI 518002 EUscreen (Best Practice Networks).

References

1. A. Poggi, D. Lembo, D. Calvanese, G. De Giacomo, M. Lenzerini, and R. Rosati. Linking data to ontologies. J. on Data Semantics, pp. 133–173 (2008)
2. F. Baader, D. Calvanese, D. McGuinness, D. Nardi, and P. F. Patel-Schneider, editors. The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press (2007)
3. B. Motik et al., editors. OWL 2 Web Ontology Language Profiles. W3C Recommendation, 27 October (2009)
4. A. Artale, D. Calvanese, R. Kontchakov, and M. Zakharyaschev. The DL-Lite family and relations. Journal of Artificial Intelligence Research, pp. 36–69 (2009)
5. J. A. Robinson and A. Voronkov, editors. Handbook of Automated Reasoning (in 2 volumes). Elsevier and MIT Press (2001)
6. B. Glimm, I. Horrocks, C. Lutz, and U. Sattler. Conjunctive query answering for the description logic SHIQ. J. of Artificial Intelligence Research, 31:157–204 (2008)
7. H. Pérez-Urbina, I. Horrocks, and B. Motik. Efficient query answering for OWL 2. In: 8th International Semantic Web Conference (ISWC 2009), vol. 5823 of Lecture Notes in Computer Science, pp. 489–504. Springer (2009)
8. Y. Guo, Z. Pan, and J. Heflin. An evaluation of knowledge base systems for large OWL datasets. In: Third International Semantic Web Conference (ISWC 2004), Hiroshima, Japan, LNCS 3298, pp. 274–288. Springer (2004)
9. G. Stoilos, B. Cuenca Grau, and I. Horrocks. How incomplete is your Semantic Web reasoner? In: 20th Nat. Conf. on Artificial Intelligence (AAAI) (2010)
10. R. Rosati and A. Almatelli. Improving query answering over DL-Lite ontologies. In: Twelfth International Conference on Principles of Knowledge Representation and Reasoning (KR 2010) (2010)
11. H. Bunke. Graph matching: theoretical foundations, algorithms, and applications. In: Vision Interface 2000, pp. 82–88. Montreal, Canada (2000)
12. M. Dehmer, F. Emmert-Streib, A. Mehler, and J. Kilian. Measuring the structural similarity of web-based documents: a novel approach. J. Comp. Intell., pp. 1–7 (2006)
13. J. Euzenat and P. Shvaiko. Ontology Matching. Springer-Verlag (2007)
14. I. Ilyas, G. Beskales, and M. Soliman. A survey of top-k query processing techniques in relational database systems. ACM Computing Surveys, Vol. 40, No. 4, Article 11 (2008)
15. M. Soliman and I. Ilyas. Top-k query processing in uncertain databases. In: ICDE, pp. 896–905 (2007)
16. T. Lukasiewicz and U. Straccia. Top-k retrieval in description logic programs under vagueness for the Semantic Web. In: Proc. SUM-2007, pp. 16–30 (2007)
