HAL Id: hal-00959010

https://hal.inria.fr/hal-00959010

Submitted on 13 Mar 2014

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Efficient Algorithms on the Family Associated to an Implicational System

Karell Bertet, Mirabelle Nebut

To cite this version:

Karell Bertet, Mirabelle Nebut. Efficient Algorithms on the Family Associated to an Implicational System. Discrete Mathematics and Theoretical Computer Science, DMTCS, 2004, 6 (2), pp.315-338.

⟨hal-00959010⟩


Efficient Algorithms on the Moore Family Associated to an Implicational System

Karell Bertet¹ and Mirabelle Nebut²

¹ L3I, Université de La Rochelle, Pôle Sciences et Technologies, av Michel Crépeau, 17042 La Rochelle Cédex 1, France. karell.bertet@univ-lr.fr

² LIFL, Université des Sciences et Technologies de Lille, Bâtiment M3, 59655 Villeneuve d’Ascq Cédex, France. mirabelle.nebut@lifl.fr

received Jun 7, 2002, revised Jun 13, 2002, Jun 28, 2003, accepted Dec 18, 2003.

An implication system (IS) Σ on a finite set S is a set of rules called Σ-implications of the kind A →Σ B, with A, B ⊆ S. A subset X ⊆ S satisfies A →Σ B when “A ⊆ X implies B ⊆ X” holds, so ISs can be used to describe constraints on sets of elements, such as dependency or causality. ISs are formally closely linked to the well-known notions of closure operators and Moore families. This paper focuses on their algorithmic aspects. A number of problems issued from an IS Σ (e.g. is it minimal, is a given implication entailed by the system) can be reduced to the computation of closures ϕΣ(X), where ϕΣ is the closure operator associated to Σ. We propose a new approach to compute such closures, based on the characterization of the direct-optimal IS Σdo, which has the following properties: 1. it is equivalent to Σ; 2. ϕΣdo(X) (thus ϕΣ(X)) can be computed by a single scanning of Σdo-implications; 3. it is of minimal size with respect to ISs satisfying 1. and 2. We give algorithms that compute Σdo, and from Σdo the closures ϕΣ(X) and the Moore family associated to ϕΣ.

Keywords: Moore family, implicational system, closure operator, algorithm, lattice.

1 Introduction

As recalled in [CM04], the basic mathematical notion of closure operator (an isotone, extensive and idempotent map ϕ) defined on a poset (P, ≤) is fundamental in a number of fields linked to computer science, in particular when defined on the lattice (2^S, ⊆) of all subsets of a finite set S. In this case, closure operators are closely linked to the notion of Moore family, a family F ⊆ 2^S which contains S and is closed under intersection (see [CM04] for more details). The notions of closure operator and Moore family both involve the concept of logical or entail implication, used for instance in knowledge systems or relational databases (these fields handle systems of implications, called for example functional dependencies in relational databases [MR92, Mai83], and association rules in data-mining [PBTL99]). Hence the notion of Implicational System (IS for short) defined in [CM04], to which this paper is dedicated.

Formally an IS on S, denoted by Σ ⊆ 2^S × 2^S, is a set of rules called Σ-implications of the kind A →Σ B, with A, B ⊆ S. A subset X ⊆ S satisfies an implication A →Σ B when “A ⊆ X implies B ⊆ X”. So ISs can be used to easily describe constraints between sets of elements, such as dependency or causality.

1365–8050 © 2004 Discrete Mathematics and Theoretical Computer Science (DMTCS), Nancy, France

Let us give here an intuitive example which will also be used in the core of the paper (see Ex. 1 in Sect. 3). Assume that S = {a, b, c, d, e} is a set of events. The IS Σ = {a → b, ac → d, e → a} can be interpreted as “if a (resp. e) occurs then so does b (resp. a), and if a and c occur then so does d”.
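To make the satisfaction test concrete, here is a minimal Python sketch (ours, not from the paper; the encoding of implications as pairs of sets is an illustrative choice):

```python
# An implication A -> B is encoded as a pair of sets; an IS is a list of pairs.
SIGMA = [({"a"}, {"b"}), ({"a", "c"}, {"d"}), ({"e"}, {"a"})]

def satisfies(X, sigma):
    """Return True iff X satisfies every implication: A <= X implies B <= X."""
    return all(B <= X for A, B in sigma if A <= X)

# {a, b, e} satisfies all three rules, while {a, e} violates a -> b.
```

For instance `satisfies({"a", "b", "e"}, SIGMA)` holds but `satisfies({"a", "e"}, SIGMA)` does not, matching the intuition that an occurrence of a forces an occurrence of b.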

Given such a system, several types of questions arise. A common problem is to find a minimum “full” system of implications, from which all implications between elements can be obtained. Another very natural issue is for instance the question “is it possible that a and e occur and not c?”. One can answer using either the implicational Moore family associated to Σ (FΣ contains all subsets X ⊆ S that satisfy each Σ-implication) or the closure operator ϕFΣ associated to FΣ (ϕFΣ maps a subset X ⊆ S on the least element F ∈ FΣ s.t. X ⊆ F). In our example the answer is “yes” because abe ∈ FΣ and c ∉ abe, or because c ∉ ϕFΣ(ae) = abe. Answering questions about a system using the closure ϕFΣ(X) has a great advantage: it avoids the construction of the whole Moore family (which contains 14 elements in our example). Moreover ϕFΣ can also be used to compute efficiently FΣ, whose direct definition-based generation relies upon an exponential enumeration of all subsets of S. Note that data-mining has to deal with a reverse problem addressing the efficient generation of association rules from a family of closures called itemsets [PBTL99].

The properties of implicational Moore families and ISs have been studied in [GD86, Wil94, Wil95, CM04] from a theoretical point of view. This paper focuses on algorithmic issues. Following the intuition given before, it is based on the efficient computation of ϕFΣ(X). As detailed in the core of the paper, this computation was addressed in several ways in [Mai83, MR92, Wil95]: ϕFΣ(X) is obtained by several enumerations of the implications of Σ. For instance in the previous example the computation of ϕFΣ(ae) = abe is performed by scanning the Σ-implications once (first and third implications), but the computation of ϕFΣ(ce) = abcde is performed by scanning them twice: The first enumeration brings ace ⊆ ϕFΣ(ce) (third implication) and the second one brings bd ⊆ ϕFΣ(ce) (first and second implications).

The new approach we propose is based on two fundamental algorithmic observations: 1. the computation of ϕFΣ(X) is more efficient when Σ is optimal, where optimal means “of minimal size”; 2. the number of enumerations of Σ-implications needed to compute ϕFΣ(X) can be reduced to 1 when Σ is direct.

Let us illustrate it on our example. The IS Σd = Σ ∪ {e → b, ce → d} is direct and equivalent to Σ (it is easy to check that ϕFΣd(ce) can now be computed by a single scanning of Σd-implications). It is not optimal. The IS Σo = {e → ab, ac → d, a → b, ce → d} is similarly equivalent to Σ and direct, but also direct-optimal in the sense that there exists no equivalent direct IS of smaller size (though Σo ⊄ Σd). Our approach thus consists in computing ϕFΣ(X) (hence FΣ) by exploiting the directness and optimality properties: We define the direct-optimal IS Σdo generated from Σ. First Σ is completed by some implications into the direct IS Σd, then Σd is modified into the optimal IS Σdo (Σ, Σd and Σdo being equivalent).
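The effect of directness on the number of scans can be observed with a small sketch (ours, assuming the naive fixpoint recipe; the scan count includes the final pass that detects stabilization):

```python
SIGMA = [({"a"}, {"b"}), ({"a", "c"}, {"d"}), ({"e"}, {"a"})]
# The direct IS of the example: Sigma extended with e -> b and ce -> d.
SIGMA_D = SIGMA + [({"e"}, {"b"}), ({"c", "e"}, {"d"})]

def closure_with_scans(X, sigma):
    """Fixpoint computation of the closure; also count scans of the implication list."""
    closed, scans, changed = set(X), 0, True
    while changed:
        changed = False
        scans += 1
        for A, B in sigma:
            if A <= closed and not B <= closed:
                closed |= B
                changed = True
    return closed, scans

# Both ISs give the same closure of ce, but SIGMA_D needs one scan fewer.
```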

The paper is organized as follows: Sect. 2 gives notations and standard definitions. Section 3 first gives some preliminaries on the computation of ϕFΣ(X) (Sect. 3.1), then defines the notion of direct IS and characterizes the direct IS Σd generated from Σ (Sect. 3.2). In the same way, Sect. 3.3 defines the notion of direct-optimal IS and characterizes the direct-optimal IS Σo generated from a direct IS Σ. By combination of these two definitions, we naturally obtain the direct-optimal IS Σdo generated from a given IS Σ (Sect. 3.4).

Section 4 deals with algorithmic aspects of the above results. We first describe an efficient data structure introduced in [Gan84, HN96, NR99] and called lexicographic tree, traditionally used to represent families and extended here to represent ISs (Sect. 4.1). We then give an algorithm to compute the closure ϕFΣ(X) from a direct-optimal IS (Sect. 4.2), and an algorithm to compute the direct-optimal IS Σdo generated from some IS Σ, where Σ and Σdo are represented by a lexicographic tree. We finally propose an algorithm to generate FΣ (Sect. 4.3), based on properties of the lattice (FΣ, ⊆).

We abuse notations and write ac for {a, c}.

At this stage the reader should admit the following recipe: Initialize ϕFΣ(X) with X, then iteratively scan Σ-implications until stabilization, doing: If A → B ∈ Σ and A ⊆ ϕFΣ(X) then add B to ϕFΣ(X).

2 Definitions and Notations

Let us consider a finite set of elements S. A family F on S is a set of subsets of S: F ⊆ 2^S. A Moore family F on S is a family stable by intersection and which contains S: S ∈ F, and F1, F2 ∈ F implies F1 ∩ F2 ∈ F. The poset (F, ⊆) is a lattice with, for each F1, F2 ∈ F, F1 ∧ F2 = F1 ∩ F2 and F1 ∨ F2 = ∩{F ∈ F | F1 ∪ F2 ⊆ F} (recall that a lattice is an order relation (i.e. reflexive, antisymmetric and transitive) over a set of elements such that any pair x, y of elements has a join (i.e. a least upper bound) denoted by x ∨ y, and a meet (i.e. a greatest lower bound) denoted by x ∧ y).

Let X, X′ be subsets of S. A closure operator ϕ on S is a map on 2^S which is isotone (X ⊆ X′ implies ϕ(X) ⊆ ϕ(X′)), extensive (X ⊆ ϕ(X)) and idempotent (ϕ²(X) = ϕ(X)). ϕ(X) is called the closure of X by ϕ. X is said to be closed by ϕ whenever it is a fixed point for ϕ, i.e. ϕ(X) = X.

The set of all Moore families and the set of all closure operators on S are in a one-to-one correspondence. The Moore family Fϕ associated to the closure operator ϕ is the set of all closed elements of ϕ:

Fϕ = {F ⊆ S | F = ϕ(F)}   (1)

The closure operator ϕF associated to the Moore family F is such that, for any X ⊆ S, ϕF(X) is the least element F ∈ F that contains X:

ϕF(X) = ∩{F ∈ F | X ⊆ F}   (2)

In particular ϕF(∅) = ∩F. Note that ϕF(X) ∈ F because Moore families are closed by intersection. Moreover for all F1, F2 ∈ F, F1 ∨ F2 = ϕF(F1 ∪ F2) and F1 ∧ F2 = ϕF(F1 ∩ F2) = F1 ∩ F2.
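Equations (2) and (3) translate directly into a brute-force sketch (ours, not the paper's; exponential in |S| and used only to illustrate the definitions on the running example):

```python
from itertools import combinations

S = set("abcde")
SIGMA = [({"a"}, {"b"}), ({"a", "c"}, {"d"}), ({"e"}, {"a"})]

def moore_family(S, sigma):
    """All subsets X of S satisfying every implication (definition-based, exponential)."""
    fam = []
    elems = sorted(S)
    for r in range(len(elems) + 1):
        for combo in combinations(elems, r):
            X = set(combo)
            if all(B <= X for A, B in sigma if A <= X):
                fam.append(X)
    return fam

def phi(X, fam):
    """Equation (2): intersection of all family members containing X."""
    return set.intersection(*(F for F in fam if X <= F))

FAM = moore_family(S, SIGMA)
# The family of the running example has 14 members, and phi(ae) = abe.
```

This recovers the two facts used in the introduction: the Moore family of the example has 14 elements, and the closure of ae is abe.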

Let A, B be subsets of S. An Implicational System (IS for short) Σ on S is a binary relation on 2^S: Σ ⊆ 2^S × 2^S. A couple (A, B) ∈ Σ is called a Σ-implication, whose premise is A and conclusion is B. It is written A →Σ B or A → B (meaning “A implies B”). The family FΣ on S associated to Σ is:

FΣ = {X ⊆ S | A ⊆ X ⇒ B ⊆ X for each A → B ∈ Σ}   (3)

i.e. it is the set of sets X ⊆ S such that “X contains A implies X contains B”. FΣ is clearly a Moore family, called the implicational Moore family on S associated to Σ. Several ISs can describe the same Moore family: Σ and Σ′ on S are equivalent if FΣ = FΣ′. The problem is to find the smallest ones, according to various criteria [Wil94]. Σ is non-redundant if Σ \ {X → Y} is not equivalent to Σ, for all X → Y in Σ. It is minimum if |Σ| ≤ |Σ′| for all ISs Σ′ equivalent to Σ. Σ is optimal if s(Σ) ≤ s(Σ′) for all ISs Σ′ equivalent to Σ, where s(Σ) is the size of Σ defined by:

s(Σ) = ∑_{A→B ∈ Σ} (|A| + |B|)   (4)
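For instance, the size (4) of the running example's IS is 7; a one-line sketch (illustrative):

```python
SIGMA = [({"a"}, {"b"}), ({"a", "c"}, {"d"}), ({"e"}, {"a"})]

def size(sigma):
    """Equation (4): s(sigma) = sum of |A| + |B| over all implications A -> B."""
    return sum(len(A) + len(B) for A, B in sigma)

# size(SIGMA) == (1+1) + (2+1) + (1+1) == 7
```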

Other definitions not recalled here can be found in the survey of Caspard and Monjardet [CM04].

In the following, S is endowed with a total order <α, or simply α. A subset X = {x_{j1}, x_{j2}, . . . , x_{jn}} is viewed as the word x_{j1} x_{j2} . . . x_{jn} sorted according to α: x_{j1} <α x_{j2} <α · · · <α x_{jn}. Σ is an IS on S, FΣ or F is the Moore family associated to Σ, and ϕFΣ or ϕΣ or simply ϕ is the induced closure operator.


3 Characterization of ϕΣ from Σ

As explained in the introduction, a number of problems related to an IS Σ can be answered by computing closures of the kind ϕΣ(X), for some X ⊆ S. Section 3.1 presents important notions used further and introduces our method: The idea is to perform the computation of ϕΣ(X) not on Σ but on another equivalent IS which makes the computation more efficient. Section 3.2 defines such a convenient and equivalent IS, called direct. Section 3.3 characterizes the smallest equivalent direct IS inferred from a direct one, called direct-optimal. Finally Sect. 3.4 characterizes the direct-optimal IS equivalent to some IS Σ.

3.1 Preliminaries

A direct and naive computation of ϕΣ (or simply ϕ) follows from equations (2) and (3):

ϕ(X) = ∩{X′ ⊆ S | X ⊆ X′ and A ⊆ X′ implies B ⊆ X′ for each A →Σ B}   (5)

It requires an enumeration of all subsets X′ such that X ⊆ X′ ⊆ S, plus a test on the premise and conclusion of each implication. Moreover these enumerations must be done for each particular X under consideration.

[Wil94, Wil95] propose a definition of ϕ(X) which induces a more efficient computation:

ϕ(X) = X^{ΣΣ...Σ}   (6)

where

X^Σ = X ∪ ∪{B | A ⊆ X and A →Σ B}   (7)

According to [Wil95], ϕ(X) is in this way obtained in O(|S|² |Σ|) by iteratively scanning Σ-implications: ϕ(X) is initialized with X then increased with B for each implication A →Σ B such that ϕ(X) contains A. The computation cost depends on the number of iterations, in any case bounded by |S|. In order to practically limit this number (keeping the same complexity), [Wil95] tunes algorithms using additional data structures.

It is worth noting that for some particular ISs the computation of ϕ requires only one iteration. Such an IS is called direct (one can also find iteration-free in [Wil94]):

Definition 1 An IS Σ is direct if, for all X ⊆ S:

ϕ(X) = X^Σ = X ∪ ∪{B | A ⊆ X and A →Σ B}   (8)

Instead of tuning algorithms applied to some IS Σ, a possible approach is to infer from Σ an equivalent and direct IS Σ′. Once it is done, each closure ϕ(X) can be computed by simply enumerating Σ′-implications. As an illustration, let us consider full ISs, which are a classical type of direct ISs.
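On a small ground set, Definition 1 can be checked exhaustively. A brute-force sketch (ours; the one-pass operator is equation (7)):

```python
from itertools import combinations

def one_pass(X, sigma):
    """X^Sigma (equation (7)): add B for every A -> B whose premise A lies in X."""
    result = set(X)
    for A, B in sigma:
        if A <= X:          # premises are tested against the original X only
            result |= B
    return result

def closure(X, sigma):
    """Iterate one_pass until a fixpoint is reached (equation (6))."""
    prev, cur = None, set(X)
    while cur != prev:
        prev, cur = cur, one_pass(cur, sigma)
    return cur

def is_direct(S, sigma):
    """Definition 1: sigma is direct iff one pass already yields the closure for all X."""
    elems = sorted(S)
    return all(one_pass(set(c), sigma) == closure(set(c), sigma)
               for r in range(len(elems) + 1)
               for c in combinations(elems, r))

SIGMA = [({"a"}, {"b"}), ({"a", "c"}, {"d"}), ({"e"}, {"a"})]
SIGMA_D = SIGMA + [({"e"}, {"b"}), ({"c", "e"}, {"d"})]
# SIGMA is not direct (closing ce needs two passes) while SIGMA_D is.
```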

According to [CM04] (Def. 49 p. 20), a full IS is a preorder (a reflexive and transitive relation) that contains the preorder ⊇ on 2^S × 2^S and is ∪-stable, that is, it verifies the property:

for all A, B, C, D ⊆ S, A → B and C → D imply A ∪ C → B ∪ D

As stated by Prop. 1, a full IS is direct.


Proposition 1 (Corollary 53 in [CM04]) For Σ a full IS,

ϕ(X) = ∪{B ⊆ S | X →Σ B} = X^Σ

Starting from the notion of full IS, and given some IS Σ, we define the full IS Σf inferred from Σ, equivalent to Σ (Prop. 2), and direct (Prop. 1): it contains all Σ-implications, all implications due to inclusions in 2^S × 2^S, and all implications generated by Σ-implications and inclusions.

Definition 2 The full IS Σf inferred from Σ is defined as the smallest§ IS s.t.:

1. Σ ⊆ Σf, and

2. Σf verifies the three following properties: For all A, B, C, D ⊆ S,

P1 (inclusion axiom): B ⊆ A implies A →Σf B

P2 (transitivity axiom): A →Σf B and B →Σf C implies A →Σf C

P3 (union axiom): A →Σf B and C →Σf D implies A ∪ C →Σf B ∪ D

Proposition 2 Σ and Σf are equivalent.

For completeness, we give the proof of this simple result.

Proof: Let us prove that FΣ = FΣf.

⊇: Immediate since Σ ⊆ Σf.

⊆: Consider F ∈ FΣ. It is easy to check by induction that F satisfies “A ⊆ F implies B ⊆ F” for any A →Σf B induced by P1, P2 and P3.

Using Σf, one can compute a closure ϕΣ(X) in only one iteration. Nevertheless note that the directness of Σf is due to the fact that any subset A ⊆ S appears as a premise of a Σf-implication: it makes the computation of Σf exponential, thus impracticable. The idea is then to look for smaller ISs, not necessarily full, but still direct and equivalent to Σ (and Σf). The smallest such one is called direct-optimal.

Definition 3 An IS Σ is direct-optimal if it is direct, and if s(Σ) ≤ s(Σ′) for any direct IS Σ′ equivalent to Σ.

Our approach can be summarized as follows. Given some IS Σ:

• We start from the three axioms that describe a full IS (cf. Def. 2) to define in Sect. 3.2 the direct IS Σd inferred from Σ, whose directness is stated by Th. 1;

• Consider Σ is direct but perhaps not optimal: In this case some Σ-implications can be removed or simplified, while preserving the directness and semantics of Σ. In Sect. 3.3 we first formally characterize direct-optimal ISs (Th. 2) then, given a direct IS Σ, we define the direct-optimal IS Σo inferred from Σ.

• By combination of these two results, we obtain the definition of the direct-optimal IS Σdo inferred from some IS Σ. Moreover, we state that equivalent ISs define a unique direct-optimal IS (Corollary 1). Closures ϕΣ(X) can then be computed by only one enumeration of Σdo-implications, at a minimal cost.

§ “Smallest” for the preorder ⊆.

“Minimal” in the sense that using any other equivalent direct IS would be less efficient. Nevertheless, in the cases where few closures are needed, or where a small non-direct IS is considered, it may be more efficient to iterate Σ-enumerations instead of computing Σd then Σdo.


3.2 Σd: a Direct IS Generated from an IS Σ

In this section we define an IS smaller than Σf, but still direct and equivalent to Σ. To do so, let us consider again the three axioms that characterize Σf (Def. 2), and let us explain which Σf-implications can be removed without altering the directness and semantics of the IS, or dually which implications must necessarily be added to Σ. We consider the computation of ϕ(X) indicated by (6), for X ⊆ S.

Given a pair of implications (I1, I2) present in the IS under construction, the principle is to “summarize” via a third implication the result of the ϕ(X) iterative computation process applied to (I1, I2). Axioms P2 and P3 do apply this principle. Nevertheless the inferred implications (including those inferred by P1) are sometimes clearly redundant with properties particular to the closure operator ϕ. It is the case when no iterative process is needed, because X contains both the implication premises:

1. Assume A ⊆ X. The implication A →Σf B stated by P1 is redundant: it causes the explicit enrichment of ϕ(X) with B while according to Eq. (7) (and due to the ϕ extensiveness) we have ϕ(X) ⊇ X, and X ⊇ A ⊇ B.

2. Assume A, B ⊆ X. The implication A →Σf C stated by P2 is redundant with B →Σf C, which already states the enrichment of ϕ(X) with C.

3. Assume A, C ⊆ X. Similarly the implication A ∪ C →Σf B ∪ D stated by P3 is redundant with A →Σf B and C →Σf D.

When an iterative process is required to compute ϕ(X), implications inferred by a combination of the three axioms are necessary. For example let us consider the following IS Σ:

{ac →Σ d, e →Σ a}

Assume X = ce. The computation of ϕ(X) = acde through Σ requires an iterative process: The fact d ∈ ϕ(X) is known from the first implication only when the intermediate ϕ(X) has been enriched with a (second implication). To be direct, the IS must contain the implication:

ce →Σ d

obtained by applying successively:

• P1 to infer the implication c → c;

• P3 applied to c → c and e →Σ a to infer ce → ac;

• P2 applied to ce → ac and ac →Σ d to infer ce →Σ d.

Nevertheless implications c → c and ce → ac are redundant with others, as explained below. To avoid this redundancy, let us consider the pair

{A →Σ B, C →Σ D}   (9)

in the case where the computation of ϕ(X) requires an iteration: A ⊆ X but C ⊄ X. Because A ⊆ X, the first implication adds B to ϕ(X). Now if C ⊆ X ∪ B, the second implication adds D to ϕ(X). Since “A ⊆ X and C ⊆ X ∪ B” is equivalent to “A ∪ (C \ B) ⊆ X”, we can summarize this reasoning by the implication (10):

A ∪ (C \ B) →Σ D   (10)


In the previous example, ce → d is indeed obtained from the pair {e →Σ a, ac →Σ d}.

Note that the implication (10) is redundant with the implication C →Σ D when B ∩ C = ∅, since it then yields A ∪ C → D. This case does not happen here due to the condition C ⊄ X (C ⊄ X and C ⊆ X ∪ B imply C ∩ B ≠ ∅): We enforce it by imposing B ∩ C ≠ ∅ as the application condition of the rule.

The rule that infers implication (10) from implications (9) (called overlap axiom in Def. 4) encompasses the combination of axioms P2 and P3, but also P1: The goal of P1 is mainly to make any subset A ⊆ S appear as a premise of a Σf-implication, in order to compute ϕ(X) by Prop. 1. Since we compute ϕ(X) by Equations (6) and (7) instead, we can drop P1.

The definition of the direct IS inferred fromΣnow follows directly from what precedes:

Definition 4 The direct implicational system Σd generated from Σ is defined as the smallest IS s.t.:

1. Σ ⊆ Σd, and

2. Σd verifies the following property:

P4 (overlap axiom): for all A, B, C, D ⊆ S:

A →Σd B, C →Σd D and B ∩ C ≠ ∅ imply A ∪ (C \ B) →Σd D

We now adapt Prop. 1 to characterize ϕ from Σd.

Theorem 1 ϕ(X) = X^Σd = X ∪ ∪{B | A ⊆ X and A →Σd B}
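Definition 4 suggests a naive saturation procedure for building Σd: repeatedly apply P4 until no new implication appears. A sketch (ours; no attempt is made at the paper's efficient lexicographic-tree implementation, and redundant implications are not removed):

```python
def saturate_p4(sigma):
    """Close sigma under the overlap axiom P4:
    from A -> B and C -> D with B ∩ C ≠ ∅, add A ∪ (C \\ B) -> D."""
    rules = {(frozenset(A), frozenset(B)) for A, B in sigma}
    changed = True
    while changed:
        changed = False
        for A, B in list(rules):
            for C, D in list(rules):
                if B & C:                       # overlap condition B ∩ C ≠ ∅
                    new_rule = (A | (C - B), D)  # premise A ∪ (C \ B), conclusion D
                    if new_rule not in rules:
                        rules.add(new_rule)
                        changed = True
    return rules

SIGMA = [({"a"}, {"b"}), ({"a", "c"}, {"d"}), ({"e"}, {"a"})]
```

On the running example the saturation adds e → b (from e → a and a → b) and ce → d (from e → a and ac → d), recovering the Σd of the introduction.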

Two lemmas are needed to prove this theorem. Lemma 1 states that Σd ⊆ Σf, therefore that Σd is equivalent to Σ since Σ ⊆ Σd and Σf is equivalent to Σ. Lemma 2 is the core of the proof: it states that Σd contains all “significant” Σf-implications. By “significant” we mean an implication A →Σf B s.t. B ⊄ A, so that it can add B \ A to some ϕ(X) and is not trivially redundant like implications inferred by P1 in Def. 2. Lemma 2 states that any such Σf-implication A →Σf B is imitated by a set of Σd-implications, where a Σd-implication is associated to each y ∈ B \ A.

Lemma 1 Σd ⊆ Σf

Proof: Let {Xi → Yi}_{1≤i≤p} be the implications successively added to Σd in order to complete Σ by application of P4. We define Σ0 = Σ, Σi = Σi−1 ∪ {Xi → Yi} and Σp = Σd. The proof is by induction on i, with 0 ≤ i ≤ p. The base case is obtained by definition and Def. 2: Σ0 = Σ ⊆ Σf.

Inductive step: For i ≥ 1, let us prove that Σi−1 ⊆ Σf implies Σi ⊆ Σf, equivalently that Xi → Yi ∈ Σf. Since Xi → Yi is added to Σi−1 by application of P4, there exist A →Σi−1 B and C →Σi−1 D such that B ∩ C ≠ ∅, Xi = A ∪ (C \ B) and Yi = D. By induction hypothesis A → B ∈ Σf and C → D ∈ Σf. Then:

• From A → B ∈ Σf (by hypothesis) and B → B ∩ C ∈ Σf (by P1) we deduce from P2 that A → B ∩ C ∈ Σf.

• From A → B ∩ C ∈ Σf and C \ B → C \ B ∈ Σf (by P1) we deduce from P3 that A ∪ (C \ B) → (B ∩ C) ∪ (C \ B) = C ∈ Σf.

• From A ∪ (C \ B) → C ∈ Σf and C → D ∈ Σf (by hypothesis) we deduce from P2 that A ∪ (C \ B) → D ∈ Σf.

Therefore Xi → Yi ∈ Σf and the proof is achieved.


Lemma 2 For all X →Σf Y and y ∈ Y \ X, there exists X′ →Σd Y′ such that X′ ⊆ X and y ∈ Y′.

Proof: Let {Xi → Yi}_{1≤i≤p} be the implications successively added to Σf in order to complete Σ by application of P1, P2 or P3. We define Σ0 = Σ, Σi = Σi−1 ∪ {Xi → Yi} and Σp = Σf. The proof is by induction on i, with 0 ≤ i ≤ p.

Base case: Since Σ0 = Σ and Σ ⊆ Σd: for all X →Σ0 Y and y ∈ Y \ X, the implication X →Σd Y verifies X ⊆ X and y ∈ Y.

Inductive step: For i ≥ 1, assume the property is proved for Σi−1, i.e. for all X →Σi−1 Y, for all y ∈ Y \ X, there exists X′ →Σd Y′ such that X′ ⊆ X and y ∈ Y′. We consider the implication Xi → Yi and some element y ∈ Yi \ Xi and show that:

there exists Xi′ →Σd Yi′ s.t. Xi′ ⊆ Xi and y ∈ Yi′   (11)

If Yi ⊆ Xi then Yi \ Xi = ∅ and (11) is trivially satisfied. Assume Yi ⊄ Xi and let us consider that Xi → Yi has been added to Σi−1 by the application of P2 or P3 (since applying P1 implies that Yi ⊆ Xi, which contradicts the hypothesis). Let us consider successively the application of P3 and P2.

Case P3: There exist A →Σi−1 B and C →Σi−1 D such that Xi = A ∪ C and Yi = B ∪ D, moreover y ∈ (B ∪ D) \ (A ∪ C). We may assume that y ∈ B, the case y ∈ D being dual. Then y ∈ B \ A and, since A → B ∈ Σi−1 and by induction hypothesis: There exists A′ →Σd B′ such that A′ ⊆ A and y ∈ B′. Since A′ ⊆ A ⊆ A ∪ C = Xi, A′ →Σd B′ satisfies (11).

Case P2: There exist A →Σi−1 B and B →Σi−1 C such that Xi = A, Yi = C and y ∈ C \ A. Let us consider the two sub-cases y ∈ B and y ∉ B.

• y ∈ B implies y ∈ B \ A, and since A → B ∈ Σi−1: By induction hypothesis there exists A′ →Σd B′ such that A′ ⊆ A and y ∈ B′. Since A′ ⊆ A = Xi, A′ →Σd B′ satisfies (11).

• y ∉ B implies y ∈ C \ B, and since B → C ∈ Σi−1: By induction hypothesis there exists B′ →Σd C′ such that B′ ⊆ B and y ∈ C′. If B′ ⊆ A = Xi then B′ →Σd C′ satisfies (11). If B′ ⊄ A, let us write

B′ \ A = {yk}_{1≤k≤q}

Since B′ ⊆ B: yk ∈ B \ A, and since A → B ∈ Σi−1: By induction hypothesis there exist q implications Ak →Σd Bk such that Ak ⊆ A and yk ∈ Bk. Therefore:

B′ \ A ⊆ ∪_{1≤k≤q} Bk  and  B′ ⊆ A ∪ ∪_{1≤k≤q} Bk

Axiom P4 is now used to build an implication whose premise is included into A and whose conclusion is C′, so it verifies (11) since Xi = A and y ∈ C′. This implication is the last element of a sequence of q implications Ak′ →Σd C′ obtained by applying iteratively P4 to implications Ak →Σd Bk.

– initialization: we define A1′ →Σd C′ as the result of P4 applied to A1 →Σd B1 and B′ →Σd C′ (note that y1 ∈ B1 ∩ B′ so B1 ∩ B′ ≠ ∅ and P4 can be applied), so A1′ = A1 ∪ (B′ \ B1).

– induction: for 1 < k ≤ q, we define Ak′ →Σd C′ as:
