University of Padua at CLEF 2002:

Experiments to evaluate a statistical stemming algorithm

Michela Bacchin, Nicola Ferro, Massimo Melucci
{michela.bacchin, nicola.ferro, massimo.melucci}@unipd.it

Department of Information Engineering – University of Padua
Via Gradenigo 6/a – 35131 Padova, Italy

Abstract

In Information Retrieval (IR), stemming is used to reduce variant word forms to a common root. The assumption is that if two words have the same root, then they represent the same concept. Hence stemming permits an IR system to match query and document terms which relate to the same meaning but which can appear in different morphological variants. In this paper we report on our participation in the CLEF 2002 Italian monolingual task, whose aim was to evaluate a statistical stemming algorithm based on link analysis. Considering that a word is formed by a prefix (stem) and a suffix, the key idea is that the interlinked prefixes and suffixes form a community of substrings; discovering these communities amounts to searching for the word splits which give the best word stems. The results show that stemming improves IR effectiveness. They also show that the effectiveness level of our algorithm, which incorporates neither heuristics nor linguistic knowledge, is comparable to that of an algorithm based on a-priori linguistic knowledge. This is an encouraging result, particularly in a multi-lingual context.

1 Introduction

The main objective of our current research in multi-lingual Information Retrieval is to design, implement and evaluate a language-independent stemming algorithm based on statistical methods. We participated in the CLEF 2002 Italian monolingual task with the aim of evaluating a statistical algorithm based on link-analysis methods. To accomplish this objective, we designed our experiments to investigate whether stemming preserves or even enhances the effectiveness of retrieval of documents written in Italian, making use of both a linguistic stemmer and a statistical stemmer. We then investigated whether a statistical and language-independent stemming algorithm can perform as effectively as an algorithm developed on the basis of a-priori linguistic knowledge. This paper reports on our participation in CLEF 2002 and is organized as follows: Section 2 introduces the stemming process, Section 3 reports the methodological approach we followed to build a new statistical and language-independent stemming algorithm, Section 4 describes our official runs and the results we obtained, and Section 5 reports some conclusions and future work.

2 Stemming

Stemming is used to reduce variant word forms to a common root. The assumption is that if two words have the same root, then they represent the same concept. Hence stemming permits an IR system to match query and document terms which relate to the same meaning but which can appear in different morphological variants.


The effectiveness of stemming is a debated issue, and there are different results and conclusions. If effectiveness is measured by the traditional precision and recall measures, it seems that for a language with a relatively simple morphology, like English, stemming has little influence on the overall performance [6]. In contrast, for languages with a more complex morphology, like the Romance languages, stemming can significantly increase retrieval effectiveness [13] and can also increase precision for short queries [8]. Finally, as system performance must reflect users' expectations, it has to be considered that the use of a stemmer is intuitive to many users [6], who may express a query using a specific word form without keeping in mind that only a variant of this word may appear in a relevant document. Hence, stemming can also be viewed as a feature of the user-interaction interface of an IR service.

To design a stemming algorithm, it is possible to follow a linguistic approach, using prior knowledge of the morphology of the specific language, or a statistical approach, using methods based on statistical principles to infer from the corpus of documents the word formation rules of the language under study. The former implies manual labor by experts in linguistics: as a matter of fact, it is necessary to formalize the word formation rules, which is hard work, especially for those languages whose morphology is complex. Stemming algorithms based on statistical methods, on the other hand, incur no extra costs when new languages are added to the system, an advantage that becomes crucial especially for multi-lingual IR systems.

3 Methodological Approach

We consider a special case of stemming which belongs to the category known as affix removal stemming [3]. In particular, our approach follows the suffix stripping paradigm adopted by most stemmers currently in use in IR, like those reported in [9, 12, 15]. This stemming process splits each word into two parts, prefix and suffix, and takes the stem to be the substring corresponding to the obtained prefix. Let us consider a finite collection of unique words $W = \{w_1, \ldots, w_N\}$ and a word $w \in W$ of length $|w|$; then $w$ can be written as $w = xy$, where $x$ is a prefix and $y$ is a suffix. If we split each word $w$ into all the $|w| - 1$ possible pairs of substrings, we build a collection of substrings, and each substring may be a prefix, a suffix, or both, of at least one element $w \in W$. Let $X$ be the set of the prefixes of the collection and $S \subseteq X$ be the set of the stems. We are interested in detecting the prefix $x$ that is the most probable stem for the observed word $w$. Hence, we have to determine the prefix $x$ such that:

\[
\begin{aligned}
x &= \arg\max_{x}\ \Pr(x \in S \mid w \in W) && (1)\\[4pt]
  &= \arg\max_{x}\ \frac{\Pr(w \in W \mid x \in S)\,\Pr(x \in S)}{\Pr(w \in W)} && (2)
\end{aligned}
\]

where (2) is obtained by applying Bayes' theorem, which lets us swap the order of dependence between events. We can ignore the denominator, which is the same for all splits of $w$. $\Pr(w \in W \mid x \in S)$ is the probability of observing $w$ given that the stem $x$ has been observed. A reasonable estimate of this probability would be the reciprocal of the number of words beginning with that stem, if the stems were known. However, the stems are unknown – indeed, stem detection is the target of this method – so the number of words beginning with a stem cannot be computed. Therefore we estimated this probability by the reciprocal of the number of words beginning with that prefix. As regards $\Pr(x \in S)$, we estimated this probability using an algorithm that discloses the mutual relationship between stems and derivations in forming the words of the collection.
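To make the preceding estimate concrete, the following minimal Python sketch (our illustration, not the authors' implementation; all names are ours) enumerates the $|w| - 1$ splits of each word and derives $\Pr(w \in W \mid x \in S)$ as the reciprocal of the prefix frequency:

    from collections import Counter

    def all_splits(word):
        """Yield every (prefix, suffix) pair with non-empty parts."""
        for i in range(1, len(word)):
            yield word[:i], word[i:]

    W = {"aba", "abb", "baa"}  # the toy collection used in the example below

    # Number of words in W beginning with each prefix.
    prefix_count = Counter(x for w in W for x, _ in all_splits(w))

    # Estimated Pr(w in W | x in S): reciprocal of the prefix frequency.
    pr_w_given_x = {x: 1.0 / n for x, n in prefix_count.items()}
    print(pr_w_given_x["ab"])  # "ab" begins two words ("aba", "abb") -> 0.5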

The rationale of using mutual reinforcement is based on the idea that stems extracted from W are those substrings that:

– are very frequent, and

– form words together with very frequent suffixes.

This means that very frequent prefixes are candidates to be stems, but they are discarded if they are not followed by very frequent suffixes; for example, all initials are very frequent prefixes but they are unlikely stems, because the corresponding suffixes are rather rare, if not unique – the same holds for suffixes corresponding to ending vowels or consonants. Thus, there are prefixes which are less frequent than initials but followed by frequent suffixes, themselves less frequent than ending characters: these prefixes and suffixes correspond to candidate correct word splits, and we label them as “good”. The key idea is that interlinked good prefixes and suffixes form a community of substrings whose links correspond to words, i.e. to splits. Discovering these communities is like searching for the best splits.

To compute the best split, we used the well-known algorithm called HITS, reported in [7] and often discussed in research papers as a paradigmatic algorithm for Web page retrieval. It considers a mutually reinforcing relationship among good authorities and good hubs, where an authority is a web page pointed to by many hubs and a hub is a web page which points to many authorities. The parallel with our context becomes clear once we associate the concept of a hub with a prefix and that of an authority with a suffix. The method belongs to the larger class of approaches which use substring frequencies to decide the goodness of prefixes and suffixes, often used in statistical morphological analysis [11, 4] and in the pioneering work [5]. The contribution of this paper is the application of the mutual reinforcement notion to prefix and suffix frequencies, so as to compute the best word splits, which give the best word stems, as explained in the following.

Using a graphical notation, the set of prefixes and suffixes can be represented as a graph $g$ such that the nodes are substrings and an edge occurs between nodes $x, y$ if $w = xy$ is a word in $W$. By definition of $g$, no vertex is isolated. As an example, let us consider the following toy set of words: $W = \{aba, abb, baa\}$; splitting these into all the possible prefixes and suffixes produces the graph reported in Figure 1a.
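For concreteness, the edge set of $g$ for this toy vocabulary can be enumerated with the all_splits helper from the sketch above (again our illustration):

    # Each (prefix, suffix) pair produced by all_splits is a directed edge of g;
    # the same pair arising from different words collapses into a single edge.
    edges = {(x, y) for w in {"aba", "abb", "baa"} for x, y in all_splits(w)}
    for x, y in sorted(edges):
        print(f"{x} -> {y}")
    # a -> ba, a -> bb, ab -> a, ab -> b, b -> aa, ba -> a  (six edges)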

[Figure 1a: the graph g obtained from W; nodes are the substrings a, aa, ab, b, ba, bb, with an edge x -> y for every split w = xy.]

  substring   prefix score   suffix score
  a              0.250          0.333
  aa             0.000          0.167
  ab             0.375          0.000
  b              0.125          0.167
  ba             0.250          0.167
  bb             0.000          0.167

Figure 1: (a) The graph obtained from W. (b) The prefix and suffix scores from W.

If a directed edge exists between $x$ and $y$, the mutual reinforcement notion can be stated as follows:

good prefixes point to good suffixes, and good suffixes are pointed to by good prefixes.

Let us define $P(y) = \{x : \exists w,\ w = xy\}$ and $S(x) = \{y : \exists w,\ w = xy\}$, which are, respectively, the set of all prefixes of a given suffix $y$ and the set of all suffixes of a given prefix $x$. If $p_x$ and $s_y$ indicate, respectively, the prefix score and the suffix score, the criterion can be expressed as:

\[
p_x = \sum_{y \in S(x)} s_y \qquad\qquad s_y = \sum_{x \in P(y)} p_x \qquad (3)
\]

under the assumption that scores are expressed as sums of scores and splits are equally weighted.

The method of mutual reinforcement has been formalized through the HITS iterative algorithm. Here we map HITS onto our study context as follows:

    Compute suffix scores and prefix scores from W
      V:    the set of substrings extracted from all the words in W
      P(y): the set of all prefixes of a given suffix y
      S(x): the set of all suffixes of a given prefix x
      N:    the number of all substrings in V
      n:    the number of iterations
      1:    the vector (1, ..., 1) in R^|V|
      0:    the vector (0, ..., 0) in R^|V|
      s(k): suffix score vector at step k
      p(k): prefix score vector at step k

      s(0) = 1
      p(0) = 1
      for each iteration k = 1, ..., n
          s(k) = 0
          p(k) = 0
          for each y in V
              s(k)_y = sum_{x in P(y)} p(k-1)_x
          for each x in V
              p(k)_x = sum_{y in S(x)} s(k)_y
          normalize p(k) and s(k) so that 1 = sum_x p(k)_x = sum_y s(k)_y
      end.
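The iteration above translates directly into code. The following Python sketch is our illustration of it (variable names mirror the pseudocode, not the authors' SPLIT implementation):

    def hits_scores(splits, n_iter=100):
        """Compute prefix and suffix scores by mutual reinforcement (HITS)."""
        P, S = {}, {}                        # P(y) and S(x) from the pseudocode
        for x, y in splits:
            P.setdefault(y, set()).add(x)    # prefixes of suffix y
            S.setdefault(x, set()).add(y)    # suffixes of prefix x
        V = set(P) | set(S)                  # all substrings
        p = {v: 1.0 for v in V}              # p(0) = 1
        s = {v: 1.0 for v in V}              # s(0) = 1
        for _ in range(n_iter):
            # s(k) from p(k-1), then p(k) from s(k)
            s = {y: sum(p[x] for x in P.get(y, ())) for y in V}
            p = {x: sum(s[y] for y in S.get(x, ())) for x in V}
            # normalize so that both score vectors sum to one
            tp, ts = sum(p.values()), sum(s.values())
            p = {x: v / tp for x, v in p.items()}
            s = {y: v / ts for y, v in s.items()}
        return p, s

    # On the toy vocabulary a single iteration already reproduces the numbers
    # of Figure 1b (e.g. p["ab"] = 0.375, s["a"] = 0.333); the official runs
    # described in Section 4 use 100 iterations.
    splits = {(w[:i], w[i:]) for w in {"aba", "abb", "baa"}
              for i in range(1, len(w))}
    p, s = hits_scores(splits, n_iter=1)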

Using matrix notation, the graph $g$ can be described by a $|V| \times |V|$ matrix $M$ such that

\[
m_{ij} =
\begin{cases}
1 & \text{if prefix } i \text{ and suffix } j \text{ form a word}\\
0 & \text{otherwise.}
\end{cases}
\]

As explained in [7], the algorithm computes two matrices, $A = M^T M$ and $B = M M^T$, where the generic element $a_{ij}$ of $A$ is the number of vertices that point to both $i$ and $j$, whereas the generic element $b_{ij}$ of $B$ is the number of vertices that are pointed to by both $i$ and $j$. The $n$-step iteration of the algorithm corresponds to computing $A^n$ and $B^n$. In the same paper, it has been argued that $s = [s_y]$ and $p = [p_x]$ converge to the principal eigenvectors of $A$ and $B$, respectively. The scores computed for the toy set of words are reported in Figure 1b.
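For completeness, here is the matrix form on the toy example as a small NumPy sketch (ours; the zero-score substrings aa and bb as prefixes, and ab as a suffix, are omitted from the index lists for brevity):

    import numpy as np

    prefixes = ["a", "ab", "b", "ba"]        # rows of M
    suffixes = ["a", "aa", "b", "ba", "bb"]  # columns of M
    splits = [("a", "ba"), ("a", "bb"), ("ab", "a"),
              ("ab", "b"), ("b", "aa"), ("ba", "a")]

    M = np.zeros((len(prefixes), len(suffixes)))
    for x, y in splits:
        M[prefixes.index(x), suffixes.index(y)] = 1.0

    A = M.T @ M   # a_ij: number of prefixes pointing to both suffixes i and j
    B = M @ M.T   # b_ij: number of suffixes pointed to by both prefixes i and j

    # One HITS step in matrix form, with the L1 normalization of the pseudocode.
    p = np.ones(len(prefixes))
    s = M.T @ p
    p = M @ s
    p, s = p / p.sum(), s / s.sum()
    print(dict(zip(prefixes, p.round(3))))   # {'a': 0.25, 'ab': 0.375, ...}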

As explained previously, we argue that the probability that $x$ is a stem can be estimated by the prefix score $p_x$ just calculated. The underlying assumption is that the scores can be seen as probabilities and, in effect, it has been proved in a recent work that HITS scores can be considered as the stationary distribution of a random walk [1]. In particular, the authors proved the existence of a Markov chain whose stationary distribution equals the hub vector after the $n$-th iteration of Kleinberg's algorithm – in our context, the prefix score vector $p = [p_x]$. The generic element $q_{ij}(n)$ of the transition matrix of the chain is the probability that, starting from $i$, one reaches $j$ after $n$ steps, “bouncing” on the suffixes associated with both $i$ and $j$. To interpret the result in a linguistic framework, $p_i$ can be seen as the probability that $i$ is judged to be a stem by the community of substrings (suffixes) resulting from the process of splitting the words of a language. In Table 1, all the possible splits of all the words are reported and measured using the estimated probability.

  word   prefix   suffix   words beginning   words ending   probability   choice
                           by prefix         by suffix
  baa    b        aa       1                 1               0.1250
  baa    ba       a        1                 2               0.2500        *
  aba    a        ba       2                 1               0.1250
  aba    ab       a        2                 2               0.1875        *
  abb    a        bb       2                 1               0.1250
  abb    ab       b        2                 1               0.1875        *

Table 1: The candidate splits from W = {aba, baa, abb}.
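Putting the pieces together, the stem choice of equations (1)–(2) can be sketched as follows, reusing prefix_count and the score dictionary p returned by hits_scores in the sketches above (again our illustration, not the authors' code):

    def choose_stem(word, prefix_count, p):
        """Pick the split maximizing Pr(w | x) * Pr(x in S)."""
        best, best_prob = None, -1.0
        for i in range(1, len(word)):
            x = word[:i]
            prob = (1.0 / prefix_count[x]) * p.get(x, 0.0)
            if prob > best_prob:
                best, best_prob = x, prob
        return best, best_prob

    for w in ("aba", "abb", "baa"):
        print(w, choose_stem(w, prefix_count, p))
    # aba -> ('ab', 0.1875), abb -> ('ab', 0.1875), baa -> ('ba', 0.25):
    # the starred rows of Table 1.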

4 Experiments

The aim of the CLEF 2002 experiments is to compare the retrieval effectiveness of the link analysis-based algorithm illustrated in the previous Section with that of an algorithm based on a-priori linguistic knowledge: the hypothesis is that a language-independent algorithm, such as the one we propose, might effectively replace one developed on the basis of manually coded derivational rules. Before comparing the algorithms, we assessed the impact of both stemming algorithms by comparing their effectiveness with that reached without any stemmer. In fact, we wanted to test whether system performance is significantly hurt by the application of stemming, as hypothesized in [6]. If stemming on the contrary improved effectiveness, and the effectiveness of the tested algorithms were comparable, the link-based algorithm would ensure low costs for extending it to other languages as well, which is crucial in multi-lingual settings. To evaluate stemming, we decided to compare the performance of an IR system changing only the stemming algorithm between runs, all other things being equal.

4.1 Experimental Prototype System

For indexing and retrieval, we used an experimental IR system, called IRON, which was developed by our research group with the aim of having a robust tool for carrying out IR experiments. IRON is built on top of the Lucene 1.2 RC4 library, an open-source library for IR written in Java and publicly available at [10]. The system implements the vector space model [16] and a (tf · idf)-based weighting scheme [17]. The stop-list used consists of 409 frequent Italian words and is publicly available at [18].

As regards the realization of the statistical stemming algorithm, we built a suite of tools, called Stemming Program for Language Independent Tasks (SPLIT), which implements the link-based algorithm and chooses the best stem according to the probabilistic criterion described in Section 3. From the vocabulary of the Italian CLEF sub-collection, SPLIT builds a 2,277,297-node and 1,215,326-edge graph, which is processed to compute prefix and suffix scores – SPLIT took 2.5 hours for 100 iterations on a personal computer running Linux with an 800 MHz Intel CPU and 256 MB RAM.

4.2 Runs

We tested four different stemming algorithms:

1. NoStem: No stemming algorithm was applied.

2. Porter-like: We used the stemming algorithm for the Italian language which is freely available at the Snowball web site [14] edited by M. Porter. Besides being publicly available for research purposes, we chose this algorithm because it uses a kind of a-priori knowledge of the Italian language, so comparing our SPLIT algorithm with this particular “linguistic” algorithm could give some information about the possibility of approximating linguistic knowledge with statistically inferred knowledge.

3. SPLIT: We implemented our first version of the stemming algorithm based on link analysis, run for 100 iterations.

4. SPLIT-L3: We included in our stemming algorithm a small injection of linguistic knowledge, inserting a heuristic rule which forces the length of the stem to be at least 3 (a sketch of this variant follows).
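The minimum-length rule is easy to express on top of the probabilistic choice. A possible reading of the SPLIT-L3 variant (our hypothetical sketch: min_len and the fallback behavior for very short words are our assumptions, not a documented detail of SPLIT):

    def choose_stem_l3(word, prefix_count, p, min_len=3):
        """Best split as in SPLIT, restricted to stems of length >= min_len."""
        candidates = [
            ((1.0 / prefix_count[word[:i]]) * p.get(word[:i], 0.0), word[:i])
            for i in range(min_len, len(word))   # prefix word[:i] has length i
        ]
        if not candidates:   # word too short to split: keep it whole (assumption)
            return word
        return max(candidates)[1]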


4.3 A Global Evaluation

We carried out a macro evaluation by averaging the results over all the queries of the test collection. Table 2 summarizes the figures of the macro analysis of the stemming algorithms for the 2002 topics, while Table 3 reports the 2001 data.

  Run ID     Algorithm     N. Relevant Retrieved   Av. Precision   R-Precision
  PDDN       NoStem        887                     0.3193          0.3367
  PDDP       Porter-like   914                     0.3419          0.3579
  PDDS2PL    SPLIT         913                     0.3173          0.3310
  PDDS2PL3   SPLIT-L3      911                     0.3200          0.3254

Table 2: Macro comparison among runs for 2002 topics.

  Algorithm     N. Relevant Retrieved   Av. Precision   R-Precision
  NoStem        1093                    0.3387          0.3437
  Porter-like   1169                    0.3753          0.3619
  SPLIT         1143                    0.3519          0.3594
  SPLIT-L3      1149                    0.3589          0.3668

Table 3: Macro comparison among runs for 2001 topics.

Note that, for both the 2001 and 2002 topics, all the considered stemming algorithms improve recall: the number of relevant documents retrieved is larger than that observed without any stemmer, and the increase holds for every stemming algorithm. As regards precision, for the 2002 topics stemming does not hurt the overall performance of the system, while for the 2001 data stemming even increases precision, so that the overall performance is higher thanks to the application of stemming.

Figure 2 shows the averaged Recall-Precision curve at different levels of recall, and Figure 3 illustrates the precision at given document cutoff values, both for the 2002 and 2001 topic sets.

[Figure 2: interpolated recall vs. average precision for (a) the 2002 topics and (b) the 2001 topics; one curve per run: NoStem, Porter-like, SPLIT, SPLIT-L3.]

Figure 2: Average Precision Curves for four stemming algorithms.

As regards the use of link-based stemming algorithms, it is worth noting that SPLIT attains levels of effectiveness comparable to those of an algorithm based on linguistic knowledge. This is surprising, considering that SPLIT was built without any sophisticated extension to HITS and that neither heuristics nor linguistic knowledge was used to improve effectiveness. It should also be considered a good result given that it was obtained for the Italian language, which is morphologically more complex than English.

[Figure 3: precision at document cutoff values (5 to 1000 retrieved documents) for (a) the 2002 topics and (b) the 2001 topics; one curve per run: NoStem, Porter-like, SPLIT, SPLIT-L3.]

Figure 3: R-Precision Curves for four stemming algorithms.

5 Conclusions and Future Work

The objective of this research was to investigate a stemming algorithm based on link analysis procedures. The idea is that prefixes and suffixes – that is, stems and derivations – form communities once extracted from words. We tested this hypothesis by comparing the retrieval effectiveness of SPLIT, a link analysis-based algorithm derived from HITS, with that of an algorithm based on linguistic knowledge, on a morphologically complex language such as Italian.

The results are encouraging because the effectiveness level of SPLIT is comparable to that of the algorithm developed by Porter. The results should be considered even better since SPLIT incorporates neither heuristics nor linguistic knowledge. Moreover, stemming, and hence SPLIT, was shown to improve effectiveness with respect to not using any stemmer.

We are carrying out further analysis at a micro level to understand the conditions under which SPLIT performs better or worse than other algorithms. Further work is in progress to improve the probabilistic decision criterion and to insert linguistic knowledge directly into the link-based model, by weighting the links among prefixes and suffixes with a probabilistic function which could capture available information about the language, such as, for example, the minimum length of a stem. Finally, further experimental work is in progress with other languages.

References

[1] A. Borodin, G.O. Roberts, J.S. Rosenthal, and P. Tsaparas. Finding authorities and hubs from link structures on the World Wide Web. In Proceedings of the World Wide Web Conference, pages 415–429, Hong Kong, 2001. ACM Press.

[2] C. Cleverdon. The Cranfield Tests on Index Language Devices. In K. Sparck Jones and P. Willett, editors, Readings in Information Retrieval, pages 47–59. Morgan Kaufmann, 1997.

[3] W.B. Frakes and R. Baeza-Yates. Information Retrieval: data structures and algorithms. Prentice Hall, 1992.


[4] J. Goldsmith. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27(2):154–198, 2001.

[5] M. Hafer and S. Weiss. Word segmentation by letter successor varieties. Information Storage and Retrieval, 10:371–385, 1974.

[6] D. Harman. How effective is suffixing? Journal of the American Society for Information Science, 42(1):7–15, 1991.

[7] J. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604–632, September 1999.

[8] R. Krovetz. Viewing morphology as an inference process. In Proceedings of the ACM International Conference on Research and Development in Information Retrieval (SIGIR), 1993.

[9] J. Lovins. Development of a stemming algorithm. Mechanical Translation and Computational Linguistics, 11:22–31, 1968.

[10] The Jakarta Project. Lucene. http://jakarta.apache.org/lucene/docs/index.html, 2002.

[11] C.D. Manning and H. Schütze. Foundations of statistical natural language processing. The MIT Press, 1999.

[12] C.D. Paice. Another stemmer. ACM SIGIR Forum, 24(3):56–61, 1990.

[13] M. Popovic and P. Willett. The effectiveness of stemming for natural-language access to Slovene textual data. Journal of the American Society for Information Science, 43(5):383–390, 1992.

[14] M. Porter. Snowball: A language for stemming algorithms. http://snowball.sourceforge.net, 2001.

[15] M.F. Porter. An algorithm for suffix stripping. Program, 14(3):130–137, 1980.

[16] G. Salton and M. McGill. Introduction to modern Information Retrieval. McGraw-Hill, New York, NY, 1983.

[17] G. Salton and C. Buckley. Term weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513–523, 1988.

[18] Institut interfacultaire d’informatique. CLEF and Multilingual information retrieval. University of Neuchatel. http://www.unine.ch/info/clef/, 2002.

[19] C. Buckley. trec_eval. ftp://ftp.cs.cornell.edu/pub/smart/, 2002.
