University of Oxford, 5 May 2009. P. Senellart, A. Mittal, D. Muschick, R. Gilleron, M. Tommasi. Automatic Wrapper Induction from Hidden-Web Sources with Domain Knowledge


(1)

Automatic Wrapper Induction from Hidden-Web Sources with Domain Knowledge

P. Senellart, A. Mittal, D. Muschick, R. Gilleron, M. Tommasi

University of Oxford, 5 May 2009

(2)

The Hidden Web

Definition (Hidden Web, Deep Web, Invisible Web) All the content on the Web that is not directly accessible through hyperlinks. In particular: HTML forms, Web services.

(3)

Sources of the Hidden Web

Example

Yellow Pages and other directories;

Library catalogs;

Weather services;

US Census Bureau data;

etc.

(4)

Discovering Knowledge from the Deep Web

The content of the deep Web is hidden from classical Web search engines (they just follow links),

but it is very valuable and of high quality!

Even services also accessible through the surface Web (e.g., e-commerce) carry more semantics when accessed through the deep Web.

How to benefit from this information?

How to do it automatically, in an unsupervised way?

(5)

Extensional Approach

(Figure: extensional pipeline. Forms are discovered on the WWW, their content is siphoned, the retrieved data bootstraps further siphoning, and the results are indexed into an index.)

(6)

Notes on the Extensional Approach

Main issues:

Discovering services;

Choosing appropriate data to submit in forms;

Using data found in result pages to bootstrap the siphoning process;

Ensuring good coverage of the database.

Approach favored by Google, used in production. Not always feasible (huge load on Web servers).

(7)

Intensional Approach

(Figure: intensional pipeline. Forms are discovered on the WWW, probed, and analyzed; each form is wrapped as a Web service that can then be queried.)

(8)

Notes on the Intensional Approach

More ambitious. Main issues:

Discovering services;

Understanding the structure and semantics of a form;

Understanding the structure and semantics of result pages;

Semantic analysis of the service as a whole.

No significant load imposed on Web servers

(9)

General architecture

(Figure: general architecture. A form URL from the WWW undergoes syntactic analysis, producing candidate annotations; probing, guided by domain concepts, a general ontology, and domain instances, turns these into confirmed annotations; response pages are analyzed, and WSDL generation finally wraps the form as a WSDL-described Web service that answers user queries with response pages.)

(10)

1 Motivation

2 Probing

3 Two-Step Wrapper Induction

4 Experiments

5 Conclusion

(11)

Forms

Analyzing the structure of HTML forms.

(Example form fields: Authors, Title, Year, Page, Conference, ID, Journal, Volume, Number, Maximum of matches.)

Goal

Associating with each form field the appropriate domain concept.

(12)

First Step: Structural Analysis

1 Build a context for each field:

label tag;

id and name attributes;

text immediately before the field.

2 Remove stop words, stem.

3 Match this context with the concept names, extended with WordNet.

4 Obtain in this way candidate annotations.
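The four steps above can be sketched as follows. This is a minimal sketch: the stop-word list, the crude suffix stemmer, and the CONCEPTS synonym table are illustrative stand-ins for real stop-word removal, a Porter stemmer, and WordNet expansion.

```python
import re

# Hypothetical concept -> synonyms table (WordNet-extended, illustrative only)
CONCEPTS = {
    "author": {"author", "writer", "creator"},
    "title": {"title"},
    "year": {"year", "date"},
}
STOP_WORDS = {"the", "a", "of", "by", "enter", "search"}

def stem(word):
    # Crude suffix stripping; a real system would use a proper stemmer.
    return re.sub(r"(ing|ed|s)$", "", word)

def candidate_annotations(field):
    """Return concepts whose (stemmed) names match the field's context."""
    # Step 1: build the context from label, id/name attributes, preceding text.
    context = " ".join([field.get("label", ""),
                        field.get("id", ""),
                        field.get("name", ""),
                        field.get("text_before", "")])
    # Step 2: tokenize, drop stop words, stem.
    tokens = {stem(t) for t in re.findall(r"[a-z]+", context.lower())
              if t not in STOP_WORDS}
    # Steps 3-4: match against (stemmed) concept synonyms.
    return {c for c, syns in CONCEPTS.items()
            if tokens & {stem(s) for s in syns}}

print(candidate_annotations(
    {"label": "Author(s)", "name": "auth_name", "text_before": "Search by"}))
# → {'author'}
```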


Second Step: Confirm Annotations with Probing

For each field annotated with a concept c:

1 Probe the field with a nonsense word to get an error page.

2 Probe the field with instances of c (chosen to be representative of the frequency distribution of c).

3 Compare pages obtained by probing with the error page (by using clustering along the DOM tree structure of the pages), to distinguish error pages and result pages.

4 Confirm the annotation if enough result pages are obtained.
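The comparison and confirmation steps can be sketched as follows, under a simplifying assumption: pages are compared by the set of root-to-leaf tag paths in their DOM trees (a stand-in for the clustering described above), and the similarity threshold and minimum result count are hypothetical parameters.

```python
from html.parser import HTMLParser

class PathCollector(HTMLParser):
    """Collect the set of tag paths occurring in an HTML page."""
    def __init__(self):
        super().__init__()
        self.stack, self.paths = [], set()
    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)
        self.paths.add("/".join(self.stack))
    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

def dom_paths(html):
    p = PathCollector()
    p.feed(html)
    return p.paths

def similarity(a, b):
    # Jaccard similarity of the two pages' DOM path sets.
    pa, pb = dom_paths(a), dom_paths(b)
    return len(pa & pb) / len(pa | pb) if pa | pb else 1.0

def confirm(error_page, probe_pages, threshold=0.9, min_results=3):
    """Confirm the annotation if enough probes return pages that
    differ structurally from the error page."""
    results = [p for p in probe_pages
               if similarity(p, error_page) < threshold]
    return len(results) >= min_results

error = "<html><body><p>No results found</p></body></html>"
result = "<html><body><table><tr><td>Title A</td></tr></table></body></html>"
print(confirm(error, [result] * 3))  # structurally different → confirmed
```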


(20)

1 Motivation

2 Probing

3 Two-Step Wrapper Induction

4 Experiments

5 Conclusion

(21)

Result Pages

Pages resulting from a given form submission:

share the same structure;

set of records with fields;

unknown presentation!

Goal

Building wrappers for a given kind of result page, in a fully automatic, unsupervised way.

Simplification: restriction to a domain of interest, with some domain knowledge.

(22)

Different Approaches to Information Extraction

(Chang et al., TKDE 2006)

(Figure: taxonomy of information-extraction systems, from GUI-driven and supervised wrapper-induction systems, where a user labels test pages, to semi-supervised and unsupervised systems that work from unlabeled training Web pages.)

(23)


Annotation by domain knowledge

Automatic pre-annotation with domain knowledge (gazetteer):

Entity recognizers for dates, person names, etc.

Titles of articles, conference names, etc.: those that are in the knowledge base.
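A minimal gazetteer sketch: a regex-based date recognizer plus dictionary lookup against the knowledge base. KNOWN_TITLES is a hypothetical stand-in for titles drawn from a base such as DBLP.

```python
import re

# Illustrative stand-in for a knowledge base of known article titles.
KNOWN_TITLES = {"automatic wrapper induction"}
DATE = re.compile(r"\b(?:19|20)\d{2}\b")  # toy entity recognizer for years

def pre_annotate(text):
    """Return (span, concept) pre-annotations found in a text node."""
    annotations = []
    for m in DATE.finditer(text):          # entity recognizer: dates
        annotations.append((m.group(), "date"))
    if text.strip().lower() in KNOWN_TITLES:  # dictionary lookup: titles
        annotations.append((text.strip(), "title"))
    return annotations

print(pre_annotate("VLDB 2008"))  # the year is recognized as a date
```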


(27)

Unsupervised Wrapper Induction

Use the pre-annotation as the input of a structural supervised machine learning process.

Purpose: remove outliers, generalize incomplete annotations.

(Figure: example annotated DOM tree. A table node is annotated /articles; each tr is /article; within it, a td annotated /title contains tokens annotated /title over #text, and a td annotated /authors contains tokens annotated /author over #text.)
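How outlier removal and generalization can work is sketched below, under a strong simplification: nodes are represented by their tag paths, the dominant path among pre-annotated nodes is kept as a structural rule, and the rule is reapplied to every record. The data and the path notation are illustrative, not the actual learning step.

```python
from collections import Counter

def generalize(annotated_paths, all_nodes):
    """annotated_paths: tag paths of nodes the gazetteer tagged as /title.
    all_nodes: (path, text) pairs for the whole page."""
    dominant, _ = Counter(annotated_paths).most_common(1)[0]
    # Outliers (annotations off the dominant path) are dropped;
    # unannotated nodes on the dominant path are recovered.
    return [text for path, text in all_nodes if path == dominant]

nodes = [("table/tr/td[1]", "Title A"),
         ("table/tr/td[1]", "Title B"),
         ("table/tr/td[1]", "Title C"),
         ("table/tr/td[2]", "Smith")]
# The gazetteer found only two titles, plus one spurious match in td[2]:
print(generalize(["table/tr/td[1]", "table/tr/td[1]", "table/tr/td[2]"],
                 nodes))
# → ['Title A', 'Title B', 'Title C']
```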

(28)

Conditional Random Fields

Generalization of hidden Markov models.

Probabilistic discriminative model: models the probability of an annotation given an observable (as opposed to generative models).

Graphical model: every annotation can depend on the neighboring annotations (as well as the observable);

dependencies measured through (Boolean or integer) feature functions.

Features are automatically assigned a weight and combined.
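The model described above takes the standard CRF form, written here for a general factorization over cliques (this is the textbook formula, not the specific XCRF parameterization):

```latex
p(y \mid x) \;=\; \frac{1}{Z(x)} \exp\Big( \sum_{c} \sum_{k} \lambda_k\, f_k(y_c, x) \Big),
\qquad
Z(x) \;=\; \sum_{y'} \exp\Big( \sum_{c} \sum_{k} \lambda_k\, f_k(y'_c, x) \Big)
```

where the f_k are the feature functions over a clique c of neighboring annotations (plus the observable x), and the λ_k are the automatically learned weights.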

(29)

Conditional Random Fields for XML (XCRF)

Observables: various structural and content-based features of nodes (tag names, tag names of ancestors, type of textual content. . . ).

Annotations: domain concepts assigned to nodes of the tree.

Tree probabilistic model:

models dependencies between annotations;

conditional independence: annotations of nodes only depend on their neighbors (and on observables).

(Figure: tree-shaped graphical model over annotation variables Y1 to Y8.)

Most discriminative features selected.

(30)

Architecture

(Figure: wrapper-induction architecture. Corpus pages undergo pretreatment (preprocessing, tokenization) and are annotated by the gazetteer; features are extracted, outliers removed, and LCA analysis applied as posttreatment; an XCRF is trained on the result and then used to extract data, which in turn enriches the gazetteer.)

(31)

1 Motivation

2 Probing

3 Two-Step Wrapper Induction

4 Experiments

5 Conclusion

(32)

Experimental Setup

10 services of research publication databases.

Domain knowledge extracted from DBLP.

Forms analyzed and probed (5 probes per field and candidate annotation).

Induction of wrappers from a training (unannotated) set of result pages, and evaluation of the wrappers on a test set of result pages.

(33)

Results for form analysis

            Initial annot.      Confirmed annot.
            p (%)   r (%)       p (%)   r (%)
Average     49      73          82      73

Good precision and recall.

Probing raises precision without hurting recall.

Remark

Much better results for distinguishing error and result pages by clustering according to the paths in the DOM tree than with previous approaches.

(34)

Results for wrapper induction

            Title           Author          Date
            Fg      Fx      Fg      Fx      Fg      Fx
Average     44      63      64      70      85      76

Fg: F-measure (%) of the annotation by the gazetteer.

Fx: F-measure (%) of the annotation by the induced wrapper.
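The F-measures above combine precision and recall harmonically; a quick sketch, using as an illustration the confirmed-annotation averages (p = 82%, r = 73%) from the form-analysis slide:

```python
def f_measure(p, r):
    """Harmonic mean of precision p and recall r."""
    return 2 * p * r / (p + r)

print(round(f_measure(0.82, 0.73), 2))  # → 0.77
```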

(35)

1 Motivation

2 Probing

3 Two-Step Wrapper Induction

4 Experiments

5 Conclusion

(36)


Summary

Important point

It is indeed possible to use content and structure together for automatic, unsupervised, information extraction!

better than content only (gazetteer);

better than structure only (RoadRunner).

Content is used to bootstrap a structure-based learning:

isn't that what humans do to understand the structure of such pages?

Not perfect (yet); it should be possible to get much better!

(38)

Perspectives

More intelligent gazetteer: use NL features to extract noun phrases that look like titles?

A machine learning framework adapted to a noisy and incomplete annotation, without overfitting: minimum description length?

Exploit probabilities that come with learned features to produce

(39)

Références

Documents relatifs

3 Compare pages obtained by probing with the error page (by using clustering along the DOM tree structure of the pages), to distinguish error pages and result pages. 4 Confirm

Compare pages obtained by probing with the error page (by clustering along the DOM tree structure of the pages), to distinguish error pages and result pages.. Confirm the annotation

Compare pages obtained by probing with the error page (by clustering along the DOM tree structure of the pages), to distinguish error pages and result pages?. Confirm the annotation

Compare pages obtained by probing with the error page (by clustering along the DOM tree structure of the pages), to distinguish error pages and result pages.. Confirm the annotation

Compare pages obtained by probing with the error page (by clustering along the DOM tree structure of the pages), to distinguish error pages and result pages.. Confirm the annotation

Jambon à l’os, jambon fumé, rosette, saumon fumé, mousse de thon, fromage frais aux fines herbes.

Ce document est une synthèse du rapport sur le prix et la qualité du service public de gestion des déchets disponible dans sa version complète sur notre site internet. ❶ I DENTITÉ

Ce document est une synthèse du rapport sur le prix et la qualité du service public de gestion des déchets disponible dans sa version complète sur notre site internet Réalisé