
Introducing Web Fragments: An exploration of web archives beyond the webpages


(1)

Quentin Lobbé (LTCI, Télécom ParisTech & Inria Paris) DBWeb seminar – May 31, 2017

Introducing Web Fragments

An exploration of web archives beyond the webpages

(2)

10,000 migrant websites crawled, categorized and organized into 30 e-diasporas

The e-diasporas Atlas

> A collection of online migrant collectives

A migrant website is a website created or managed by migrants and/or one that deals with them

An e-Diaspora is a directed network of migrant websites linked by URLs

[Network figure: example websites yabiladi.com, larbi.org, marocainsdumonde.gov.ma, moroccoboard.com]

(3)

The e-diasporas Atlas

> A tool for sociological analysis

yabiladi.com

(4)

The e-diasporas Atlas

> A tool for sociological analysis

yabiladi.com

[Figure legend: Associations, Institutions, Blogs]

(5)

Facing the evolution of e-Diasporas ...

> death of blogs

> new links (yabiladi.com in the network figure)

> new website

> alternative spaces of expression

(6)

… we built a corpus of web archives

> To keep a trace of the evolution of websites

[Diagram: at time 1, pages 1 and 2 are archived as record 1; at time 2, pages 1, 2 and 3 are archived as record 2]

> Our corpus is a 70 TB web archive, categorized by e-diaspora corpus, crawled weekly or monthly between 2010 and 2015, and hosted at the INA

(7)

Our original research questions

> Considering the e-Diasporas archived corpus

Are the structure and content of the archived e-Diasporas permeable to the effects of shocks and external events such as political and social mobilizations?

> Considering any archived corpus

How can we follow traces through web archives in order to study a given event and its genesis, restoring it in the dual temporality of the web and the real world?

(8)

> focusing on the particular case of yabiladi.com

The naive approach

a hub at the center of the network

yabiladi.com

an old and hybrid website (forum, news, dating, videos), online since 2002

> 2.8 million archived pages

(9)

The naive approach

> considering all the archived pages as traces of activity on the website

[Plot: number of new archived pages per day]

> Are those peaks and valleys relevant?

(10)

The naive approach

> considering all the archived pages as traces of activity on the website

(11)

Web archives are not direct traces of the web

> We saw what we call a crawl legacy effect

[Diagram: the continuous Web is sampled into discrete archives by downloads at dates 1, 2 and 3]

> web archives should be considered as direct traces of the crawler

(12)

We propose to conduct an exploratory analysis of web archives that goes beyond the level of the webpage

To avoid the crawl legacy effect

(13)

The original scale of web archives is the webpage

> what can we learn from the structure of web archive files?

[Diagram: the structure of .WARC and .DAFF archive files; each record stores a download date, a crawler date, metadata and the HTML content for successive snapshots t1 and t2 (the second .DAFF record shown keeps only the dates and metadata)]

> by definition, web archives are built on top of webpages
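> Illustration (not from the talk): a minimal Python sketch, assuming the warcio library and plain .WARC input (the INA's .DAFF format is not covered), that lists each archived page with its crawl date

# Minimal sketch: list archived HTML pages and their crawl dates from a .WARC file.
from warcio.archiveiterator import ArchiveIterator

def list_archived_pages(warc_path):
    """Yield (url, crawl_date, html) for every HTML response record in the archive."""
    with open(warc_path, 'rb') as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != 'response':
                continue
            content_type = record.http_headers.get_header('Content-Type', '') if record.http_headers else ''
            if 'html' not in content_type:
                continue
            url = record.rec_headers.get_header('WARC-Target-URI')
            crawl_date = record.rec_headers.get_header('WARC-Date')  # the crawler's date, not the edition date
            yield url, crawl_date, record.content_stream().read()

if __name__ == '__main__':
    for url, crawl_date, html in list_archived_pages('yabiladi.warc.gz'):  # hypothetical file name
        print(crawl_date, url, len(html), 'bytes')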

(14)

Archiving is all about selecting and destroying

> "Boulevard du Temple", Louis Daguerre, 1838

> as webpages change over time

> structural changes: move, copy, delete, insert, update …

> attribute changes: CSS, font …

> type changes: <div> to <p>

> semantic changes

(15)

Archiving on top of webpages goes with many challenges

> Crawler blindness and archive quality

[Timeline diagram: edition dates, download dates, crawler dates and the archived periods they define]

> Web archiving goes with construction locks

(16)

Archiving on top of webpages goes with many challenges

> Archive consistency across pages

> Web archiving goes with navigation locks

[Diagram: p1 and p2 change at different moments; in the p1 & p2 archives, the href from p1 to p2 may point to an inconsistent version of p2]

(17)

Archiving on top of webpages goes with many challenges

> Pages with archive-like content

> Archiving goes with discrete and continuous interpretation locks

[Diagram: the continuous changes of p1 versus its discrete p1 archives]

(18)

To face or reduce these challenges

We propose to build a new entity based on web archives, called the web fragment

[Diagram: a question mark between the metadata and the webpage, marking the entity to be defined]

(19)

The web fragment

> A structured part of a webpage with highly informative content

[Diagram: page data (crawler date, download date, metadata, page content) extended with fragment data (edition date, author, fragment content), e.g. an article, a comment, a news item]

> New structure for web archives
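> Illustration (not from the talk): the record layout above, written as a hypothetical data structure in Python; the field names come from the slide, the classes themselves are an assumption

# Hypothetical sketch of the fragment-level record structure described above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WebFragment:
    edition_date: Optional[str]   # when the content was written or edited (e.g. a comment's timestamp)
    author: Optional[str]         # who produced it
    content: str                  # the fragment's text: an article, a comment, a news item ...

@dataclass
class ArchivedPage:
    url: str
    crawler_date: str             # when the crawler scheduled the page
    download_date: str            # when the page was actually downloaded
    meta: dict                    # archive and HTTP metadata
    content: str                  # the raw HTML of the page
    fragments: List[WebFragment] = field(default_factory=list)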

(20)

Finding web fragments

> We must see a webpage as a front & back end object

<div id="com-" class="news news_plus" style="padding-top:0px;"> <div class="effet_special"> <div class="com-header"> <div class="comment-subject"> <div class="icone-comment iconuser_m" style=""> </div> </div> <div class="com-info"> <a style="font-weight:400;" href="">Kim</a> <br> <span class="com-auteur">08 mai 2017 à 12h28</span> </div> </div> <span id="nombre-" style="display:none;"></span> <div class="buzz"> </div> </div> <div class="com-content" id="content-comment8537568"> Blabla </div> <div style="foat:left;width:100%;"> </div> </div>

Front end (what the screen shows): Kim 08 mai 2017 à 12h28 Blabla

Back end: the HTML source above

Related works model a webpage as: a flat file, an unordered tree, or an ordered tree

(21)

Finding web fragments

<div id="com-8537568" class="new comment">

<div class="effet_special">

<div class="com-header">

<div class="com-info">

<a class="com-author" href="/profil/24368/kim.html">Kim</a>

<br>

<span class="com-date">le 08 mai 2017 à 12h28</span>

</div>

</div>

<span id="nombre-" style="display:none;"></span>

<div class="buzz">

</div>

</div>

<div class="com-content" id="content-comment8537568">

blabla

</div>

<div style="foat:left;width:100%;">

</div>

</div>

[Figure axes: depth (indentation) and sequence (line order)]

> A webpage is a 2D hierarchical list of HTML nodes

> Nodes are categorized as: title, author, date and text
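> Illustration (not from the talk): a minimal Python sketch, assuming BeautifulSoup and illustrative regex patterns rather than the actual extractor, that flattens a page into its 2D (depth, sequence) list of categorized nodes

# Minimal sketch: flatten a webpage into a 2D (depth, sequence) list of categorized nodes.
# The regex patterns are illustrative, not the talk's actual selection rules.
import re
from bs4 import BeautifulSoup, Tag

PATTERNS = {
    'title':  re.compile(r'h[1-6]|title', re.I),
    'author': re.compile(r'author|auteur|profil', re.I),
    'date':   re.compile(r'date|time', re.I),
    'text':   re.compile(r'content|comment|article|news|\bp\b', re.I),
}

def categorize(node):
    """Return 'title', 'author', 'date', 'text' or None, based on markup, class and id."""
    clues = ' '.join([node.name] + node.get('class', []) + [node.get('id') or ''])
    for category, pattern in PATTERNS.items():
        if pattern.search(clues):
            return category
    return None

def flatten(html):
    """Walk the DOM and yield (depth, sequence, category, text) for each categorized node."""
    soup = BeautifulSoup(html, 'html.parser')
    sequence = 0
    def walk(node, depth):
        nonlocal sequence
        for child in node.children:
            if not isinstance(child, Tag):
                continue
            sequence += 1
            category = categorize(child)
            if category:
                yield depth, sequence, category, child.get_text(' ', strip=True)
            yield from walk(child, depth + 1)
    return walk(soup, 0)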

(22)

Finding web fragments

1. Select nodes in the DOM   2. Group nodes into fragments   3. Group fragments into lists

> Nodes are incrementally grouped into web fragments using ad-hoc rules

[ ∪ text ] or [ text ∪ _text ] or [ title ∪ text ] or [ date ∪ _text ] or [ author ∪ date ] ...

Example: the node sequence

text text title author text author date text author date text

is first grouped into fragments

[ text text ]  [ title author text ]  [ author date text ]  [ author date text ]

and then into lists of fragments

[ text text ]
[ title author text ]
[ author date text ], [ author date text ]

> Algorithm

<h1 id="title" class="title_comment"> Hello archives </h1>

> Nodes are selected based on markup, class and id using regular expressions
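> Illustration (not from the talk): a minimal Python sketch of the incremental grouping, where a whitelist of allowed category transitions stands in for the ad-hoc rules (the exact rule set here is an assumption)

# Minimal sketch: incrementally group a sequence of categorized nodes into web fragments.
# ALLOWED encodes rules of the form [previous category ∪ next category]; it is illustrative.
ALLOWED = {
    ('text', 'text'),
    ('title', 'author'), ('title', 'text'),
    ('author', 'date'), ('author', 'text'),
    ('date', 'text'),
}

def group_into_fragments(categories):
    """Split a sequence of node categories into fragments, starting a new
    fragment whenever the transition from the previous category is not allowed."""
    fragments, current = [], []
    for category in categories:
        if current and (current[-1], category) not in ALLOWED:
            fragments.append(current)
            current = []
        current.append(category)
    if current:
        fragments.append(current)
    return fragments

# The node sequence from the slide ...
nodes = ['text', 'text', 'title', 'author', 'text',
         'author', 'date', 'text', 'author', 'date', 'text']
# ... yields [text, text], [title, author, text], [author, date, text], [author, date, text]
print(group_into_fragments(nodes))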

(23)

Rethinking archive challenges using web fragments

> Crawler blindness can be reduced and archive quality increased

[Timeline diagram: fragment edition dates 1 and 2 precede the download date]

Yabiladi's oldest fragments go back to 2003

> We introduce a more permissive archive consistency based on fragments and user requests

[Diagram: page 1 and page 2 linked by an href; each page carries a stable fragment, and page 2 also carries a new fragment]

(24)

Rethinking archive challenges using web fragments

> Pages with archive-like content are no longer a problem when web fragments are used as the base search unit

Identical copies share the same id (sha256)

Now let's see how we can concretely conduct an exploratory archive analysis ...

> Web fragments help us expand web archives beyond webpages
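> Illustration (not from the talk): one way to read 'sharing the same id (sha256)' is that a fragment's id is a hash of its stable fields, so the same comment or article archived in several snapshots collapses into a single search unit; which fields enter the hash is an assumption

# Minimal sketch: content-derived sha256 ids let identical fragments deduplicate across snapshots.
import hashlib

def fragment_id(author, edition_date, content):
    """Hash the fields assumed to be stable for a fragment (an assumption, not the talk's spec)."""
    key = '\x1f'.join([(author or '').strip(), (edition_date or '').strip(), content.strip()])
    return hashlib.sha256(key.encode('utf-8')).hexdigest()

def deduplicate(fragments):
    """Keep one copy per fragment id; each fragment is a dict with author / edition_date / content."""
    seen = {}
    for frag in fragments:
        fid = fragment_id(frag.get('author'), frag.get('edition_date'), frag['content'])
        seen.setdefault(fid, frag)
    return list(seen.values())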

(25)

Exploratory analysis of Web archives

> Following John Wilder Tukey's work

Acquire → Parse → Filter → Mine → Represent → Refine → Interpret

An iterative process that is deliberately part of a logic of observation, discovery and astonishment

(26)

Archives extraction engine

Acquire → Parse → Filter → Mine → Represent → Refine → Interpret

[Architecture diagram: a Crawler archives yabiladi.com into .DAFF files; the ArchiveMiner and the Fragments Extractor, supported by External Resources, turn each page record (crawler date, download date, metadata, page content) into fragment records (edition date, author, fragment content)]

> The Web Archives Explorer (part 1)

(27)

Archives exploration engine

Acquire → Parse → Filter → Mine → Represent → Refine → Interpret

[Architecture diagram: ArchiveSearch (full text, facets, n-grams) and ArchiveViz query an index of events and an index of pages & fragments, built from the page records (crawler date, download date, metadata, page content) and the fragment records (edition date, author, fragment content)]

> The Web Archives Explorer (part 2)

(28)

The validation of web fragments

Acquire → Parse → Filter → Mine → Represent → Refine → Interpret

> Using an event detection system

1. threshold-based detection (a minimal sketch follows at the end of this slide)
2. identification against the titles of news articles
3. field and expert interpretation

> Let's see the Web Archives Explorer in action

video presentation for CIKM2017
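> Illustration (not from the talk): a minimal sketch of step 1, flagging days whose count of newly edited fragments rises more than k standard deviations above the mean; the actual detector and threshold are not specified in the slides

# Minimal sketch of threshold-based detection over the daily series of new fragments.
from statistics import mean, stdev

def detect_event_days(daily_counts, k=3.0):
    """daily_counts maps 'YYYY-MM-DD' -> number of new fragments edited that day.
    Return the days whose count exceeds mean + k * standard deviation."""
    values = list(daily_counts.values())
    if len(values) < 2:
        return []
    threshold = mean(values) + k * stdev(values)
    return sorted(day for day, count in daily_counts.items() if count > threshold)

# Toy example: one day with a burst of new comments stands out against ordinary activity.
counts = {'2013-05-01': 40, '2013-05-02': 45, '2013-05-03': 400, '2013-05-04': 60}
print(detect_event_days(counts, k=1.0))  # -> ['2013-05-03']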

(29)

Going deeper through the definition of web fragment

> A more abstract & pluridisciplinary characterization of web fragments

structural dimension

time accuracy dimension

visual dimension

political dimension

diasporic dimension

psychological dimension

editing dimension

> More validation processes based on thematic workshops (such as event detection) and field interpretations

(30)

Thank you!

Questions?

Hard work in progress here!

Reading:

- information extraction
- digital edition

- history of the web

- the concept of Rhizome developed by Deleuze and Guattari
