
Preface


Augmenting Intelligence with Humans-in-the-Loop (HumL@ISWC2018)

Anna Lisa Gentile¹, Lora Aroyo², Gianluca Demartini³, and Chris Welty⁴

¹ IBM Research Almaden, CA, US, annalisa.gentile@ibm.com

² Vrije Universiteit Amsterdam, lmaroyo@gmail.com

³ University of Queensland, Australia, demartini@acm.org

⁴ Google Research, US, cawelty@gmail.com

This volume contains the papers presented at HumL@ISWC2018, the second international workshop on Augmenting Intelligence with Humans-in-the-Loop, held on October 9, 2018 in Monterey, CA in conjunction with the 17th International Semantic Web Conference (http://iswc2018.semanticweb.org/). Humans-in-the-loop is a model of interaction in which a machine process and one or more humans interact iteratively. In this paradigm the user can heavily influence the outcome of the process by providing feedback to the system, and also has the opportunity to gain different perspectives on the underlying domain and to understand, step by step, the machine process that leads to a given outcome.

Amongst the current major concerns in Artificial Intelligence research are being able to explain and understand results, as well as avoiding bias in the underlying data that might lead to unfair or unethical conclusions. Computers are typically fast and accurate at processing vast amounts of data; people, on the other hand, are creative and bring in their own perspectives and interpretation power.

Bringing humans and machines together creates a natural symbiosis for accurate and unbiased interpretation of data at scale. The goal of this workshop is to bring together researchers and practitioners in various areas of AI and the Semantic Web to explore new pathways of the humans-in-the-loop paradigm.

The workshop program covers diverse application domains and problems in which humans-in-the-loop approaches have been studied. Example applications include language models for dictionary expansion, hiring decisions, knowledge graph curation, and active learning, which is a prime example of approaches that aim at optimally combining humans and algorithms. Other aspects considered in our proceedings are the interaction of multiple humans when supporting algorithmic decision making and the selection of the right compensation level for humans involved in data annotation tasks.
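To make the paradigm concrete, the following sketch shows one common instantiation of it, pool-based active learning with uncertainty sampling, in which a model repeatedly asks a human annotator to label the example it is least confident about. It is an illustrative example only, not taken from any of the accepted papers; the ask_human function is a hypothetical placeholder for the human oracle and here simply reveals the known label of a synthetic dataset.

```python
# Minimal sketch of a humans-in-the-loop active learning cycle (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic pool of examples; in practice these would be unlabelled data items.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Seed set: a few examples from each class that the human has already labelled.
labelled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in set(labelled)]

def ask_human(idx):
    """Placeholder for the human annotator; here the true label is simply revealed."""
    return y[idx]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                                  # 20 rounds of interaction
    model.fit(X[labelled], y[labelled])              # machine step: retrain on current labels
    probs = model.predict_proba(X[pool])
    uncertainty = 1.0 - probs.max(axis=1)            # least-confident prediction first
    query = pool.pop(int(np.argmax(uncertainty)))    # item the model is most unsure about
    y[query] = ask_human(query)                      # human step: feedback steers the model
    labelled.append(query)
```

In a real deployment the queried label would come from an annotation interface rather than from the ground truth, and the loop would stop once the annotation budget is exhausted or the model reaches the desired accuracy.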


Table of Contents

Crowd-sourcing Updates of Large Knowledge Graphs . . . 1
Albin Ahmeti, Victor Mireles, Artem Revenko, Marta Sabou and Martin Schauer

Interactive Dictionary Expansion using Neural Language Models . . . 7
Alfredo Alba, Daniel Gruhl, Petar Ristoski and Steve Welch

Making Better Job Hiring Decisions using "Human in the Loop" Techniques . . . 16
Christopher G. Harris

Use of internal testing data to help determine compensation for crowdsourcing tasks . . . 27
Michael Lauruhn, Paul Groth, Corey Harper and Helena Deus

Rank Scoring via Active Learning (RaScAL) . . . 38
Jack O'Neill, Sarah Jane Delany and Brian Mac Namee

ESO-5W1H Framework: Ontological model for SITL paradigm . . . 51
Shubham Rathi and Aniket Alam


Program Committee

Lora Aroyo, Vrije Universiteit Amsterdam
Michele Catasta, Stanford University
Irene Celino, CEFRIEL
Anni Coden, IBM TJ Watson Research Center
Philippe Cudre-Mauroux, University of Fribourg
Gianluca Demartini, The University of Queensland
Djellel Difallah, NYU
Giorgio Maria Dinunzio, University of Padua
Anca Dumitrache, Vrije Universiteit Amsterdam
Anna Lisa Gentile, IBM
Daniel Gruhl, IBM Almaden Research Center
Oana Inel, Vrije Universiteit Amsterdam
Ismini Lourentzou, University of Illinois at Urbana-Champaign
Marta Sabou, Vienna University of Technology
Cristina Sarasua, University of Zurich
Maja Vukovic, IBM
Chris Welty, Google
Jie Yang, eXascale Infolab, University of Fribourg
Amrapali Zaveri, Maastricht University
