Workshop preface

XCBR: Second Workshop on Case-Based Reasoning for the Explanation of Intelligent Systems.

Workshop at the 27th International Conference on Case-Based Reasoning (ICCBR 2019)

September 2019, Otzenhausen, Germany

Belén Díaz-Agudo, Juan A. Recio-García, Ian Watson

Co-Chairs

Belén Díaz-Agudo, Complutense University of Madrid, Spain
Juan A. Recio-García, Complutense University of Madrid, Spain
Ian Watson, University of Auckland, New Zealand

Programme Committee

Derek Bridge, University College Cork, Ireland
Stelios Kapetanakis, University of Brighton, UK
David Leake, Indiana University, USA
Santiago Ontañón, Drexel University, USA
Antonio A. Sánchez Ruiz-Granados, UCM, Spain
Barry Smyth, University College Dublin, Ireland
David W. Aha, Naval Research Laboratory, USA


Preface

The problem of explainability in Artificial Intelligence is not new, but the rise of autonomous intelligent systems has created the need to understand how these systems reach a solution, make a prediction or a recommendation, or reason to support a decision, in order to increase users' trust in them. The goal of Explainable Artificial Intelligence (XAI) is to create a suite of new or modified AI techniques that produce explainable models which, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of Artificial Intelligence systems. Case-Based Reasoning (CBR) systems have prior experience with interactive explanations and with exploiting memory-based techniques to generate them, experience that can be successfully applied to the explanation of other AI techniques.

The workshop program includes an invited talk by David Aha introducing what is happening in XAI today. We have also organized two sessions with five presentations and a discussion panel. This year most of the contributions address the practical experience of using CBR for explanation in recommender systems, covering issues such as explanation interfaces, evaluation and applications.

In addition, one contribution summarizes recent trends in XAI, providing an overview of current approaches, methodologies and interactions.

We wish to thank all who contributed to the success of this workshop, especially the authors, the Programme Committee, and the editors of the workshop proceedings!

Belén Díaz-Agudo
Juan A. Recio-García
Ian Watson

September 2019
