


Introduction to Interpretability in Machine Learning

Adrien Bibal adrien.bibal@unamur.be

Benoît Frénay benoit.frenay@unamur.be

PReCISE, Faculty of Computer Science, University of Namur, rue Grandgagnage 21, 5000 Namur, Belgium

Keywords: interpretability, comprehensibility, measures, machine learning models

Interpretability is considered important in the literature. Yet, measuring interpretability seems challenging due to its subjective nature. This abstract presents the survey of the literature by Bibal and Frénay (2016). Several terms are used in the literature alongside interpretability. Usability is one of them: a model is not usable if it is rejected despite an acceptable accuracy. In the medical domain, Freitas (2014) noted that a simple, easy-to-read decision tree can be rejected because medical doctors may consider that such a simple model cannot represent complex medical situations. Usability therefore depends on the interpretability of the model to be measured. The same situation can be observed with justifiability (Martens et al., 2011), which bridges the gap between the description of the data made by the model and the knowledge of the application domain: “does the model justify (or correspond to) the existing knowledge of the domain?”.

Interpretability is more fundamental and corresponds to the ability of a human to comprehend (Giraud-Carrier, 1998) or to understand (Rüping, 2006) the model. Two ways to handle the measure of interpretability are proposed in the literature.

On the one hand, heuristics correspond to approximations of human understanding made by the machine learning researcher (Rüping, 2006), e.g. the model complexity. This approach is easy to formalise, but can hardly compare models of different types. For instance, one cannot compare the number of nodes of a decision tree with those of a neural network, as the code sketch below illustrates.

On the other hand, users can be considered in the evaluation of human comprehensibility. Some authors work with user-based surveys to assess the interpretability of models (Allahyari & Lavesson, 2011). In this methodology, measuring interpretability consists in asking questions such as “do you find this model understandable?” or “is this model more understandable than that one?”. This goes beyond the limits of the heuristics approach, as users can compare models of distinct types. However, the user-based approach is limited by the difficulty of quantifying interpretability from survey answers and by the existence of different model representations.
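To make the heuristics approach concrete, the following sketch (ours, not from the abstract) computes two common complexity proxies: the node count of a decision tree and the number of support vectors of an SVM. The choice of scikit-learn, the synthetic data, and the two proxies are our assumptions for illustration only; the point is that the resulting counts live in incommensurable units, which is precisely why heuristics struggle to compare models of different types.

```python
# Illustrative sketch (assumes scikit-learn): complexity heuristics
# for two model types. The two counts are NOT directly comparable.
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
svm = SVC(kernel="rbf").fit(X, y)

# Heuristic 1: size of the decision tree (number of nodes).
tree_complexity = tree.tree_.node_count

# Heuristic 2: number of support vectors of the SVM.
svm_complexity = svm.n_support_.sum()

print(f"decision tree: {tree_complexity} nodes")
print(f"SVM: {svm_complexity} support vectors")
# A node and a support vector measure "complexity" in different
# units, so ranking these two models by the raw counts is meaningless.
```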

Bibal and Frénay (2016) highlight gaps in the literature. First, there are almost no links between the two approaches: the heuristics approach does not use the user-based approach for validation, and the user-based approach does not try to extract heuristics that could be used to quantify interpretability. This is mostly due to the lack of research on user-based surveys. Second, models considered as black boxes are neglected. The measure of interpretability mostly deals with white boxes (e.g. decision trees and rule lists). However, one could argue that a very simple SVM may be more interpretable than a very complex decision tree. The measure of interpretability needs more investigation, and a grey-scale measure taking advantage of the two approaches presented here could be developed; a toy illustration of such a blend follows.
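The grey-scale measure is only sketched as a direction in the abstract. As a purely hypothetical illustration of what a blend of the two approaches could look like, one could normalise a complexity heuristic to [0, 1] and combine it with a win rate extracted from pairwise survey answers of the form “is model A more understandable than model B?”. Every function name, weight, and number below is our invention, not a method from the paper.

```python
# Hypothetical sketch of a "grey-scale" interpretability score that
# blends the heuristics and user-based approaches discussed above.
# The normalisation and the weighting are our own illustrative choices.

def heuristic_score(complexity, max_complexity):
    """Map a complexity heuristic to [0, 1]; 1 = simplest model."""
    return 1.0 - min(complexity / max_complexity, 1.0)

def survey_score(wins, comparisons):
    """Fraction of pairwise comparisons the model won, in [0, 1]."""
    return wins / comparisons if comparisons else 0.0

def grey_scale(complexity, max_complexity, wins, comparisons, alpha=0.5):
    """Convex combination of the heuristic and user-based scores."""
    return (alpha * heuristic_score(complexity, max_complexity)
            + (1 - alpha) * survey_score(wins, comparisons))

# A small tree (15 nodes) judged more understandable in 18/20 pairwise
# comparisons vs. a large tree (400 nodes) winning only 2/20.
print(grey_scale(15, 500, 18, 20))   # high interpretability score
print(grey_scale(400, 500, 2, 20))   # low interpretability score
```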

References

Allahyari, H., & Lavesson, N. (2011). User-oriented assessment of classification model understandability. Proc. SCAI (pp. 11–19).

Bibal, A., & Frénay, B. (2016). Interpretability of machine learning models and representations: an introduction. Proc. ESANN 2016 (pp. 77–82).

Freitas, A. A. (2014). Comprehensible classification models: a position paper. ACM SIGKDD Explorations Newsletter, 15, 1–10.

Giraud-Carrier, C. (1998). Beyond predictive accuracy: what? Proc. ECML (pp. 78–85).

Martens, D., Vanthienen, J., Verbeke, W., & Baesens, B. (2011). Performance of classification models from a user perspective. Dec. Support Syst., 51, 782–793.

Rüping, S. (2006). Learning interpretable models. PhD thesis, University of Dortmund.
