
Relational Machine Learning

Academic year: 2022

Full text


Filip Železný

Czech Technical University in Prague, Czech Republic

Abstract. I will explain the main concepts of relational machine learning, or more precisely, those parts of it employing logic as the knowledge-representation formalism.

The talk will not cover other relational approaches such as graph mining. I will follow what I consider the three main stages of the field's historical development. First, I will visit the roots of relational learning in the area of inductive logic programming (ILP). Here, one learns logical theories from examples, formalizing the problem as search in a clause subsumption lattice. A newer stream of research called statistical relational learning extended the logical underpinnings with probabilistic inference. I will illustrate this with an example of a logical graphical probabilistic model. Most recently, relational learning has received a new impetus from the current revival of (deep) neural networks.
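As an illustrative sketch (not part of the talk itself): the ordering underlying the ILP search lattice is θ-subsumption, where clause C subsumes clause D if some substitution θ maps every literal of Cθ onto a literal of D. A minimal brute-force checker, representing clauses as lists of (predicate, args) literals and following the Prolog convention that uppercase names are variables:

```python
def is_var(term):
    # Prolog convention: terms starting with an uppercase letter are variables
    return isinstance(term, str) and term[:1].isupper()

def subsumes(c, d):
    """Brute-force theta-subsumption: does some substitution theta map
    every literal of clause c onto some literal of clause d?"""
    def match(lit_c, lit_d, theta):
        # Try to extend theta so that lit_c * theta == lit_d; return None on failure.
        pred_c, args_c = lit_c
        pred_d, args_d = lit_d
        if pred_c != pred_d or len(args_c) != len(args_d):
            return None
        theta = dict(theta)
        for a, b in zip(args_c, args_d):
            if is_var(a):
                if theta.get(a, b) != b:   # variable already bound elsewhere
                    return None
                theta[a] = b
            elif a != b:                   # constant mismatch
                return None
        return theta

    def search(lits, theta):
        # Map each remaining literal of c onto some literal of d, backtracking on failure.
        if not lits:
            return True
        for lit_d in d:
            theta2 = match(lits[0], lit_d, theta)
            if theta2 is not None and search(lits[1:], theta2):
                return True
        return False

    return search(list(c), {})

# father(X, Y) subsumes {father(tom, bob), male(tom)} via X->tom, Y->bob
c = [("father", ("X", "Y"))]
d = [("father", ("tom", "bob")), ("male", ("tom",))]
print(subsumes(c, d))  # True
```

The exponential backtracking here reflects the actual complexity of the problem: θ-subsumption is NP-complete, which is one reason ILP systems restrict or bias the search through the lattice.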

I will exemplify some promising crossovers of the two fields, including the paradigm of Lifted Relational Neural Networks conceived in my lab.
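One well-known instance of a logical graphical probabilistic model is a Markov logic network, where weighted first-order clauses define an unnormalized distribution exp(Σ wᵢ nᵢ) over possible worlds, with nᵢ the number of true groundings of clause i. A minimal sketch, with the clause, weight, and domain invented purely for illustration:

```python
import itertools
import math

# Hypothetical weighted clause: smokes(X) -> cancer(X), weight 1.5,
# over the two-person domain {anna, bob}.
domain = ["anna", "bob"]
weight = 1.5

def clause_true(world, person):
    # smokes(p) -> cancer(p) fails only when smokes(p) holds and cancer(p) does not
    return not (world[("smokes", person)] and not world[("cancer", person)])

def unnormalized(world):
    # exp(weight * number of true groundings of the clause)
    n_true = sum(clause_true(world, p) for p in domain)
    return math.exp(weight * n_true)

# Enumerate all 2^4 truth assignments to the ground atoms to get the partition function Z.
atoms = [(pred, p) for pred in ("smokes", "cancer") for p in domain]
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
Z = sum(unnormalized(w) for w in worlds)

# Probability that anna smokes but does not have cancer (the clause-violating case).
p = sum(unnormalized(w) for w in worlds
        if w[("smokes", "anna")] and not w[("cancer", "anna")]) / Z
print(round(p, 4))  # 0.0692
```

Note how the weight softens the clause: the violating world keeps nonzero probability, unlike in pure logic, and raising the weight drives that probability toward zero.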
