
Invited Keynote Talk

Modeling Reasoning Mechanisms by Neural-Symbolic Learning

Kai-Uwe Kühnberger, University of Osnabrück, Germany, kkuehnbe@uos.de

Currently, neural-symbolic integration covers, at least in theory, a wide range of types of reasoning: neural representations (and partially also neural-inspired learning approaches) exist for modeling propositional logic (programs), whole classes of many-valued logics, modal logic, temporal logic, and epistemic logic, to mention just some important examples [2,4]. Besides these propositional variants of logical theories, first proposals also exist for approximating "infinity" with neural means, in particular theories of first-order logic. One example is the core method, which is intended to learn the semantics of the single-step operator T_P for first-order logic (programs) with a neural network [1]. Another example is the neural approximation of variable-free first-order logic by learning representations of arrow constructions (which represent logical expressions) in R^n using topos constructions [3].
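To make the role of the single-step operator concrete, the following minimal sketch (in Python) shows the classical propositional version of this idea, in the spirit of the Hölldobler-Kalinke construction that the core method generalizes: a three-layer network of binary threshold units computes T_P(I), the set of clause heads whose bodies are satisfied by an interpretation I. The example program, weights, and function names are illustrative assumptions and do not reproduce the first-order model generator of [1].

from itertools import product

# A program is a list of clauses (head, positive_body, negative_body),
# read as: head <- b1, ..., bm, not c1, ..., not cn.
PROGRAM = [
    ("q", ["p"], []),          # q <- p
    ("r", ["q"], ["s"]),       # r <- q, not s
    ("p", [], []),             # p <-  (a fact)
]
ATOMS = sorted({a for h, pos, neg in PROGRAM for a in [h] + pos + neg})

W = 1.0  # any positive weight works for binary threshold units

def tp_network(interpretation):
    """One pass through the network: the input layer encodes the
    interpretation, the hidden layer has one threshold unit per clause,
    the output layer has one unit per atom. Returns T_P(I) as a set."""
    # Hidden layer: a clause unit fires iff its whole body is satisfied.
    hidden = []
    for head, pos, neg in PROGRAM:
        net = W * sum(a in interpretation for a in pos) \
            - W * sum(a in interpretation for a in neg)
        hidden.append((head, net >= W * (len(pos) - 0.5)))
    # Output layer: an atom unit fires iff some clause with that head fired.
    out = set()
    for atom in ATOMS:
        net = W * sum(fired for head, fired in hidden if head == atom)
        if net >= 0.5 * W:
            out.add(atom)
    return out

def tp_symbolic(interpretation):
    """Reference definition of the single-step operator T_P."""
    return {h for h, pos, neg in PROGRAM
            if all(a in interpretation for a in pos)
            and not any(a in interpretation for a in neg)}

# The hand-built network and the symbolic operator agree everywhere.
for bits in product([False, True], repeat=len(ATOMS)):
    I = {a for a, b in zip(ATOMS, bits) if b}
    assert tp_network(I) == tp_symbolic(I)
print("network computes T_P on all", 2 ** len(ATOMS), "interpretations")

The final check confirms that the hand-constructed network and the symbolic definition of T_P coincide on every interpretation of the example program; the learning-based approaches of [1] replace such hand constructions with trained approximations for (covered) first-order programs.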

Although these examples show a certain success of neural-symbolic learning and reasoning research, several non-trivial challenges remain. First, there exists a variety of frameworks that rest on rather different and seemingly incompatible foundations. Second, application domains in which the strengths of neural-symbolic integration could be documented and its potential demonstrated are not well established. Third, the conceptual understanding of the cognitive relevance and the cognitive plausibility of neural-symbolic learning and reasoning needs to be clarified. In this talk, I will address these questions and propose some ideas for answers. I will sketch general assumptions of solutions for the neural-symbolic modeling of a variety of logical reasoning mechanisms. Then I will propose some application domains where neural-symbolic frameworks can be successfully applied. I will finish the talk with some speculations concerning cognitively relevant implications and the degree of cognitive plausibility of neural-symbolic learning and reasoning in general.

References

[1] S. Bader, P. Hitzler, S. Hölldobler & A. Witzel: A Fully Connectionist Model Generator for Covered First-Order Logic Programs. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence, 2007, pp. 666–671.

[2] A. d’Avila Garcez, L. Lamb & D. Gabbay: Neural-Symbolic Cognitive Reasoning, Cognitive Technologies, Springer, 2008.

[3] H. Gust, K.-U. Kühnberger & P. Geibel: Learning Models of Predicate Logical Theories with Neural Networks based on Topos Theory. In P. Hitzler and B. Hammer (eds.): Perspectives of Neuro-Symbolic Integration, Studies in Computational Intelligence (SCI) 77, Springer, 2007, pp. 233–264.

[4] E. Komendantskaya, M. Lane & A. Seda: Connectionist Representation of Multi-Valued Logic Programs. In P. Hitzler and B. Hammer (eds.): Perspectives of Neuro-Symbolic Integration, Studies in Computational Intelligence (SCI) 77, Springer, 2007, pp. 283–313.
