Shaping the Dynamics of Recurrent Neural Networks by Conceptors

Herbert Jaeger

Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany

Summary

The human brain is a dynamical system whose extremely complex sensor-driven neural processes give rise to conceptual, logical cognition. Understanding the interplay between nonlinear neural dynamics and concept-level cognition remains a major scientific challenge. Here I propose a mechanism of neurodynamical organization, called conceptors, which unites nonlinear dynamics with basic principles of conceptual abstraction and logic. It becomes possible to learn, store, abstract, focus, morph, generalize, de-noise and recognize a large number of dynamical patterns within a single neural system; novel patterns can be added without interfering with previously acquired ones; neural noise is automatically filtered. Conceptors may help to explain how conceptual-level information processing emerges naturally and robustly in neural systems, and may help to remove a number of roadblocks in the theory and applications of recurrent neural networks.
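To make the computational core concrete: in the conceptor framework, a recurrent network (a "reservoir") is driven by a pattern, the correlation matrix R of the resulting states is collected, and the conceptor is the soft projection C = R(R + α⁻²I)⁻¹ for an aperture parameter α. The NumPy sketch below illustrates this; all sizes, scalings and the sine pattern are illustrative assumptions rather than the paper's reference setup, and the pattern-loading step of the full method is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes (assumptions, not values from the paper).
    N = 100                      # reservoir size
    washout, learn = 100, 500    # discarded vs. collected time steps

    # Random recurrent weights, rescaled to a spectral radius of 1.4,
    # plus random input weights and bias: a standard reservoir setup.
    W = rng.normal(size=(N, N))
    W *= 1.4 / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = 1.2 * rng.normal(size=N)
    b = 0.2 * rng.normal(size=N)

    def drive(pattern):
        """Drive the reservoir with a scalar pattern; collect states."""
        x = np.zeros(N)
        states = []
        for n, p in enumerate(pattern):
            x = np.tanh(W @ x + W_in * p + b)
            if n >= washout:
                states.append(x)
        return np.array(states)

    def conceptor(states, aperture):
        """C = R (R + aperture**-2 * I)^-1, with R the state correlation matrix."""
        R = states.T @ states / len(states)
        return R @ np.linalg.inv(R + aperture**-2 * np.eye(N))

    # Learn a conceptor for one sine pattern.
    pattern = np.sin(2 * np.pi * np.arange(washout + learn) / 8.83)
    C = conceptor(drive(pattern), aperture=10.0)

    # Re-running with C inserted into the state update confines the
    # dynamics to the pattern's characteristic subspace.  (The full
    # method also "loads" the patterns into W, which is omitted here.)
    x = rng.normal(size=N)
    for _ in range(200):
        x = C @ np.tanh(W @ x + b)

The logical operations mentioned above then act directly on these matrices (for instance, NOT C = I − C), and morphing between two stored patterns corresponds to interpolating between their conceptors.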

Proceedings of the KI 2015 Workshop on Formal and Cognitive Reasoning

