Convolutional Networks

Academic year: 2022


Convolutional Networks

Deep neural networks

Limits of backpropagation

Specialization

Position independence

Gradient preservation

Max pooling
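The two ideas above (position independence through shared convolution weights, and max pooling) can be sketched in a few lines of NumPy. This is a minimal illustration, not the course's code: the kernel, the 5x5 input, and the use of cross-correlation (as in most deep-learning frameworks) are assumptions for the example.

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D convolution (cross-correlation): the same kernel
    (shared weights) is slid over every position, which is what gives
    the network its position independence."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool2d(x, size=2, stride=2):
    """Max pooling: keep only the strongest response in each window."""
    out_h = (x.shape[0] - size) // stride + 1
    out_w = (x.shape[1] - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i * stride:i * stride + size,
                          j * stride:j * stride + size].max()
    return out

img = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
fmap = conv2d(img, np.ones((2, 2)))              # 4x4 feature map
pooled = max_pool2d(fmap)                        # 2x2 after pooling
```

Note that pooling also helps the gradient: during backpropagation, the gradient flows unchanged through the single maximum of each window instead of being diluted across all inputs.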


Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). "ImageNet classification with deep convolutional neural networks". In Advances in Neural Information Processing Systems (pp. 1097-1105).

Classified 1.2 million high-resolution images into 1000 different classes.

On the test data: top-1 error rate 37.5%; top-5 error rate 17.0%.

60 million parameters and 650,000 neurons: five convolutional layers, some followed by max-pooling layers, and three fully connected layers ending in a 1000-way softmax (a distribution that sums to 1).
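The final softmax mentioned above turns the network's raw scores into a probability distribution that sums to 1. A minimal sketch (the 3-way logits are illustrative; AlexNet's layer is 1000-way):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: shift, exponentiate, normalize."""
    z = z - z.max()      # shifting by a constant does not change the result
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # raw class scores from the last layer
probs = softmax(logits)             # non-negative, sums to 1
```

Subtracting the maximum before exponentiating avoids overflow for large scores without changing the output.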


Multilayer networks (by brute force)

Ciresan, D. C., Meier, U., Gambardella, L. M., & Schmidhuber, J. (2010). "Deep, Big, Simple Neural Nets Excel on Handwritten Digit Recognition".

Hidden layers: 2,500, 2,000, 1,500, 1,000 and 500 neurons.

Learning rate (alpha): 10⁻⁶
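A forward pass through this fully connected stack can be sketched as follows. The hidden-layer widths and the learning rate come from the slide; the 784-unit input (28x28 MNIST digits), the 10-way output, the tanh activation, and the weight initialization are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five hidden layers of 2500, 2000, 1500, 1000 and 500 neurons (from the
# slide); 784 inputs and 10 outputs are assumed for MNIST.
sizes = [784, 2500, 2000, 1500, 1000, 500, 10]
weights = [rng.normal(0.0, 0.01, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Plain feed-forward pass through the fully connected stack."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ w + b)
    return x @ weights[-1] + biases[-1]  # raw scores for the 10 classes

alpha = 1e-6  # the (very small) learning rate quoted on the slide
scores = forward(rng.normal(size=784))
```

With roughly 12 million weights in the first layer alone, this "big simple net" relies on a tiny learning rate and many epochs rather than any convolutional structure.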
