Visual attention saccadic models: taking into account global scene context and temporal aspects of gaze behaviour


HAL Id: hal-01391751

https://hal.inria.fr/hal-01391751

Submitted on 4 Nov 2016



Antoine Coutrot, Olivier Le Meur

To cite this version:

Antoine Coutrot, Olivier Le Meur. Visual attention saccadic models: taking into account global scene context and temporal aspects of gaze behaviour. ECVP 2016 - European Conference on Visual Perception, Aug 2016, Barcelona, Spain. ⟨hal-01391751⟩


VISUAL ATTENTION SACCADIC MODELS

Taking into account global scene context and temporal aspects of gaze behaviour

Antoine Coutrot a,* & Olivier Le Meur b

a CoMPLEX, University College London, United Kingdom
b IRISA, University of Rennes 1, France
* corresponding author: a.coutrot@ucl.ac.uk

Conclusions

Visual attention saccadic models take into account the temporal dimension of visual exploration. They provide an efficient framework to integrate, in a data-driven fashion, variables as different as bottom-up saliency, spatial bias, context and scene composition, as well as oculomotor constraints. They will make it possible to tailor saliency models to specific populations (e.g. different age groups, tasks, or states of health).

Figure 1 - Conceptual difference between classic saliency models and saccadic models. Classic saliency models output a 2D static saliency map (or heatmap), whereas saccadic models compute visual scanpaths from which both static and dynamic saliency maps can be computed.

Figure 2 - Saccadic model flow chart. Predicted scanpaths result from the combination of three components: (1) a bottom-up saliency map, (2) viewing biases and (3) a memory mechanism.

Model Evaluation

Evaluation is performed over Bruce's dataset (120 natural images) [4]. For each image, 100 scanpaths of 10 fixations are computed from the saccadic model and added up into a heatmap. We repeat the operation with the pB distribution learned from each of 4 visual categories. These heatmaps are compared with the output of a classic bottom-up-only saliency model (a combination of [8] and [9]).
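The heatmap construction described above can be sketched as follows (a minimal illustration, not the authors' code; the function name and the toy scanpaths are hypothetical):

```python
import numpy as np

def scanpaths_to_heatmap(scanpaths, shape):
    """Accumulate fixation points from many scanpaths into a
    normalised fixation-density map (heatmap)."""
    heatmap = np.zeros(shape, dtype=float)
    for path in scanpaths:          # e.g. 100 scanpaths per image
        for row, col in path:       # e.g. 10 fixations per scanpath
            heatmap[row, col] += 1.0
    total = heatmap.sum()
    return heatmap / total if total > 0 else heatmap

# toy example on a 3x3 "image" with two 2-fixation scanpaths
paths = [[(0, 0), (1, 1)], [(1, 1), (2, 2)]]
h = scanpaths_to_heatmap(paths, (3, 3))
print(h[1, 1])  # 0.5: half of all fixations landed on (1, 1)
```

In practice the accumulated fixation map is usually also blurred with a Gaussian kernel before comparison, which is omitted here for brevity.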

[Bar plot for Figure 6: CC, SIM and EMD scores (0.45-0.7) for the bottom-up-only model and for the saccadic model using pB from natural scenes, webpages, conversations and landscapes.]

Figure 3 - Scanpath generation. (a) Original image and (f) its saliency map computed by the GBVS model (Harel et al., 2006). (b)-(e) Sequence of fixations. (g)-(j) Temperature plots of the joint probability pB weighted by the saliency pBU. (k)-(n) Temperature plots of the memory effect and inhibition of return pM. Adapted from [1].

References

[1] Le Meur, O., & Liu, Z. Vision Research, 2015.
[2] Le Meur, O., & Coutrot, A. Vision Research, 2016.
[3] Coutrot, A., & Guyader, N. Journal of Vision, 2014.
[4] Bruce, N. D., & Tsotsos, J. K. Journal of Vision, 2009.
[5] Kootstra, G., de Boer, B., & Schomaker, L. R. Cognitive Computation, 2011.
[6] Judd, T., Ehinger, K., Durand, F., & Torralba, A. IEEE International Conference on Computer Vision, 2009.
[7] Shen, C., & Zhao, Q. European Conference on Computer Vision, 2014.
[8] Harel, J., Koch, C., & Perona, P. Advances in Neural Information Processing Systems, 2006.
[9] Riche, N., Mancas, M., et al. Signal Processing: Image Communication, 2013.

Introduction

How do saccadic models work?

Let x_{t−1} be a fixation point at time t−1. The next fixation point x_t is determined by sampling the 2D discrete conditional probability p(x | x_{t−1}):

p(x | x_{t−1}) = pBU(x) · pB(d(x, x_{t−1}), φ(x, x_{t−1})) · pM(x, t | T)

where pBU is the bottom-up saliency map, pB the joint distribution of saccade amplitudes d and orientations φ (viewing biases), and pM the memory mechanism. To implement the stochastic nature of visual exploration, Nc points are randomly drawn from p(x | x_{t−1}); the next fixation point corresponds to the candidate with the highest value.
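The sampling step can be sketched as follows (a toy illustration assuming p(x | x_{t−1}) is available as a discrete 2D array; the grid size, random seed and Nc value are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def next_fixation(p_cond, n_c=5):
    """Draw n_c candidate locations from the conditional probability
    map p(x | x_{t-1}) and keep the most probable candidate."""
    flat = p_cond.ravel().astype(float)
    flat /= flat.sum()                              # normalise to a pmf
    candidates = rng.choice(flat.size, size=n_c, p=flat)
    best = candidates[np.argmax(flat[candidates])]  # highest-probability draw
    return np.unravel_index(best, p_cond.shape)

# toy conditional map on a 5x5 grid, sharply peaked at (2, 3)
p = np.full((5, 5), 1e-6)
p[2, 3] = 1.0
print(next_fixation(p))  # (2, 3) with this seed
```

Drawing several candidates and keeping the best one trades off stochastic exploration against the determinism of a pure argmax.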


Saccadic models can be easily tuned to emulate a specific visual behaviour. For instance, the joint distribution pB can be adapted to the semantic visual category of the stimulus.
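Adapting pB to the stimulus category can be as simple as a lookup (a hypothetical sketch: the category names follow Figure 4, but the random histograms here stand in for distributions that would be learned from eye-tracking data):

```python
import numpy as np

# one (amplitude, orientation) histogram per category; random stand-ins
categories = ["naturalscenes", "webpages", "conversations", "landscapes"]
p_b = {c: np.random.default_rng(i).random((8, 12))
       for i, c in enumerate(categories)}
for c in p_b:
    p_b[c] /= p_b[c].sum()   # each histogram is a pmf over (d, phi) bins

def viewing_bias(category):
    """Return the joint saccade amplitude/orientation distribution
    matching the stimulus category (falls back to natural scenes)."""
    return p_b.get(category, p_b["naturalscenes"])

bias = viewing_bias("webpages")
```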

Saccadic models output plausible visual scanpaths, i.e. scanpaths with the same peculiarities as human scanpaths.

For a probabilistic tour of visual attention models, cf. Boccignone’s review, arXiv:1607.01232, 2016.

Figure 4 - Joint distribution of saccade orientations and amplitudes for 4 visual categories of stimulus: landscapes (video), conversations (video), natural scenes (static images) and webpages (static images). Landscapes and conversations videos are from [3]; natural static scenes are from [4,5,6]; webpages are from [7].

pB can also be made spatially-variant. This naturally takes the centre bias into account.
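A spatially-variant pB can be sketched as a grid lookup (a toy version of the per-base-frame idea of Figure 5, here with a 3×3 grid; the image size, bin counts and random histograms are arbitrary stand-ins):

```python
import numpy as np

H, W = 600, 800                      # hypothetical stimulus size
rng = np.random.default_rng(42)
# one saccade amplitude/orientation pmf per base frame of a 3x3 grid
grid_p_b = rng.random((3, 3, 8, 12))
grid_p_b /= grid_p_b.sum(axis=(2, 3), keepdims=True)

def local_viewing_bias(row, col):
    """Return the pB histogram of the base frame containing (row, col)."""
    i = min(row * 3 // H, 2)
    j = min(col * 3 // W, 2)
    return grid_p_b[i, j]

b = local_viewing_bias(50, 790)      # top-right base frame
```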

Figure 5 - pB is spatially-variant. Distributions of saccade orientations and amplitudes are computed over each base frame (1-9). Extracted from [2].

Figure 6 - Evaluation is performed with 2 similarity metrics (CC and SIM) and one dissimilarity metric (EMD). EMD has been scaled down by a factor of 4. Error bars represent standard errors.
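The two similarity metrics can be sketched as follows (standard definitions, not the exact evaluation code: CC is the Pearson correlation between the two maps, SIM the histogram intersection of the normalised maps):

```python
import numpy as np

def cc(a, b):
    """Pearson linear correlation coefficient between two saliency maps."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def sim(a, b):
    """Histogram intersection: sum of element-wise minima of the
    two maps, each first normalised to sum to 1."""
    a = a / a.sum()
    b = b / b.sum()
    return float(np.minimum(a, b).sum())

m = np.random.default_rng(1).random((4, 4))
print(cc(m, m), sim(m, m))  # identical maps score 1.0 on both metrics
```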

For more details, cf. Le Meur O, Coutrot A. Introducing context-dependent and spatially-variant viewing biases in saccadic models. Vision Research 2016.
