
(1)

The Sufficient Feature Model Applied to the Automatization of a Visual and Memory Search Task

By Denis Cousineau, Cousined@PSY.UMontreal.CA

(2)

Overview of the Presentation

1- Basic findings in the memory and visual search task
2- Overview of the Sufficient Feature Model (SFM)
3- Results of a Computer Simulation
4- Further Results
5- Future Direction of Research

(3)

1 - Basic Findings in the Memory and Visual Search Task

Two loads are manipulated: memory load and display load. We measure the reaction time; subjects must keep their error rates lower than 5%.

Example: memory set L, H, B, Z; display H → the answer is: “Present”

(4)

• In consistent mapping (CM) conditions, the load effect vanishes with practice.
• In varied mapping (VM) conditions, learning is small and the load effect remains large.

[Figure: reaction times as a function of display size (1, 2, 4), for memory loads M = 1, 2 and 4, in the CM and VM conditions]

(5)

2 - The Sufficient Feature Model (SFM): Basic Assumptions

The SFM has three basic assumptions:
1) a comparison mechanism, based on a non-atomic representation of stimuli (feature representation);
2) a learning mechanism, based on the reduction of information, using:
• ordering of the feature tests based on diagnosticity,
• reducing the number of feature tests performed;
3) limited-capacity handling of features (an attentional window).
A minimal sketch of the first two assumptions follows below.
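The following sketch (my own reconstruction in Python; names such as `diagnosticity` and `compare` are illustrative, not from the presentation) shows how assumptions 1) and 2) fit together: features are tested in decreasing order of a running diagnosticity score, and the comparison self-terminates at the first feature that settles the trial.

```python
from collections import defaultdict

# Running diagnosticity score per feature id (assumption 2: ordering).
diagnosticity = defaultdict(int)

def feature_test_list(all_features):
    """Order feature tests from most to least diagnostic."""
    return sorted(all_features, key=lambda f: -diagnosticity[f])

def compare(probe, target, test_list):
    """Self-terminating comparison over feature sets (assumption 1:
    non-atomic representation). Returns (match?, features tested)."""
    for n, f in enumerate(test_list, start=1):
        if (f in probe) != (f in target):
            diagnosticity[f] += 1   # this feature settled the trial
            return False, n          # mismatch: probe is not this target
    return True, len(test_list)      # no feature discriminates: a match
```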

(6)

Diagram of the Decision Mechanism

[Diagram: a memory load (H, L, R) and a display load (L, B, Z, G) are coded as feature lists, (11, 6, 2, 7), (11, 2), (5, 2, 1, 7) and (11, 2), (11, 1, 9), (2, 1), (2, 1, 7), feeding the attentional window. The feature test list (11, 5, 6, 2, 1, 9, 7, 3, 8, 4, 10, 12) is ordered from high (+) to low (−) diagnosticity; features at the low end are useless features.]

(7)

The Sufficient Feature Model (SFM): Parameters

The SFM has three parameters:
• % error accepted: with 5%, it accepts 1 error out of 20 trials;
• size of the attentional window: it must be large enough to contain the features of the most complex character;
• character composition: we used simplified characters, composed of lines only.

(8)

Characters Used in the Simulation

Inspired by McClelland and Rumelhart (1981). The character “2” is described by the feature list (1, 4, 6, 8, 12), as sketched below.
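As an illustration, this coding can be written as plain feature sets (a sketch assuming the line segments of the McClelland and Rumelhart font are numbered 1 to 12; only the coding of “2” is taken from the slide):

```python
# Characters as sets of line-segment ids (1..12).
CHARACTERS = {
    "2": frozenset({1, 4, 6, 8, 12}),  # coding given on the slide
    # the other characters are coded the same way
}

def shared_features(a, b):
    """Features two characters have in common (physical similarity)."""
    return CHARACTERS[a] & CHARACTERS[b]
```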

(9)

The Sufficient Feature Model (SFM): Computer Simulation

• The SFM has no free numerical parameters.
• It measures the number of features that passed under the attentional window.
• The simulation is trained by practicing on the same trials as the human subjects did.
• There is no forgetting in the simulation.
A hypothetical harness for this protocol is sketched below.
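In code, the protocol could look like the following (the `model.run_trial` / `model.learn` API is mine, not the author's): the model replays the exact trial sequence a subject saw, its “RT” on a trial being the number of features that passed under the attentional window.

```python
def run_session(model, trials):
    """Replay a human subject's trial list through the model.

    trials: iterable of (memory_set, display) pairs, in the order
    the subject saw them. Returns one feature count per trial.
    """
    feature_counts = []
    for memory_set, display in trials:
        # The trial's cost is the number of features examined; no free
        # numerical parameter converts this count into milliseconds.
        feature_counts.append(model.run_trial(memory_set, display))
        model.learn()   # learning only; the model never forgets
    return feature_counts
```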

(10)

3 - Behavior of the SFM

Results of the simulation are presented in this order:
• learning curve data,
• mean number of features considered,
• standard deviation of the number of features considered.
They are compared to two groups of 4 human subjects, one trained for 12 blocks in the CM condition, the other trained in the VM condition.

(11)

Learning Curve

The ordering of the feature tests results in a faster rejection of distractors. Therefore, there is a visible learning curve both in the CM and in the VM conditions.

(12)

[Figures: human learning curves (milliseconds, 0–1000, over blocks 1–12) and model learning curves (number of comparisons, 0–30, over blocks 1–12), for the CM and VM conditions; legend: RT pos, RT neg, SD pos, SD neg. Annotations: learning rates are equal in CM; learning rates are also equal in VM.]

(13)

Mean Reaction Time

• In the CM condition, a limited number of features discriminates all targets from distractors; there is no overload of the attentional window.
• In the VM condition, the number of features needed remains large; the attentional window must swap from character to character.

(14)

[Figures: human mean RT (200–1200 ms) with percent error (0–100%), and model number of features considered (0–70), as a function of display size (1–4)]

(15)

[Figures, other condition: human mean RT (200–1200 ms) with percent error (0–100%), and model number of features considered (0–70), as a function of display size (1–4)]

(16)

Standard Deviation

In the SFM:
• Negative trials: to be exhaustive on character locations means to be self-terminating on features.
• Positive trials: to be self-terminating on character locations means to be exhaustive on the features located above the diagnostic threshold.
Therefore, the variance on positive and negative trials should behave in a similar fashion. This is opposed to a strict serial model, in which negative trials should have a smaller variance.

(17)

[Figures: standard deviations as a function of display size (1–4): model (number of features, 0–70) and humans (milliseconds, 0–500)]

(18)

[Figures, other condition: standard deviations as a function of display size (1–4): humans (milliseconds, 0–500) and model (number of features, 0–70)]

(19)

Summary of the Results

A decision process based on reduction-of-information assumptions reproduces very well the means and standard deviations for both the CM and VM conditions. This is done by:
• evaluating diagnosticity on a trial-by-trial basis, and
• testing for useless features, based on the % of error allowed (see the sketch below).
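One way to read the useless-feature test (my own reconstruction; `errors_if_dropped` is an illustrative bookkeeping structure): trailing, low-diagnosticity features are cut from the test list as long as the errors this shortcut would cause stay under the accepted error rate (5%, i.e. 1 error out of 20 trials).

```python
ERROR_ACCEPTED = 0.05   # the %-error parameter of the model

def prune_useless(test_list, errors_if_dropped, n_trials):
    """Drop features from the low-diagnosticity tail of the test list
    while the cumulative error this would cause stays acceptable.

    errors_if_dropped[f]: number of past trials in which feature f
    alone prevented an error.
    """
    kept, errors = list(test_list), 0
    while kept and (errors + errors_if_dropped[kept[-1]]) / n_trials <= ERROR_ACCEPTED:
        errors += errors_if_dropped[kept[-1]]
        kept.pop()   # the tail feature is 'useless' at this error rate
    return kept
```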

(20)

4 - Further Results

A) Categorical Manipulation
In CM conditions, targets are always the same; they can be seen as a category. Do pre-established categories alter performance?

(21)

We tested CM conditions with targets and distractors being of the same category of characters (HOMO condition) or of distinct categories (HETERO condition).
Example sets (targets / distractors):
HOMO: L, H, R, S / Z, B, G, F
HETERO: 2, 3, 6, 7 / Z, B, G, F

(22)

The CM HETERO group reaches asymptote in the second session of learning; they show no load effect on the first block. The CM HOMO group never equals CM HETERO, even with more than 5000 trials of practice.

[Figures: learning curves (milliseconds, 0–1000, over blocks 1–12) for the CM HETERO and CM HOMO groups]

(23)

With CM learning, a generic representation (GR) composed of the diagnostic features of a category might become easier to retrieve than the representation of a single target (T). This raises the possibility that, after such an association is learned, it might be possible to perform well under Inconsistent Mapping (IM) conditions. One type of IM is seen next.

[Diagram: Test Probe → T / GR → Decision]

(24)

B) Categorical Varied Mapping (CVM) conditions
If the GR is easy to retrieve, it might be possible for subjects to alternate between two distinct and mutually exclusive sets of targets and distractors.
Example sets (targets / distractors):
HOMO: L, H, R, S / Z, B, G, F and Z, B, G, F / L, H, R, S
HETERO: 2, 3, 6, 7 / Z, B, G, F and Z, B, G, F / 2, 3, 6, 7

(25)

• CVM HETERO results are comparable to CM results (learning curve and load effect).
• CVM HOMO results show a learning curve similar to the VM groups and a large load effect.

[Figures: learning curves (milliseconds, 0–1000, over blocks 1–12) for the CVM HOMO and CVM HETERO groups]

(26)

The model was given two feature test lists to learn from, one for letters and one for digits.

[Figures: model number of features considered (0–70), and mean RT (200–1200 ms) with percent error (0–100%), as a function of display size (1–4)]

(27)

Implications of the SFM for Other Theories

A) Category knowledge: with a sensitivity to the sets presented, the model can learn in the inconsistent CVM HETERO condition. This argues against any direct-access theory of performance (Logan's instance-based theory of automaticity, for instance).

(28)

B) Non-atomic representation: physical similarity effects (such as the distractor–distractor homogeneity effect and the target–distractor heterogeneity effect) are easily explained by the SFM. Furthermore, with a feature representation, the standard deviations are well described.

(29)

C) Reduction of information: it captures the essential aspects of CM and VM learning (load effects in both the means and the SDs, and some aspects of the learning curves).

(30)

5 - Future Direction of Research: An a priori Test of the SFM

The CM Character and VM Feature task (work in progress with C. Lefebvre). Since characters have two levels of representation (holistic, featural), inconsistencies in mapping are possible (targets / distractors):

p d h u / b q n u vs p d b q / h u n u

Will performance be more like VM or CM?

(31)

Thank you for your attention.

(32)

The GR can be seen as a decision tree in which features are nodes and test outcomes are edges.

[Diagram: the feature test list (11, 5, 2, 1, 6, 9, 7, 3, 8, 4, 10, 12), ordered from high (+) to low (−) diagnosticity with useless features at the low end, is turned into a tree of feature tests (11?, 2?, 5?, 1?) whose Present/Absent branches lead to « YES » (L, H, S), « YES » (R), or « No ». A sketch follows below.]
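A nested-dict sketch of such a tree (the branch structure below is illustrative and partial; only the feature ids and leaf labels are taken from the diagram):

```python
# Each inner node tests one feature; "present"/"absent" are its edges.
TREE = {"test": 11,
        "present": {"test": 2,
                    "present": "« YES » (L, H, S)",
                    "absent": {"test": 1,
                               "present": "« YES » (R)",
                               "absent": "« No »"}},
        "absent": "« No »"}

def decide(tree, probe_features):
    """Walk the tree from the root until a leaf decision is reached."""
    while isinstance(tree, dict):
        tree = tree["present" if tree["test"] in probe_features else "absent"]
    return tree

# e.g. decide(TREE, {11, 2, 7}) -> "« YES » (L, H, S)"
```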

(33)

Treisman reports linear variance; Ward and McClelland report linear standard deviation; Cousineau and Larochelle find in-between results. The SFM accepts a broad range of values.

For m feature tests, each of duration T, the variance decomposes as

$$\mathrm{Var}(RT) = E(m)\,\mathrm{Var}(T) + \mathrm{Var}(m)\,E(T)^2$$

[Figure: variance as a function of the number of features m; the SFM lies between m (Treisman) and m² (Ward and McClelland)]
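A quick numeric check of this decomposition (my own, not from the talk), drawing the number of tests m uniformly and each test duration T from a Gaussian:

```python
import random

def empirical_variance(n=200_000):
    """Simulate RT = sum of m feature-test durations and return Var(RT)."""
    rts = []
    for _ in range(n):
        m = random.randint(1, 9)                        # number of tests
        rts.append(sum(random.gauss(50.0, 10.0) for _ in range(m)))
    mean = sum(rts) / n
    return sum((x - mean) ** 2 for x in rts) / n

# Analytic: E(m) = 5, Var(m) = (9**2 - 1) / 12, E(T) = 50, Var(T) = 100,
# so Var(RT) = 5*100 + (80/12)*2500 ≈ 17_167; empirical_variance() agrees.
```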

(34)

Some Dimensions in Modeling the Memory/Visual Search Task

• Direct Access (e.g. memory-based) vs Indirect Access (e.g. GR)
• Single Step (e.g. retrieval of the response) vs Multiple Step (scanning of the display)
• Atomic Representation (features) vs Holistic Representation (templates)
