

MEMORY: HYPOTHESIS AND PREDICTIONS

4.1. Deviations from research traditions

Existing studies on feature associations have for the most part been performed within the visuo-spatial domain. Typical study items concern the associations between color and shape (e.g., a green triangle; e.g., Allen et al., 2006), orientation and color (e.g., a blue horizontal rectangle; e.g., Luck & Vogel, 1997), or color and location (Wheeler & Treisman, 2002). However, as we have noted earlier, this specific type of association might not be the best suited to attain generalizable results. Within the multi-component model, a dissociation was made between a visuo-spatial and a verbal maintenance buffer. Later on, the episodic buffer was added as a structure capable of maintaining associations of features from any domain. Despite this multi-domain and multi-modal conception of the episodic buffer, the exploration of its characteristics (mainly in terms of attentional needs) was conducted on visuo-spatial within-domain associations. The results of this exploration did not, however, fit the assumptions made about the episodic buffer. Two different conclusions could hence be drawn: either the characteristics of the episodic buffer had to be revised, or the creation and/or maintenance of visuo-spatial feature associations is not achieved in the episodic buffer but in the visuo-spatial maintenance buffer. The latter option would imply that the maintenance of (visuo-spatial) within-domain feature associations is an exceptional case of feature associations. As the main goal of this research project is to come up with general conclusions about the maintenance of feature associations, we have avoided this kind of trap by investigating cross-domain feature associations. Focusing on cross-domain associations should allow us to unveil the commonalities of the maintenance of feature associations.

A second deviation from most studies on the maintenance of feature associations concerns the output procedure used to evaluate memory performance. The change detection paradigm has become the standard paradigm to evaluate the maintenance of feature associations. Typically, participants are presented with three or four associations to be maintained. After a retention delay ranging from 900 ms to several seconds, either one test item (single probe test) or three/four test items (whole display test) are presented, and the participant has to decide whether this item or these items have been seen at study.

This procedure, however, has several drawbacks. First of all, Allen et al. (2006) showed that participants could make use of certain strategies within the change detection paradigm that could inflate their performance scores in the case of feature associations. They used the change detection paradigm with a single test probe. In their first two studies, they replicated the study by Wheeler and Treisman (2002), in which features were never repeated in a study array. Each feature could thus appear only once (see Figure 4.1). However, Allen et al. reasoned that if, for example, participants remembered that there was an orange cross in the study array and the probe was an orange triangle, they could reject the orange triangle based on the fact that no color could be presented twice. So even if participants remembered only one item from the study array, they could make a correct response in some cases. This strategy, however, applies only to the maintenance of feature associations. In the case of single-feature maintenance, it cannot inflate performance.
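This elimination strategy can be made concrete with a minimal sketch (the function name and feature values are ours, chosen purely for illustration): knowing that no feature repeats, a participant who remembers only the orange cross can still correctly reject an orange triangle probe.

```python
def can_reject_by_elimination(remembered: dict, probe: dict,
                              features_unique: bool = True) -> bool:
    """Reject a probe without actually remembering it: if each feature
    value appears at most once in the study array, any non-identical
    probe that shares a feature with a remembered item must be new."""
    if not features_unique:
        return False  # repetition allowed: the inference no longer holds
    shares_feature = any(remembered[f] == probe[f] for f in remembered)
    return shares_feature and remembered != probe

remembered = {"color": "orange", "shape": "cross"}
probe = {"color": "orange", "shape": "triangle"}
print(can_reject_by_elimination(remembered, probe))                         # True
print(can_reject_by_elimination(remembered, probe, features_unique=False))  # False
```

With single features there is no second feature to cross off, which is why the strategy inflates scores only for feature associations.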

Allen et al. neutralized this strategy by introducing the repetition of certain features in about one fourth of the trials. For example, the color orange could be presented twice in the study array, e.g., as an orange cross and an orange triangle (see Figure 4.1). If an orange triangle was presented at test, this probe could not be rejected merely on the basis of being sure that there was an orange cross. This procedure was implemented by Allen et al. in two experiments. A comparison with the two former experiments, in which no repetitions were allowed, showed that the no-repetition method had indeed inflated the recognition scores in those experiments. Performance was equal between the recognition of single shapes and of associated features in the first two experiments, while the following two experiments clearly showed less accurate performance for the recognition of associated features than for single shapes.

This difference could not be attributed to a lower score on the trials on which a repetition occurred, as these had been excluded from analysis.⁵ Subsequent studies by these same authors continued to apply this method (Allen et al., 2009; Karlsen et al., 2010), but it has not yet become a standard procedure. Other studies have thus continued to use the non-repetition method, possibly leading to an erroneous recognition estimate for feature associations (e.g., Brown & Brockmole, 2010; Fougnie & Marois, 2009; Johnson et al., 2008).

Figure 4.1: Example of a study array with and without feature repetition.

A second weakness of the change detection paradigm is the difficulty of deriving capacity measures from it. We have already mentioned in chapter three a number of studies that have confounded Pashler's (1988) and Cowan's (2001) formulas to estimate this capacity. In addition, Rouder et al. (2011) elaborated on the influence of the set size on obtaining valid capacity estimates. A large number of studies use study arrays composed of four items.
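For reference, with N the set size, h the hit rate, and f the false-alarm rate, the two capacity estimates are commonly written as follows (Cowan's k is intended for single-probe tests, Pashler's for whole-display tests):

```latex
k_{\mathrm{Cowan}}   = N\,(h - f)                  % single-probe test
k_{\mathrm{Pashler}} = \frac{N\,(h - f)}{1 - f}    % whole-display test
```

Confounding the two, i.e., applying one formula to data from the other test format, therefore yields biased capacity estimates.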

Imagine now that the true capacity k of a population corresponds to four items. This capacity of four items is an average, and some persons may thus have a capacity limit of three while others have a capacity limit of five. For a participant with a capacity limit of five items and a study array of four items, capacity can never be estimated higher than four items. So although the population capacity limit is four, a change detection paradigm making use of study arrays containing only four items will in any case bring this average down, due to an incorrect capacity estimate for those persons with a capacity limit of five items. Nevertheless, study arrays of four items are rather standard within research on feature associations.
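Rouder et al.'s point can be illustrated with a minimal sketch (the numbers and function name below are ours, chosen for illustration): even in the best case, an estimate of k is capped at the set size, so participants whose true capacity exceeds the set size drag the group mean below the true population mean.

```python
def estimated_k(true_k: int, set_size: int) -> int:
    """Best-case capacity estimate: it can never exceed the set size."""
    return min(true_k, set_size)

# Hypothetical population whose true capacities average exactly 4 items.
population = [3, 4, 5, 3, 4, 5]
estimates = [estimated_k(k, set_size=4) for k in population]  # [3, 4, 4, 3, 4, 4]

true_mean = sum(population) / len(population)     # 4.0
estimated_mean = sum(estimates) / len(estimates)  # ~3.67: biased downward
print(true_mean, estimated_mean)
```

Using study arrays larger than the presumed population capacity removes this ceiling.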

In addition to these criticisms expressed by Rouder et al., one has to be aware that the estimation of the capacity measure k rests on assumptions, and these assumptions are not always warranted. For example, Cowan et al. (2013) created different capacity estimates k based on the location of the probe: central or in-target. They reasoned that a central probe has to be compared to each of the study items maintained in memory, whereas an in-target probe should only be compared to the item maintained at that position. These assumptions have implications for the guessing rates, and as such alter the formula of the capacity measure. The first experiment in the study by Cowan et al. (2013) suggested, however, that participants do not make use of the location information coupled to the in-target probe and compare this probe to all study items anyhow. The proposed formula for k in the case of an in-target probe was thus invalidated, as the guessing probabilities did not correspond to participants' actual guessing. So, next to the pitfall of an incorrect use of the capacity measure k as a function of a whole-display or single-probe test, the guessing probabilities have to be carefully verified before applying any of these formulas.

⁵ Allen et al. (2006) inserted repetition trials in the single feature and feature association conditions. In the single feature condition, showing only colors or shapes, this concerned the repetition of a color or a shape. In the feature association condition, this could concern the repetition of a color, of a shape, or of an association of a color with a shape. As a result, the number of different objects was not the same in the single feature and feature association conditions, resulting in different guessing probabilities. To avoid this problem, repetition trials were not used in the calculation of the corrected recognition scores. Nevertheless, the insertion of repetition trials neutralized the possible use of the strategy described above.

Considering these disadvantages, we believe that the use of the change detection paradigm might easily result in an erroneous estimate of participants' performance. An easy solution to overcome these disadvantages is to make use of a recall paradigm. In addition, the recall paradigm presents an advantage that should not be neglected. Gajewski and Brockmole (2006) as well as Ueno, Mate, et al. (2011) noted that the change detection paradigm does not allow one to determine what is actually maintained in working memory. The paradigm can detect whether an item maintained in memory matches a probe, but in the case of a mismatch it is impossible to determine the exact nature of the representation one has in memory. Consequently, both studies favored a recall paradigm, which does allow determining the actual representations held in working memory. We have hence opted for a recall instead of a recognition paradigm in this research project.

The maintenance of cross-domain feature associations measured by recall was implemented within complex span tasks (Daneman & Carpenter, 1980) as well as Brown-Peterson tasks (Brown, 1958; Peterson & Peterson, 1959). These tasks are widely used within working memory research (e.g., Jarrold, Tam, Baddeley, & Harvey, 2011; Tehan, Hendry, & Kocinski, 2001) and make evident the double function of working memory: maintenance and processing. In the Brown-Peterson task, all items to be maintained are presented first, followed by a processing task (see Figure 4.2, panel a). In the complex span task, each item to be maintained is followed by a processing phase (see Figure 4.2, panel b). For both tasks, the number of items to be remembered was systematically increased, allowing us to infer a span score as the measure of memory performance. The cognitive load (i.e., the attentional demand) of the processing task was also manipulated systematically (see also Figure 4.2). By increasing the attentional demand of the processing task, we decreased general attentional availability. Manipulating attentional availability through the cognitive load of the processing task is novel in the study of the maintenance of feature associations. Previous studies manipulating attentional availability have rather implemented this in an all-or-nothing way: either a processing task was present, or it was not. However, as stated by Vergauwe, Langerock, et al. (2014), the presence or absence of a processing task may involve more than just a difference in attentional availability. The creation of interference, response conflict, or the need to coordinate a dual task were suggested as possible confounding factors. To avoid such confounds, we adopted a methodology allowing the manipulation of attentional availability by manipulating the cognitive load of the processing task.

Figure 4.2: General design of the upcoming experiments according to a) the Brown-Peterson design and b) the complex span design, with a manipulation of the cognitive load. In this example, two cross-domain associations are to be maintained.
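The two trial structures can be sketched schematically as follows (a minimal sketch; the event labels and function names are ours, not the actual experimental implementation):

```python
def brown_peterson_trial(memory_items, processing_items):
    """All memory items first, then the whole processing task (panel a)."""
    return ([("memorize", m) for m in memory_items]
            + [("process", p) for p in processing_items])

def complex_span_trial(memory_items, processing_blocks):
    """Each memory item is followed by its own processing phase (panel b)."""
    trial = []
    for item, block in zip(memory_items, processing_blocks):
        trial.append(("memorize", item))
        trial.extend(("process", p) for p in block)
    return trial

print(brown_peterson_trial(["A", "B"], ["p1", "p2"]))
# [('memorize', 'A'), ('memorize', 'B'), ('process', 'p1'), ('process', 'p2')]
print(complex_span_trial(["A", "B"], [["p1"], ["p2"]]))
# [('memorize', 'A'), ('process', 'p1'), ('memorize', 'B'), ('process', 'p2')]
```

Cognitive load is then manipulated by varying the time available per processing event, rather than by adding or removing the processing task altogether.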

A last deviation, which will not have passed unnoticed by the experienced reader, concerns the retention interval. The standard retention interval for the maintenance of feature associations seems to have been set at 900 ms (e.g., Allen et al., 2006; Johnson et al., 2008; Luck & Vogel, 1997; Wheeler & Treisman, 2002). Apart from temporal limitations on the duration of the experiment, we do not see why this same retention interval has been used so systematically. The focus of the present research project was on the maintenance of feature associations. A retention interval of 900 ms does not, in our opinion, leave enough room to properly explore this maintenance process in all of its facets. This is why we have opted for prolonged maintenance intervals. In our use of the Brown-Peterson task, the retention interval was increased to 12000 ms. In the complex span tasks, retention intervals varied as a function of the number of items to be maintained and the cognitive load. Retention intervals alternated with the presentation of memory items. Hence, when four items were to be maintained, the total of the retention intervals was twice as long as when only two items had to be maintained.

Additionally, a specific manipulation of the cognitive load was performed by reducing the time allowed to process the same number of processing items (see Figure 4.2, panel b, medium and low cognitive load). The retention intervals in our studies ranged from a minimum of 4000 ms up to 65000 ms (seven items to maintain in a complex span task, low or high cognitive load). These prolonged retention intervals should hence allow a thorough investigation of the characteristics of this maintenance process.

We have now explained the fundamental changes we have implemented in the research paradigm used, and have presented the general structure of the upcoming experiments. In the following sections, we will specify, for each of our research goals, the main hypotheses, their experimental implementation, and the accompanying predictions.