
Doctoral Thesis

False Words Seem True

The Power of Truth Bias in shaping Memory and Judgment

Author:

Myrto Pantazi

Supervisors: Prof. Dr. Olivier Klein, Prof. Dr. Mikhail Kissine

Committee Members: Dr. Fabienne Chetail, Prof. Dr. Klaus Fiedler, Dr. Christophe Leys, Dr. Ira Noveck

Dissertation prepared under the supervision of Professors Olivier Klein and Mikhail Kissine in view of obtaining the title of Docteur en Sciences Psychologiques et de l'Éducation

Center for Social and Cultural Psychology, Centre of Research in Linguistics


Abstract


Acknowledgements

I would like to express my gratitude to my two advisors, Olivier Klein and Mikhail Kissine, for their precious and continuous guidance during the past four years. Their combined forces in supporting and assisting me at all moments, as well as our stimulating discussions, have made the realization of this thesis a wonderful academic journey. Mikhail et Olivier, Olivier et Mikhail, I feel extremely lucky to have worked "under your auspices". Thank you!

I would also like to thank all the members of the Center for Social and Cultural Psychology, as well as of the Centre of Research in Linguistics, for providing such a warm, stimulating, cooperative and supportive environment. Their comments and ideas have been of great help at many points, and working in the ambient coziness of both research units is priceless.

I am also grateful to the members of my "accompanying committee", Philippe de Brabanter, Wim Gevers and Christophe Leys, for all their insightful suggestions during our constructive meetings. On more technical aspects, I thank the SONA system team (Nico and Vinciane) for abundantly offering their time and help whenever I needed them. A special thanks goes to Luce Vercamen, the secretary of the CeSCuP, for her readiness to assist me with any issue, bureaucratic or otherwise, related to this dissertation.

Katia Kissine and Eric Breton-le-Veel "offered their voices" to the material used in almost all of the present studies, while my childhood friend Elina Foteinou created the video used in Study 1 of Chapter 1. I am extremely grateful to all three of them for their time and substantial help. My gratitude also goes to Julie Allard and Patrick Mandoux for their precious advice on Study 3 of Chapter 3. Patrick Mandoux also contributed immensely to the recruitment of the participants in this study. Without his assistance this study could not have been realized.


Contents

Abstract iii

Acknowledgements v

1 Introduction 1

1.1 General Introduction . . . 2

1.2 The truth bias . . . 3

1.2.1 Belief . . . 6

1.3 Evidence for vigilance . . . 7

1.3.1 Evolutionary arguments . . . 7

1.3.2 Ontogenesis of vigilance . . . 11

1.3.3 Vigilance in adults . . . 13

1.3.4 Interim summary . . . 15

1.4 The truth bias . . . 15

1.4.1 Belief and Language . . . 16

Some contemplation about evolution . . . 18

1.4.2 Experimental evidence for the DPM . . . 22

1.4.3 Interim summary . . . 29

1.5 Some notes on the truth bias and how to measure it . . . 29

1.5.1 Linguistic constraints . . . 29


2 An Experimental Investigation of the Truth Bias 39

2.1 Introduction . . . 39

2.1.1 Our studies . . . 40

2.2 Study 1 . . . 42

2.3 Method and Material . . . 45

The ostensible crime reports . . . 45

The measures . . . 46

Procedure . . . 49

Participants . . . 50

2.3.1 Analyses . . . 52

Some methodological notes . . . 52

Judgments . . . 54

Memory . . . 56

The relation . . . 59

Distraction Task . . . 60

2.3.2 Discussion . . . 60

2.4 Study 2 . . . 64

2.4.1 Method and Material . . . 65

The distraction task . . . 66

2.4.2 Participants and Procedure . . . 67

2.4.3 Analyses . . . 69

Judgments . . . 69

2.4.4 Memory . . . 71

Relation between judgments and memory. . . 76

The role of distraction. . . 76

2.4.5 Discussion . . . 77

2.5 Study 3 . . . 80


Procedure & Participants . . . 81

2.5.2 Analyses . . . 82

Memory . . . 82

Judgments . . . 86

2.5.3 Discussion . . . 87

2.6 Study 4 . . . 88

Design & Material . . . 89

Participants & Procedure . . . 89

2.6.1 Analyses . . . 90

Memory . . . 90

Classification reaction times . . . 94

Judgments . . . 95

2.6.2 Discussion . . . 95

2.7 Study 5 . . . 98

Materials & Method . . . 98

Participants . . . 99

2.7.1 Results . . . 99

Memory . . . 99

Reaction Times . . . 102

Judgments . . . 104

2.7.2 Correlation . . . 104

2.8 General Discussion . . . 106

3 The Truth Bias and Vigilance 111

3.1 Introduction . . . 111

3.2 Study 1 . . . 115

3.2.1 Method and Material . . . 116

3.2.2 Participants and Procedure . . . 119


Judgments . . . 120

Memory . . . 121

Correlation . . . 127

3.2.4 Discussion . . . 127

3.3 Study 2 . . . 130

3.3.1 Design & Material . . . 130

3.3.2 Participants & Procedure . . . 131

3.3.3 Results . . . 132

Judgments . . . 132

Memory . . . 132

Correlations . . . 135

3.3.4 Discussion . . . 135

3.4 Study 3 . . . 136

3.4.1 Method & Material . . . 137

3.4.2 Participants & Procedure . . . 138

3.4.3 Results . . . 139

Judgments . . . 139

Memory . . . 142

Correlations . . . 144

3.4.4 Discussion . . . 145

3.5 Study 4 . . . 146

3.5.1 Material and Method . . . 148

3.5.2 Participants & Procedure . . . 149

3.5.3 Results . . . 150

Judgments . . . 150

Memory . . . 152

Correlation . . . 154


3.6 General Discussion . . . 156

4 Coda 159

4.1 The truth bias summarized . . . 161

4.2 The Direct Perception Model and the meta-cognitive vigilance . . . 168

4.3 Open questions and future avenues . . . 179

4.3.1 The need of multiple measures to scrutinize the truth bias . . . 179

4.3.2 Directly testing the DPM . . . 182

4.3.3 Vigilance . . . 184

4.3.4 Moving the set . . . 187

4.3.5 Final words . . . 189

A The Reports 191

B The Statements 199


List of Figures

2.1 Screen capture of the task in Study 1 . . . 51

2.2 Mean judgments for the aggravated and attenuated perpetrators for the distracted and undistracted group in Study 1 . . . 55

2.3 Identification pattern for the green and red statements for the distracted and undistracted group in Study 1 . . . 58

2.4 Identification pattern for the aggravating, attenuating, and neutral new statements for the distracted and undistracted group in Study 1 . . . 61

2.5 Mean judgments for the aggravated and attenuated perpetrators for the distracted and undistracted group in Study 2 . . . 70

2.6 Mean judgments for the aggravated and attenuated perpetrators per speaker in Study 2 . . . 71

2.7 Identification pattern for the true and false statements for the distracted and undistracted group in Study 2 . . . 72

2.8 Identification pattern for the aggravating, attenuating, and neutral new statements for the distracted and undistracted group in Study 2 . . . 76

2.9 Identification pattern for the true and false statements in Study 3 . . . 83

2.10 Identification pattern for the aggravating, attenuating, and neutral new

2.14 Mean reaction time for the identification pattern of the true and false statements in Study 4 . . . 94

2.15 Mean reaction time for the identification pattern of the aggravating, attenuating and neutral new statements in Study 4. Error bars represent 95% CIs . . . 96

2.16 Identification pattern for the true and false statements in Study 5 . . . 100

2.17 Identification pattern for the new statements in Study 5 . . . 102

2.18 Mean reaction times for the true and false statements per identification type in Study 5 . . . 103

2.19 Mean reaction times for the aggravating, attenuating and neutral new statements per identification type in Study 5 . . . 104

2.20 Mean judgments for the aggravated and attenuated perpetrators . . . 105

3.1 Mean judgments for the aggravated and attenuated perpetrators per group in Study 1 . . . 121

3.2 Identification pattern for the true and false statements per group in Study 1 . . . 124

3.3 Identification pattern for the true and false statements in the control group per report version in Study 1 . . . 126

3.4 Mean judgments for the aggravated and attenuated perpetrators per group in Study 2 . . . 133

3.5 Identification pattern for the true and false statements per group in Study 2 . . . 134

3.6 Mean judgements for each report, separately for the judges and a student-control group . . . 140

3.7 Identification pattern for the true and false statements per group . . . 143

3.8 Mean reaction time for the identification pattern of the true and false statements in Study 3 . . . 144

3.9 Mean judgements for each report, separately for the incentives and base-rate group in Study 4 . . . 151

3.10 Mean reaction time in the judgements for each report, separately for the

3.11 Identification pattern for the true and false statements per group . . . 153

3.12 Mean reaction time for the identification pattern of the true and false statements

List of Tables

2.1 Mean percentage responses per identification type, as a function of report and nature of the statements in Study 2 . . . 74

2.2 Mean percentage responses per identification type, as a function of type of statement and speaker in Study 2 . . . 74

2.3 Words in the true and false statements per report version in Study 3 . . . 81

2.4 Mean percentage responses per identification type, as a function of report and nature of the statements in Study 3 . . . 84

2.5 Mean percentage responses per identification type, as a function of type of statement and speaker in Study 3 . . . 84

2.6 Mean percentage responses per identification type, as a function of report and nature of the statements in Study 5 . . . 101

2.7 Mean percentage responses per identification type, as a function of type of statement and speaker in Study 5 . . . 101

B.1 The statements and their status in the memory test for the two versions of the report of Dimitri . . . 200

B.2 The statements and their status in the memory test for the two versions of the report of Etienne . . . 201

C.1 Randomization of the true and false statements across the presentation lists in

Chapter 1

Introduction

There are some striking coincidences in life. As I am concluding my Ph.D., a large-scale natural experiment, set up not by me or any other scientist but rather by history itself, is unfolding. It is much bigger than any experiment I would ever be able to conduct, tested on the biggest sample that a psychologist could wish for. The striking coincidence is that this experiment already seems to corroborate the claim I make in this thesis: that people tend to believe statements they hear, even if they know them to be false.

The experiment I am talking about is the US presidential election of November 8, 2016. In case you are wondering what the US election has to do with a thesis assessing people's tendency to believe false information, well, it happens that one of the two nominees in this election probably holds the politicians'1 world record of inaccurate statement production. Now, you may have guessed that this nominee is Donald Trump. But what you may not know is that according to Politifact, a fact-checking organization established to verify politicians' statements, 71% of Trump's statements are rated as at least "mostly false" – the extreme end of the scale being "pants on fire"! Trump's untrustworthiness has been, for obvious reasons, emphasized in the Democratic electoral campaign, and highlighted by many political commentators. In spite of this, Trump managed to become the nominee of the Republican party, and a sizeable portion of the US electorate (even if not the majority) supports his candidacy. This very fact already suggests that people strongly tend to believe statements they hear and read, even if these statements come from a largely untrustworthy person and are, many times, blatantly false. Of course nobody expects that voters frenetically assess the veracity or consistency of the statements made by politicians. Nor do I claim that those voting for Trump do so because they conscientiously believe his claims, rather than for motivational and emotional reasons, such as political or social identity. Yet, the Trump phenomenon refutes, in a dramatic manner, the claims of scholars that people are primarily vigilant communicators who inherently take into account the source and accuracy of statements they endorse. The Trump phenomenon signifies a pervasive tendency to believe things we hear and read, and, in any case, a generalized indifference to the accuracy and consistency of the information that a public figure emits. For Trump voters either fail to recognize that he is a blatant liar, or they care very little about his accuracy.

1. I am referring here to politicians of the WEIRD (Western, Industrialized, Rich, Democratic) countries,

1.1 General Introduction


1.2 The truth bias

Linguistic communication is one of our main means of acquiring information about the outside world. From close, personal exchanges with family members and friends, to professional conversations, and from (the masses of) information we receive from the media, to that we sometimes receive from strangers, in a bus or a shop, information exchange through language is ubiquitous. Sometimes the information we receive in such contexts may be trivial and inconsequential, as in the case of small talk. But often, the way we think and act is based on what other people tell us. In view of the immense role that information exchange plays in our lives, it is important to answer a crucial question: are we in a position to accurately assess the statements we hear and read? Or do we tend to believe them?

Naturally, as you may be thinking, these questions take on a different light depending on who makes the statements, that is, the source of the information. We are almost certain that people close to us, unless in a playful mood, will provide us with accurate information as far as this depends on them. In principle, it is also reasonable to rely on information we receive from others, even if we barely know them. Bronner (2013) argues convincingly that trust is the sine qua non of societies, which would be unimaginable without at least some degree of communal trust. He points out, characteristically, that in all public spheres we trust that others will behave in the way they are expected to: the post office clerk will send the letter for which we paid, the shop assistant will not run away with our money in hand without handing over the goods we paid for, other drivers (roughly) respect the traffic signs, the way we do. Extrapolating a little, it is reasonable to expect that, in general, other people will provide us with truthful information, as they are expected to.


Pinker & Jackendoff, 2005), and has eventually rendered information exchange much more pervasive in humans compared to other animals (Boesch & Tomasello, 1998; Clément, 2010). Humans have thus ended up acquiring a huge amount of information through testimony from others (Bergstrom, Moehlmann, & Boyer, 2006; Burge, 1993; Coady, 1994; Goldman, 1999; Weiner, 2003). From this perspective, the truth bias looks like a reasonable response to incoming linguistic information.

Although such a tendency may have evolutionary, adaptive roots, it may become disadvantageous when one encounters information that is inaccurate or false. Especially in an online age, characterized by the massive usage of the Internet and the mass media, misinformation seems to be at its peak. In 2010 alone, as much information was produced as had ever been produced before in human history (Bronner, 2013)! In such a context, we are bound to form beliefs, attitudes and opinions much more often based on media information than on our own experiences, or face-to-face exchanges with acquaintances. Moreover, given the immensity of information in existence nowadays, the portion of existing knowledge one can possibly master decreases steadily. Thus, today more than ever, we cannot but "believe through delegates" (Bronner, 2013), that is, rely largely on information we acquire from others (Keeley, 1999).


2016). Of course, generally speaking, scientists are entitled to trust work published in their field, as scientific journals seem to adopt stricter and stricter criteria for research publication. Yet, those few erroneous results that happen to sneak into the scientific literature have admittedly nefarious effects. Andrew Wakefield, who published an article falsely claiming that the MMR vaccine causes autism spectrum disorders in children, brought about a dramatic decline in MMR vaccination, accompanied by a dramatic increase in measles cases (Jolley & Douglas, 2014).

Despite their potentially nefarious effects, such cases in science are relatively rare. One of the biggest sources of misinformation today, whether in the form of clear distortion of facts or simply in the form of specific framing of facts with the intention of biasing judgments and opinions, is the Internet. With the advent of blogs, YouTube, and social media, almost anyone can have a share in the production of information. Unfortunately, this exponential increase in the information produced is accompanied by a reduction in information quality, as the sources of events and facts claimed on the net are very often opaque.2 In his illustrative book La Démocratie des crédules, Gérald Bronner describes how the democratization of media, brought about by the massive use of the Internet, poses a serious threat to the quality of the information that circulates. The Trump case, although probably (and hopefully) not a representative example in politics, definitely suggests that much of the information "out there" is inaccurate. That this is true of a presidential candidate, a personality that is presumably accountable to society, is indicative of the potential information accuracy of anonymous bloggers and YouTubers. Many of those people, lacking the many collaborators that politicians of the "magnitude" of Trump have for constantly designing their public announcements and thoroughly checking the accuracy of the information they provide, are unable and often unwilling

to check the accuracy of the information they disseminate.3 The importance of this relatively new phenomenon can be seen in the increasing emergence of fact-checking organizations, specialized in checking information that is disseminated in the media and invalidating potential misinformation. The European Union seeks to develop policies targeted at reducing the impact of misinformation and propaganda, which also constitutes an active domain of research in social psychology (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012).

2. When the term quality refers to information, it pertains to matters of truthfulness and accuracy.

3. Of course one could argue that many "anonymous" bloggers and individuals on the net may be intentionally

In such a context, if we are in general prone to believing statements we hear and read, we will be prone to believing a substantial part of the mass of misinformation that circulates daily. Whether unintentionally provided by people who do not appropriately check the validity of the information they disseminate on the web, or intentionally in the form of advertisement, framing and propaganda, misinformation is quite common. The potential power of the truth bias and its effects is thus a timely topic, of interest to scholars as well as the general public.

1.2.1 Belief

The question of the definition of belief is as old as western philosophy. It can be traced back to Plato and has traversed all philosophical theorizing up to the present day (see Bogdan, 1986 for a lively review). Despite its obvious importance for humanity, and despite its apparent relevance to our question of interest, I will not elaborate on the definition of belief. For the concept of belief, for our purposes, is restricted by our research question itself. In the end, the reason why the truth bias is an interesting topic lies in the consequences that this bias may have on us as individuals, and on society as a whole. Whenever I use the term "belief" in the present context, then, I always consider it relative to linguistic stimuli and their relevant repercussions.


belief is a distinctive feeling of “force and vivacity” discriminating the “ideas that are affirmed from those that are merely entertained” (Smith, 1941, p. 41).

While Hume's definition addresses the individual level of belief we are concerned with, it ignores the second, functionalist aspect of belief that interests us here: the consequences that believed statements may have. In assessing whether there is a truth bias in linguistic communication, it is not only important to study how we feel towards statements' truth-value, but also to take into account the consequences of this feeling for our thinking and actions. Thus, I will make an amendment to Hume's definition above and, for our purposes, define belief as a "distinctive feeling of force and vivacity" about statements we encounter "that significantly affects our behavior". We will come back to this definition at the end of this chapter, when we will have to decide how the truth bias is best studied.

1.3 Evidence for vigilance

1.3.1 Evolutionary arguments

The idea that people are characterized by a truth bias, a pervasive intrinsic tendency to believe what they hear and read (I call this the "gullible view")4 is far from uncontroversial. There are many psychologists, anthropologists and evolutionary scientists who argue that, because of the role of language as a massive information transmitter, humans must have developed a suite of cognitive mechanisms, or modules, aimed at protection from misinformation and cheating (Bergstrom et al., 2006; Clément, Koenig, & Harris, 2004; Cosmides & Tooby, 1992; Schul, Mayo, & Burnstein, 2004; Sperber et al., 2010).

From an evolutionary perspective, the argument is that verbal communication can only be perpetuated if it is advantageous both for the speakers and for the addressees in the long run (i.e. regardless of whether both the speaker and the addressee benefit from a specific exchange they have; Sperber, 2001). According to such a view of verbal communication as a positive-sum game, the speaker is always better off if the addressee believes his message, regardless of whether it is true or false. This is because a speaker provides the kind of information that will have the effects he intends it to have on an addressee. For the addressee, if the speaker is truthful it is better to believe than to disbelieve his message; if the speaker is untruthful, it is better to disbelieve than to believe it. As Sperber (2001) points out, the dilemma is intractable, and has to be addressed within the specific context of each exchange, that is, taking into account the identity of the speaker, his intentions, the stakes for the hearer etc. Much of the evolutionary speculation on such grounds ends up assuming, ad hoc, that if communication is there, humans must have developed cognitive mechanisms that keep them protected from speakers who are untruthful, intentionally or not (Clément, 2010; Cosmides & Tooby, 1992; Mascaro & Sperber, 2009; Sperber, 2001; Sperber et al., 2010).

4. There may be some objections to the usage of the term gullibility due to its negative connotation, especially

Although this evolutionarily motivated "vigilant view" may at first seem intuitively accurate, I take it to be normative and too rationalistic. Let me consider the two characterizations in order. Arguing that humans are capable of protecting themselves from misinformation because they ought to hold true beliefs is, to my eyes, normative and circular. As Cosmides and Tooby (1992) remark,

Adaptations can be recognized by "evidence of special design", that is, by recognizing that features of the evolved species-typical design of an organism are "components of some special problem-solving machinery" that solves an evolutionarily long-standing problem. (p. 165)

My reading of Cosmides and Tooby (1992) is that we can infer that an existing mechanism is the result of evolutionary adaptation if it seems well-suited for the function it accomplishes. This is totally different from assuming that a mechanism exists because it purportedly has a function that would be useful for humans. In other words, claiming the existence of a mechanism because it would be good for us to have it is not sound evolutionary reasoning.


presupposes that we only hold true beliefs, as is expected of an evolved cognitive system (see Sperber, 2001). Such a rationalistic view resonates with reductionist epistemological views of knowledge. Testimonial knowledge, the knowledge that we acquire through what other people report, has concerned philosophers and epistemologists with respect to the conditions under which this knowledge is solid and thus true. According to the so-called "reductionist" trend in epistemology (Clément, 2010; Fricker, 2006; Lackey, 2007; Sperber et al., 2010), we are entitled to accept what others tell us only if we have reasons independent of the message to believe it (Fricker, 1987; Hume, n.d.). Hence, in this view, being informed that X does not suffice to believe X, if we do not have reasons based on our own perception or prior knowledge (or any other non-testimonial knowledge) that entitle us to believe X.

For the theorists above who argue in favor of the existence of efficient mechanisms of vigilance outside the domain of epistemology, good reasons for acceptance of testimony are not necessarily perception or other non-testimonial knowledge, but rather conversational contextual factors, especially the speakers' intentions, their prior trustworthiness, their character etc. And still, the vigilant view, along the lines of reductionist epistemological views, takes it that people should believe information they receive through language only if there are reasons for them to do so, independent of the reception of the information itself. Although appeals to human rationality, defined as the quality of being based on facts and reason, seem legitimate in the context of philosophy and epistemology, rationalistic arguments are much weaker within psychological theorizing.


ways whereby we think and behave irrationally, eloquently described in Kahneman's fascinating book Thinking, Fast and Slow (Kahneman, 2011). More recently, Fiedler (2012) adopted the term metacognitive myopia to refer to an experimentally demonstrated widespread tendency we have to use "even large amounts of stimulus information, whereas [we] are naive and almost blind regarding the history and validity of the stimulus data" (p. 2). According to Fiedler, this incapacity to accurately use stimulus data becomes evident when people are presented with data that are inaccurate. Extending metacognitive myopia to the domain of language would actually predict that people may inappropriately believe linguistic stimuli. In sum, there is a large consensus among cognitive and social psychologists that, even if not always, we often demonstrate irrational behaviors. In such a context, arguing that humans have a cognitive mechanism targeted at vigilance because it would make them display rational behavior is weak, if not accompanied by empirical evidence that we actually are vigilant addressees.

Thus, the evolutionary arguments used by supporters of the vigilant view seem wanting. In the following section I will sketch some evolutionary argumentation to, this time, support the "gullible view". As evolutionary arguments at this point can only be speculative, I provide this argumentation more to show that the gullible view can be evolutionarily plausible than to really ground my claim for the truth bias in evolution. Empirical data are much more illustrative in this debate, and this is why I will try to review both those that support and those that refute my claim.


1.3.2 Ontogenesis of vigilance

There is evidence that by the 4th or 5th year of their life children can distinguish between different ways of acquiring information. Specifically, they seem to distinguish between witnessing, feeling, inferring or testimony as different sources of knowing what is in a container (Gopnik, 1988; O'Neill & Gopnik, 1991; Whitcombe & Robinson, 2000). This means that by their 4th year (but not before), children seem to be aware of the different ways in which they may come to believe something. Additionally, pre-school children are resistant to misinformation suggested to them when the source of this misinformation is unreliable (e.g. a child, a "silly" man, an evidently unknowledgeable adult; Lampinen & Smith, 1995; Robinson, Champion, & Mitchell, 1999; Welch-Ross, 1999). Not surprisingly, children's resistance to misinformation is mediated by their capacity to attribute mental states to others. Mental state attribution or "Theory of Mind" seems also to moderate preschoolers' capacity to take into account the epistemic status of an adult who is teaching them words. Children tend to learn less from a speaker who seems hesitant about the correct name of a referent, due to lack of knowledge (Sabbagh & Baldwin, 2001; Scofield & Behrend, 2008). More generally, children consistently reject testimony offered by unreliable speakers (Birch, Vauthier, & Bloom, 2008; Clément et al., 2004; Jaswal & Malone, 2007; Koenig & Harris, 2005). While Jaswal and Malone (2007) and Birch et al. (2008) report this effect already for 3-year-old children, Clément et al. (2004) and Koenig and Harris (2005) only find it in the 4th but not the 3rd year of life. Interestingly, Robinson and Nurmsoo (2009) show that children trust a speaker who is inaccurate without holding false beliefs less than one who does hold them. This demonstrates that children seem to be more vigilant towards intentionally untruthful speakers than towards speakers who are inaccurate by mistake.


children sensitive to an experimental manipulation presenting the speaker as a liar. What is more, vigilance triggered by speakers' ostensible intention to deceive is only present at the age of 6. This last piece of evidence is corroborated by a study by Vanderbilt, Liu, and Heyman (2011), who found that only 5-year-old children (but not 4- or 3-year-olds) prefer the testimony of a "helpful" vs. a "tricker" speaker. Again, complex inferences about the speaker's intentions are only employed at a later stage of development, while, once more, this vigilant behavior is itself correlated with Theory-of-Mind capacity. Thus, these results show that different triggers of vigilance appear operative at different stages of a child's development: notions of a speaker's "goodness" vs. "badness" seem to mobilize vigilance already at the age of 3, while more complex notions of epistemic status and meta-representations of intentions are employed for vigilant behavior later on.

To sum up, children can display vigilant behavior towards speakers' moral, epistemic, and intentional characteristics. Speakers' "valence" seems to be taken into account by children at an earlier stage of their development, while their epistemic and intentional characteristics seem to demand a more complete development of children's cognitive capacities, specifically their theory of mind. In any case, there seems to be a critical cognitive development between the 3rd and 4th year of life, permitting both meta-cognitive representations of information source


1.3.3 Vigilance in adults

The second main source of evidence for the vigilant view consists in experimental studies proving that adults may show signs of vigilance when processing incoming information. Vigilance is expected to operate in order to protect addressees from unreliable speakers, whether unreliable out of ignorance or, most consequentially, due to a conscious intention to mislead. To this second case belong speakers whom addressees do not know really well (and whose intentions they thus ignore), or whom addressees know to be ill-intentioned. Such speakers, in turn, are likely to elicit in addressees feelings of mistrust. Thus, a good candidate vigilance trigger is mistrust.

A series of studies have shown that eliciting a mistrustful mindset in subjects makes them adopt strategies of information processing that are not normally used. For example, Schul et al. (2004) showed that while in normal (trustful) conditions people are facilitated in the categorization of a target word when primed with a target-congruent word, when they are experimentally led into a suspicious mindset, target word categorization is facilitated by incongruent prime words. Similarly, when the solution of a task requires a non-routine strategy, distrusting participants outperform trusting ones, who, in turn, outperform the distrusting ones when the task solution requires use of a routine strategy (Schul, Mayo, & Burnstein, 2008). Further work shows that distrust, either as a dispositional personality characteristic or as an experimentally elicited mindset, blocks accessibility effects. Distrust, then, seems to lead to differential information processing compared to trust, arguably by activating concepts that are opposite to those activated under normal conditions. Although this preliminary evidence suggests that distrust could potentially change the way that incoming linguistic statements are processed and thus moderate their endorsement, to my knowledge no study has directly assessed whether distrust could lead to higher statement rejection.


puts it, "evaluation is a fundamental semantic notion and a genuine dimension of meaning" (1999, p. 2). According to this view, then, semantic as well as general knowledge will be used by an addressee when he receives a statement, in order for him to interpret and validate it. An illuminating demonstration of this aspect of statement processing comes from Hagoort, Hald, Bastiaansen, and Petersson (2004). They showed that semantic anomalies of the form Dutch trains are sour and world-knowledge violations (Dutch trains are white; in reality Dutch trains are yellow) elicit the same event-related brain potential, namely a negativity around 400 ms upon encountering the violating sentence-final word. Such effects have been found both for locally inconsistent statements, that is, statements that contradict participants' knowledge about a fictional story they read in an experimental context (Van Berkum, Hagoort, & Brown, 1998), and for "globally" inconsistent statements, as in I have a big tattoo on my back when uttered by a speaker with an upper-class British accent (Van Berkum, van den Brink, Tesink, Kos, & Hagoort, 2008).


1.3.4 Interim summary

In this section I presented the main arguments put forth to argue for the easy, automatic and routine operation of vigilance in communication and statement comprehension. First, I critically examined the relevant evolutionary arguments, suggesting that they lack a solid evolutionary basis. Additionally, I pointed out that such arguments rely on the assumption that humans are mostly rational agents whose cognitive systems are geared towards accuracy, an assumption that is not among the premises of current psychological theories. I then went on to present the relevant empirical evidence for the vigilant view. I mentioned that children display vigilant behavior to some extent, and that distrust and prior knowledge may somehow affect the processing of incoming information.

I will now go on to review the evidence pointing in the opposite direction, towards the view I am supporting in this thesis: that people have an inherent default tendency to believe statements they understand. Before doing so, some clarifications are needed. First of all, as will become clear later in the thesis, the fact that I argue for an inherent tendency to believe does not mean that I reject the idea that vigilance mechanisms exist. What I claim, though, is that there is no incontestable evidence that vigilance efficiently protects people from misinformation, as many may think. In other words, it may be the case that distrust or prior knowledge lead to different information processing, but, as I suggested, this differential processing does not guarantee that (mis)information is eventually filtered out.

Secondly, I do not reject the dynamic, context-sensitive psycholinguistic view of language comprehension described above. I do believe that we are very sensitive to the specific context of utterance interpretation. But I argue that, on top of that, or independently of it, language comprehension is intrinsically characterized by a truth bias.

1.4 The truth bias


with the first one, has to do with evolutionary considerations. Last but most importantly, a rich body of experimental literature has identified several phenomena that can be accounted for by assuming that linguistic communication is truth-biased. I review the three axes in what follows.

1.4.1 Belief and Language

Hume's definition of belief, cited above, makes an important and relevant distinction between ideas and the strong feeling of force and vivacity one may have about ideas, a feeling indicating belief. This distinction between the "object" to be believed, on the one hand, and belief, on the other, is present in much of the theorizing on belief (Bogdan, 1986; Gilbert, 1991). Braithwaite (1932) refers to the distinction between a proposition and "the relation in which the proposition stands to a mind cognizing it" (p. 129). Along similar lines, Bergstrom et al. (2006) recognize the importance of distinguishing between "processing some information and holding it as true" for "models of pragmatics and cognitive development" (p. 533). In such models, then, propositions are represented in the mind accompanied by a truth-value tag.


Gilbert, however, questions whether the human mind is psychologically able to suspend belief upon understanding a statement. Borrowing from the philosophy of Baruch Spinoza, he argues that comprehension and belief are one and the same thing. Spinoza actually challenged Descartes' view that will and intellect are distinct, noting that "(...) a particular volition and a particular idea are one and the same; therefore, will and understanding are one and the same" (Spinoza, 2001, p. 21). Gilbert translated this Spinozean theorizing into a psychological model of statement comprehension whereby believing a statement is entailed by the process of understanding it.

Gilbert and his colleagues set up a series of psychological experiments trying to provide evidence for this psychological model. We will review these experiments in the next chapter, as the experimental part of this thesis is largely based on their seminal work. For now, I would like to elaborate some more on this Spinozean idea, to assess its plausibility as a psychological model, along the lines suggested by Gilbert. Gilbert (1991) uses the tagging system mentioned above to describe the differences between the Cartesian and Spinozean accounts. He describes statement validation along Cartesian lines as consisting of, first, understanding a statement without any evaluative attitude, and then applying the tag "true" or "false", according to the evidence available at the moment of comprehension. On the other hand, the Spinozean account that Gilbert proposed predicts that all statements we understand are at first untagged, while only false statements are tagged as false, given reasons for disbelieving. This, however, Gilbert argues, requires sufficient cognitive capacity. Thus, if cognitive resources are not available, any incoming statement will necessarily remain untagged and thus be considered "true".
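To make this contrast concrete, here is a minimal sketch, in Python, of the two tagging accounts as just described. The representation format, the binary evidence check and the resource flag are illustrative assumptions of mine; none of these names come from Gilbert (1991).

    def cartesian_comprehend(statement, seems_false, has_resources):
        # Cartesian account: comprehension yields an unevaluated idea;
        # a truth tag is applied only in a later, separate assessment step.
        rep = {"content": statement, "tag": None}  # merely entertained
        if has_resources:
            rep["tag"] = "false" if seems_false else "true"
        return rep  # under load, the idea stays untagged and unbelieved

    def spinozan_comprehend(statement, seems_false, has_resources):
        # Spinozan account (Gilbert, 1991): understanding entails acceptance;
        # tagging a statement as false is a separate, effortful step.
        rep = {"content": statement, "tag": "true"}  # untagged functions as true
        if has_resources and seems_false:
            rep["tag"] = "false"  # effortful "unacceptance"
        return rep  # under load, even a false statement is kept as "true"

The behavioral prediction separating the two accounts is what happens under cognitive load: on the Cartesian sketch a distracted comprehender holds an unevaluated idea, whereas on the Spinozan one she holds a believed one.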


we understand –even if only initially.

The account I wish to develop here shares many of the considerations just sketched but is slightly different from Gilbert’s psychological model. I will develop it in the next section, along with some evolutionary considerations that grant it additional plausibility.

Some contemplation about evolution

Besides Spinoza, the Scottish philosopher Thomas Reid also supported the "gullible view", with an argument that has an obvious evolutionary counterpart. According to Reid (1895), two principles govern communication: the principle of credulity, which refers to the addressee's tendency to believe what he hears; and the principle of veracity, referring to the speaker's tendency to speak truthfully. Thomas Reid is seen by many to have inaugurated the "non-reductionist" view of testimony (information we receive from others), which, contrary to the reductionist view described above, justifies belief in testimony in its own right, without extra non-testimonial reasons (e.g. McDowell, 1994; Coady, 1994; Goldman, 2000; Weiner, 2003). In this sense, Reid equates language with other means of information acquisition, such as perception, which, for reductionists, was the only reliable source of knowledge.

This view, that language resembles perception, has been defended more recently by philosophers of language and linguists. According to such a Direct Perception Model (DPM), acquiring hearsay information parallels visual perception in that it is automatic, just like the retrieval of information from visual perception (Kissine, 2013; Kissine & Klein, 2013; Millikan, 2005; Recanati, 2002). This model opposes inferential models of communication (e.g. Sperber and Wilson, 1995). According to the inferentialists, statement interpretation is accomplished through complex inferences that the addressees make about what the speakers intend to mean in each specific conversational context. Contrary to such a view, the DPM predicts that in most situations addressees automatically retrieve the contents of statements they understand, just as they perceive the existence of the things they see.


sense that believing statement contents is automatic, the DPM does not preclude the existence and operation of vigilance mechanisms under specific conditions. Such vigilance mechanisms can then block the direct perception process in some cases, and thus block believing an understood statement. Yet, such precocious evaluation is not the organism's preference. It does require more processing, as Gilbert assumed, and, for this reason, is avoided unless deemed a priori absolutely necessary. Thus, the DPM makes more or less the same behavioral predictions as the model proposed by Gilbert, in that it predicts the operation of a truth bias: unless specific boundary conditions are met, addressees tend to believe statements they encounter. Additionally, given the high cognitive cost of being vigilant, the DPM will be dominant even in cases where it should not be, with the effect that addressees will many times show signs of believing statements they should not. At the same time, the DPM view I propose here is more in line with parallel processing models of cognition (e.g. Rumelhart & McClelland, 1986), putting forth a more dynamic view of the language comprehension process than that assumed by Gilbert.


Of course, throughout human evolution, encounters with intentional cheaters, common also in primates (de Waal, 1992), or with speakers who inadvertently provide inaccurate information, have been present. In this case, the DPM comes at a cost, as the addressee of a wrong statement will end up believing something that is false, with more or less serious repercussions. Thus, it is likely that humans have developed specific vigilance mechanisms aimed at protection from misinformation, targeted for example at detecting dishonest speakers or inconsistent statements. However, the kind of mechanisms proposed by Kissine and Klein (2013), which I also defend here, are quite different from the ones proposed by the proponents of the vigilant view (Schul et al., 2004; Sperber et al., 2010). For, if vigilance mechanisms exist, it is most likely because they constitute adaptations supplementing an inherently believing comprehension mechanism, in a classical case of arms-race evolution (Kissine & Klein, 2013; Krebs & Davies, 1984): the emergence of language renders speakers capable of deception, and addressees try to protect themselves by developing vigilance mechanisms. Actually, as Kissine and Klein (2013) remark, without the addressees' tendency to believe linguistic statements, the environmental pressure necessary for the development of mechanisms specific to vigilance would be absent.

Against this background, it actually seems quite plausible that the language interpretation mechanism is truth-biased, that is, biased towards committing type I errors. Given that language emerged in a general context of cooperation, implying that the majority of statements in an addressee's environment will be true, it would be more economical for addressees to directly perceive the contents of statements they receive from their conspecifics, without applying the effortful evaluative processing presupposed by vigilance. Of course, this scenario entails more false positives (believing a statement that is false) than false negatives (not believing or rejecting a statement that is true). Nevertheless, this should not be very problematic, since false statements are expected to be rare, and thus the false positives few.
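To see why such a truth-biased default could be the more economical policy, consider a toy expected-cost comparison. All the numbers below are illustrative assumptions of mine, not estimates from the literature; the point is only that when true statements dominate, indiscriminate belief can be cheaper than effortful evaluation.

    # Toy comparison of "believe everything" vs. "evaluate everything";
    # every number here is an illustrative assumption.
    p_false = 0.1    # proportion of false statements in the environment
    cost_fp = 1.0    # cost of believing a false statement (false positive)
    cost_fn = 1.0    # cost of rejecting a true statement (false negative)
    cost_eval = 0.2  # per-statement cost of effortful evaluation
    accuracy = 0.8   # probability that evaluation reaches the right verdict

    # Policy 1: the truth-biased default, i.e. accept every statement.
    cost_believe_all = p_false * cost_fp  # = 0.10

    # Policy 2: vigilantly evaluate every statement before accepting it.
    cost_evaluate_all = cost_eval + (1 - accuracy) * (
        (1 - p_false) * cost_fn + p_false * cost_fp)  # = 0.40

    print(cost_believe_all, cost_evaluate_all)

On these assumed numbers, blanket belief loses only when false statements become much more common or much more costly, which is precisely the arms-race scenario sketched above.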


Galperin, 2012; Haselton, Nettle, & Andrews, 2005; D. D. P. Johnson, Blumstein, Fowler, & Haselton, 2013). Haselton and Buss (2000) have identified men's Sexual Overperception, their tendency to overrate women's interest in them, as an evolutionarily adaptive bias: while a false positive will only lead to wasted time chasing an uninterested female, a false negative may lead to missing an opportunity for reproduction. In our case, not believing an incoming statement is more likely to be erroneous than accepting it. It is thus natural that people be biased towards believing statements they understand. Along roughly similar lines, Reber and Unkelbach (2010) have argued that the illusory-truth effect, the tendency to judge a previously encountered statement as truer than a new one, is ecologically valid in a non-orthogonal paradigm where true statements outnumber false ones: if encountered statements are more likely true than false, then people are, generally speaking, entitled to judge familiar statements as true.


1.4.2 Experimental evidence for the DPM

While the general claim that people have a propensity to believe statements they encounter may sound exaggerated to linguists and philosophers, it likely sounds like a truism to psychologists. Many research traditions in psychology in the past decades have identified a range of phenomena suggesting the operation of a general truth bias, in a wide range of experimental settings and paradigms. I will review them in the next paragraphs, as instantiations of the DPM, and then I will explain how the studies set out in the next two chapters contribute to this voluminous literature.

Suggestibility. The oldest demonstration of a general information suggestibility comes from Forer's (1949) attempt to avert therapists' erroneous reliance on information self-reported by their patients. Being critical of self-report measuring instruments used by clinicians, Forer tried to show that patients are likely to acquiesce to any suggestion made by the therapist. He set up a very simple experiment. After distributing a diagnostic questionnaire to his psychology students, he came back to the classroom with a personalized character description for each of them, ostensibly based on their answers to the questionnaire. Forer tricked his students, as the description he gave to each of them was the same broad description, intentionally constructed to be representative of any psychology student. Nonetheless, his students almost unanimously reported that the description was accurate of themselves, and positively evaluated the questionnaire's assessment. Thus Forer's students were highly suggestible and acquiescent to the information he provided them with. As we will see next, analogous phenomena can be found in a variety of contexts, even in the absence of a power relation between the provider of the information (i.e. Dr. Forer) and the receiver (i.e. his students).

Statement – Picture comparison. Studies on the way participants process statements


images with a cross and a star, the one above/below the other. Participants had to press one button if the statement was correct and another if the statement was incorrect of the image. One of their main findings was that participants responded much more quickly if a statement was true of an image than if it was false. In fact, Clark and Chase integrated this tendency into their model:

As an indication of whether the sentence is true or false of the picture, this index [participants' response] is initially set at true, under the supposition that the sentence is true unless there is evidence to the contrary (...) (Clark and Chase, 1972, p. 479)

Gough (1965) also reports that true statements (with respect to an image) are validated faster than false ones, and that affirmative statements are validated faster than negative ones. Thus it seems that, other things being equal, people a priori assume that incoming statements are true rather than false.
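The response-index logic quoted above can be rendered as a small sketch. The one-feature representation, the timing parameters and the function name are illustrative assumptions of mine; only the initialize-to-true, flip-on-mismatch logic comes from the quoted passage.

    # Minimal sketch of the quoted verification logic: the truth index starts
    # at True, and each sentence-picture mismatch both flips the index and
    # adds comparison time, so "true" responses come out faster.
    def verify(sentence_code, picture_code, base_ms=400, mismatch_ms=200):
        index = True                    # "initially set at true"
        latency = base_ms
        for feature, value in sentence_code.items():
            if picture_code.get(feature) != value:
                index = not index       # a mismatch flips the truth index...
                latency += mismatch_ms  # ...and costs extra processing time
        return index, latency

    # "The star is above the cross", checked against two pictures:
    print(verify({"above": "star"}, {"above": "star"}))   # (True, 400): faster
    print(verify({"above": "star"}, {"above": "cross"}))  # (False, 600): slower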

The hindsight bias. This is one of my favorite and most ecological indications of how easily


What this line of research shows is that people actually automatically integrate information they receive with other prior general knowledge, which may fundamentally alter the judgments they make, and even the way they perceive their prior state of knowledge. Even if participants are aware of this bias and are explicitly instructed to counter it, they cannot prevent its impact (Fischhoff, 1977; Guilbault, Bryant, Brockway, & Posavac, 2004; Wood, 1978). Fischhoff concluded "that upon receipt of outcome knowledge, judges immediately assimilate it with what they already know about the event in question." (1975, p. 297).

Witness suggestibility to misinformation. Elizabeth Loftus has initiated a long tradition of studies showing that people too easily assimilate information they read, even if it contradicts their own witnessing experience (see Loftus, 2005, for a review). This line of misinformation studies demonstrates the repercussions of the truth bias in real-life contexts. In many legal systems across the globe, witnessing is a crucial aspect of judicial processes: people possessing information about a specific case or event are invited to present themselves in court in order to help clarify "what really happened". In short, witnesses' claims often play an important role in courts' final decisions.


number of demonstrators, who were actually eight. Thus, interpreting the question they were asked required participants to presuppose or "accept" that the demonstrators were as many as indicated by the question's presupposition. Interestingly, the presupposition mechanism was still evident one week later, when the participants came back to the laboratory and were asked how many demonstrators they had seen entering the classroom. On average, those asked about twelve demonstrators answered that there were 8.85 demonstrators, while participants asked about four demonstrators answered that there were 6.40 demonstrators. In a similar study, Loftus and Zanni (1975, Experiment 1) asked participants to watch a video displaying a car that violates a stop sign and turns right into a main street, causing a car collision. Participants later asked a question presupposing the existence of the stop sign remembered, by 18% more, that the stop sign existed, compared to those who were not asked the question.

Yet, probably the most impressive demonstration of the presupposition effects is that people can "remember" things they never actually saw. In Loftus and Zanni (1975, Experiment 3), participants watched another car accident. While there was no barn in the video, 17.3% of participants who were later asked How fast was the white sports car going when it passed the barn while traveling along the country road? remembered seeing a barn. Note that only 2.7% of participants who were not asked a question presupposing the existence of the nonexistent barn remembered seeing one. This result corroborates and extends the finding of the previous study. Here participants were not simply biased in their answers by a false presupposition, but were further led to remember having seen a barn they actually had not.


question groups (stop vs. yield sign question), resulting in an incorrect vs. correct presupposition condition. Interestingly, the effects of presupposition were now tested by means of a forced-choice recognition test, whereby participants had to choose the original slide from the pair of the stop sign and the yield sign slides. The effect was again dramatic: while 75% of the participants in the correct presupposition condition correctly identified the slide of the critical pair they had actually seen, the incorrect presupposition group was accurate only 41% of the time. Thus, not only do participants automatically integrate information presupposed by questions they are asked into their background knowledge, but this information is largely integrated into their visual representation of the event. This modality-independent presupposition effect strongly corroborates the view that language operates as a direct perception mechanism.

The “Illusory truth” effect. Another straightforward demonstration of a tendency to


& Stahl, 2009).

Begg, Armour, and Kerr (1985) unravelled another aspect of the illusory-truth effect, and extended its scope. They showed that old statements can be perceived as truer than new statements not only if their content is familiar, but even if merely their topic is. While the impact of familiar topics is weaker compared to that of familiar contents, Begg et al.'s (1985) results indicate that information in the human mind is organized in terms of networks of coherent schemata and narratives. This effect gives an additional perspective on the DPM, by showing that people need not directly perceive statements in order to tend to believe them. Rather, if a representation is stored, any statement that activates this representation will also look truer. This corroborates the force of linguistically transmitted information, especially in the domains of advertising and propaganda: a topic or product need only be familiar, and every statement disseminated about it will become more and more believable. Indeed, Arkes, Hackett, and Boehm (1989) demonstrate the generality of the truth effect, by showing that it holds for trivia as well as opinion statements. Another very interesting implication that Arkes et al. put forth is that the effect operates regardless of whether participants judge the statements as true or false in the first place.


run is hard and probably requires a strong trigger in the conversational environment.

Lie and deception detection. Last but not least, empirical evidence that we intrinsically believe statements comes from research on lie detection. Ekman and O'Sullivan (1991) tested the capacity of different professional and social groups to detect lying. Members of law enforcement personnel, such as US Secret Service agents, policemen, judges, psychologists, other working adults and students watched 10 people either lying or being honest about their feelings. Only the Secret Service agents performed better than chance (64%). In general, it is a widely accepted assumption in the lie detection literature that we are poor at discriminating lies from truths, and even if this discrimination is above chance, it is mainly driven by accurate truth detection (61%) rather than accuracy in detecting lies (47%) (Bond & DePaulo, 2006; ten Brinke, Stimson, & Carney, 2014). This strand of literature has thus concluded that there exists a truth bias, a tendency for perceivers to trust communicators, which has even been claimed to be due to a social norm dictating that we should trust others because it is offensive not to, even with people we are not previously acquainted with (Dunning, Anderson, Schlösser, Ehlebracht, & Fetchenhauer, 2014).


and social encounters.

1.4.3 Interim summary

In this section I claimed that understanding statements is automatic in the way visual perception is. I argued that this entails an intrinsic tendency to believe statements we understand, even though vigilance mechanisms do exist. As the tendency to believe is likely phylogenetically and ontogenetically prior to any mechanism of vigilance, the pendulum tilts towards truth bias rather than vigilance. In this last sub-section, I provided consistent experimental evidence suggesting that people tend to believe statements they read and hear, tend to remember them as true, and consider speakers as truthful. I now ask your attention for some more technical aspects. In the next section, I will motivate the choice of our paradigm, in view of our central question, and in view of the gaps that the extant literature leaves with respect to this question.

1.5 Some notes on the truth bias and how to measure it

1.5.1 Linguistic constraints.

clearly have either of the two possible truth-values: it can be either true or false. Imagine now that You leave tomorrow morning is uttered by a landlord who is fed up with his tenants' incapacity or unwillingness to regularly pay the rent. In this case, the statement is to be taken as an order. Note, however, that, as such, the statement cannot be assigned a clear truth-value, as the realization of its content, i.e. the tenant leaving the house, depends only on what the tenant himself will do the next morning. To be sure, a hearer could well doubt the veracity or strength of an order. If, for example, the order You leave tomorrow morning is uttered on the spur of the moment by one of two romantic partners during a ferocious fight, both the utterer and the addressee might deep down know that the order was not to be taken at face value. But the important thing to keep from this example is that in the former case the illocutionary function of the utterance is to provide the addressee with information about the world, while in the latter it is to directly steer his actions.

1.5.2 Choosing the measures

To the best of my knowledge, the literature that has directly or indirectly assessed a tendency to believe statements has failed to explicitly discuss some intrinsic complications of this focal question. The first question is how to measure belief. This may seem like a trivial question, but it is not: it presupposes a spelled-out, systematic definition of belief. As far as I know, no such definition exists, either in psychology or in philosophy. Psychological research on belief covers a wide range of more specific topics, from religious beliefs to conspiracy theories, and from biases in lie detection to the illusory-truth effect discussed in the previous section. There is thus no consensus on how best to measure the truth bias in linguistic communication.

This is the reason why, studying belief specifically in the context of linguistic communication, at the beginning of this introduction I tentatively defined it in the somewhat lay terms of a feeling towards statements with consequences on one's thought and behavior. However, I purposefully left the term feeling undefined. I will leave this definition relatively open in formal terms, by inviting you to fill the referential gap with the subjective feeling that you have when you believe that a statement is true.

might be overridden by vigilance (cf. section 1.4.1 above). Another solution would be to explicitly inform participants that some statements are true and some statements are false, along the lines of Begg et al. (1992) and Henkel and Mattson (2011). Now, it would be really awkward to explicitly inform participants that a statement is false and then directly ask them whether they believe it. A solution to this second caveat is to rely on participants' memory, by presenting them with many true and false statements and then testing whether they tend to classify them as true. According to the DPM, participants should display a strong tendency to misremember statements as true, even those labeled as false in the learning phase.
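To make this design concrete, here is a minimal sketch of the paradigm's logic (the statements, probabilities, and function names below are hypothetical placeholders, not our actual material or a model of memory):

```python
import random

random.seed(1)

# Hypothetical study list: each statement is explicitly labeled
# true or false during the learning phase.
study_list = [(f"statement {i}", "true" if i < 20 else "false")
              for i in range(40)]
random.shuffle(study_list)

P_REMEMBER = 0.7      # chance of recalling the studied label
P_DEFAULT_TRUE = 0.8  # guessing bias towards "true" when memory fails

def respond(label):
    """Toy responder: recalls the label, or defaults towards 'true'."""
    if random.random() < P_REMEMBER:
        return label
    return "true" if random.random() < P_DEFAULT_TRUE else "false"

errors = {"false_as_true": 0, "true_as_false": 0}
for _, label in study_list:
    answer = respond(label)
    if answer != label:
        errors[f"{label}_as_{answer}"] += 1

# Predicted asymmetry under the DPM: false_as_true > true_as_false.
print(errors)
```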

The illusory truth effect already provides some evidence for a tendency to believe incoming statements. However, this line of research relies on isolated trivia statements, and such studies are often presented to participants as aimed at assessing university students' trivia knowledge. In such a context participants do not need to be vigilant or to filter the statements they encounter. And given that most statements encountered in real life are generally truthful, it is not so impressive that reading a statement in the illusory-truth paradigms increases its perceived truth. Although this finding is fully predicted by the DPM, since in neutral situations we should generally believe incoming information, it is only a necessary, not a sufficient, condition for the DPM to hold. Remember that the DPM predicts that the tendency to believe statements spills over to conditions where addressees should normally show disbelief. Thus, in order to really back up this model of communication, we need a context where participants should be vigilant towards incoming information.

all, affirmatively biased statements were later rated as truer than negatively biased statements, which in turn were rated as truer than new statements.

Similarly, Begg et al. (1992) paired each of a series of trivia statements with explicitly truthful or untruthful sources. After an initial presentation phase, participants completed a test in which they were asked to rate each statement's truth and state its source (new, true, or false). They found a pattern similar to the one above: statements from a truthful source were rated as the truest, yet statements from an untruthful source were still judged truer than new ones. Additionally, discrimination between the two sources (true vs. false) was moderate. In fact, when the source-discounting information came only after the statements and their sources had been encoded, participants could not reliably discriminate between the accurate and the inaccurate sources. They thus displayed a pattern of rated truth in which statements coming from truthful sources were judged as equally true as statements coming from untruthful ones.

These studies do provide a somewhat ecological context in which participants are expected to display some degree of vigilance, due to the negative biases or to the explicit information about the sources' truthfulness. In such conditions, participants seem to take the source into account, since, generally speaking, they judge statements presented as true as truer than statements presented as false. But is that difference meaningful? In these studies the truth effect was measured on a scale, which means that the difference in truth ratings between true and false statements is not easily interpretable with respect to our predictions. To clarify, the finding that false statements are judged truer than new ones but less true than true ones is open to motivated interpretations. As a proponent of the DPM, I could argue that participants display a tendency to believe despite information on the sources' truthfulness, since they believe false statements more than new ones. On the contrary, an advocate of the vigilance view could argue that participants exerted some degree of vigilance, since they believed false statements less than true ones. Thus, a more straightforward measure is needed to positively establish a truth bias.

hard to decide, given that participants still classified 40% of the new statements as true. As I argued above, in order to overcome the caveat posed by the fact that, generally speaking, we are entitled to believe the statements we encounter, we need, in line with the experiments above, to explicitly inform participants of the incoming statements' truth-value and then test whether they believe them or not. However, we need a measure that is more easily interpretable than the one I just described. To my mind, such a measure resides in participants' errors. A good way of deciding whether the number of false statements remembered as true is sufficient to infer a tendency to believe is to compare it to the number of true statements remembered as false. Comparing error rates is an accurate proxy, as it controls for general memory capacity by taking overall error patterns into account.
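Concretely, the comparison boils down to contrasting the two off-diagonal cells of a confusion matrix crossing the studied label with the remembered label. A minimal sketch, with counts invented purely for illustration:

```python
# Hypothetical response counts (studied label, remembered label);
# the numbers are invented solely to illustrate the computation.
confusion = {
    ("true", "true"): 15, ("true", "false"): 5,
    ("false", "true"): 11, ("false", "false"): 9,
}

# Error rate per studied label: misclassifications / items studied.
false_as_true = confusion[("false", "true")] / (
    confusion[("false", "true")] + confusion[("false", "false")])  # 0.55
true_as_false = confusion[("true", "false")] / (
    confusion[("true", "true")] + confusion[("true", "false")])    # 0.25

# A positive difference signals a truth bias over and above
# general memory (in)accuracy.
truth_bias = false_as_true - true_as_false                          # 0.30
print(f"false->true: {false_as_true:.2f}, "
      f"true->false: {true_as_false:.2f}, bias: {truth_bias:.2f}")
```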

At the same time, in order for the comparison of error rates between true and false statements to be a reliable and informative measure of the truth bias, we should assess it in a context where it is reasonable for participants to show disbelief. This means creating an ecological environment where participants should, for some reason, be vigilant towards the statements they receive. In this respect, simple negative biases (e.g. Begg & Armour, 1991) may not monotonically lead to heightened disbelief towards the biased statements, as the fact that others do not believe a statement does not necessarily constitute a sufficient condition for not believing it. Similarly, Henkel and Mattson (2011) tested the effects of the source on the illusory truth effect by informing participants that one source was highly accurate and reliable while the other was not so accurate, so that some of the facts they read in it might not be reliable. While Henkel and Mattson (2011) justify this decision by highlighting that it is very rare that all information coming from a source is false, given this sort of source framing it is not surprising that source reliability did not affect the truth effect in that study. In order to make a strong case in favor of the DPM, we have to show that it holds when participants should not believe the (false) statements they receive. This is why, in our studies, we presented all information coming from one source as false. If anything, one case where people are not expected to believe a statement is when they know for sure that it is false.

the illusory-truth-effect paradigms, even those that manipulate the statement source, the statements participants read are not consequential. In that specific experimental context the information participants read has no relevance: the material consists of a list of unrelated trivia statements. At this point I would like to remind you of the second aspect of belief included in my definition, one that I have intentionally left out of my discussion so far: the consequences that beliefs have on one's thinking and behavior. The importance of the DPM, apart from psychological considerations related to the processing of the statements we encounter, largely lies in its societal relevance. A tendency to believe statements is important for societies because it implies that the information we retrieve from statements can easily impact our thoughts and our actions in the world. Thus, it is crucial that the measures of belief in our studies also address this aspect. This is why, as you will see in detail in the following chapter, we did not merely rely on participants' explicit memory of the statements' truth-value. In addition to the memory measures, we included participants' judgments. Our statements were designed in such a way that the impact of false statements, if participants believed them, could be visible in their judgments.

truth bias: false statements should be more often confounded with true ones than true statements with false ones. The first chapter provides a series of studies testing these predicted effects.

Once we had established these findings confirming the DPM, we wanted to test its boundary conditions. As I have already highlighted, the existence of the DPM does not refute the effective operation of vigilance under specific conditions. The studies described in Chapter 2 reflect our attempts to unravel the operation of vigilance. Specifically, in the paradigm described above, we added factors that, based on previous research or common sense, are expected to render participants more vigilant. We avoided manipulations that have to do with participants' background knowledge as a potential filtering mechanism for incoming statements. There has already been research on this topic, some of which I describe in Chapter 2, that either confirms (Isberner & Richter, 2014; Richter, Schroeder, & Wöhrmann, 2009) or refutes (Fazio et al., 2015; Wiswede, Koranyi, Müller, Langner, & Rothermund, 2013) the claim that incoming information is automatically and efficiently checked against background knowledge. These conflicting results already suggest that the debate in this field is still unsettled. Yet I argue that such a potential filtering mechanism is not necessarily indicative of a language-specific vigilance mechanism. Consistency seeking is a widely accepted goal of human cognition and a concept underlying the theory of cognitive dissonance (Festinger, 1962; Festinger & Maccoby, 1964). From this perspective, consistency seeking applies to the totality of human cognitive states, including attitudes, beliefs, and behavior. Thus, the fact that people may sometimes not believe statements they encounter because these blatantly contradict prior knowledge is not indicative of a vigilance mechanism evaluating linguistic stimuli, but of human consistency seeking generally speaking. Although visual information is arguably questioned less readily, under specific conditions the exact same process is expected with incoming visual information.

previous knowledge, will make you conclude that these must have been the keys of your partner, who happens to have the same key holder as you do. The mechanism through which you reject the belief that your keys are on the table is the same mechanism that makes us refute linguistically transmitted information that opposes our knowledge. This mechanism is modality independent and language independent.


Chapter 2

An Experimental Investigation of the Truth Bias

2.1 Introduction

I propose a more nuanced version of this idea. As we saw in the previous chapter, there is evidence that, at least to some extent, people contest the content of statements they understand upon comprehension, if these oppose background knowledge (e.g. Hagoort et al., 2004; Van Berkum et al., 1998, 2008). Similarly, people under mistrust seem to summon alternative processing strategies, undermining the possibility that we are merely "gullible" creatures uncritically swallowing incoming information (Schul et al., 2004, 2008). Pace Gilbert et al. (1990, 1993), I propose that comprehension does not always pass through a stage of believing, as vigilance mechanisms may countervail statements' content upon comprehension. Nevertheless, where I do agree with Gilbert et al. is in assigning a privileged role to believing over rejecting statements, as predicted by the DPM. To be entirely clear, although I predict that in specific situations statement endorsement is not automatic, I also predict that people are more likely to believe than to disbelieve a statement they understand, as the conditions causing people to reject statements, such as prior knowledge, are by no means ubiquitous or commensurate with the conditions that lead to endorsement of incoming information. In sum, while, contrary to what Gilbert et al. predicted, vigilance may not always operate a posteriori, in many situations it is unlikely to be strong enough to counter the truth-bias effects predicted by the DPM.

2.1.1 Our studies
