

6.9. Discussing the study

The current thesis focuses on human interaction mediated through computer technology. The socio-technical system was tested in a field trial in April 2002 in Sweden and Norway. In this section, a few crucial issues regarding the study are discussed.

6.9.1. Methodological issues

The quest for reliable and valid results is a general concern in scientific research.

However, as mentioned in section 5.1, these requirements do not apply in quite the same way within qualitative approaches as they do in quantitative research. It lies in the nature of the qualitative case study that one does not pursue results thought to be valid and reliable across large populations and many different situations.

Rather, the goal is less ambitious: one typically strives for objectivity by describing social interaction in ways that can be subjected to empirical testing (Peräkylä, 1997).

Consequently, the question is: How trustworthy are the findings presented in the current thesis?

Reliability

As described in the Method chapter, the data sampled for this study consist almost entirely of videotaped or computer-logged material. As a consequence, other researchers may re-examine the findings described. This opportunity is rather uncommon compared with typical ethnographic studies, which are often impossible to repeat. Still, seeking to solve the reliability issues by taping introduces new problems, as going from the taped material to analytic findings is a far from straightforward task. Peräkylä (1997) points out that capturing social interaction on videotape typically introduces problems with inclusiveness. One must, for instance, acknowledge that it is practically impossible to fully capture interactions with longer temporal spans and events that are ambulant in nature. Other questions that need to be assessed are how much to record in the first place, the technical quality of the recordings, and the adequacy of the transcripts (ibid.).

Jordan and Henderson (1995) also point to the fact that videotapes, as a representation of reality, should be questioned: “In analysing a tape, we are then dealing with a transformation of that world, and not simply with an objective, faithful re-presentation” (p. 19, italics in original). They further point to how the operator of the camera determines what is visible and audible, and what is not. In addition, the technology has limitations in itself, as it is far more restricted than the human sensory apparatus.

In the current study, the interactions take place in a synthetic environment that is isolated, controlled, and predictable regarding temporal aspects. The tasks undertaken by the teams are clearly stated, time is limited, and there is only one room where everything happens. In the virtual ER, all interactions observable to the whole team are also captured on videotape. Hence inclusiveness and the limitations of the technology can be considered a minor problem. Intuitively, the limited planes of action in the synthetic environment make it easier to claim that reliability issues represent less of a challenge than they would in a natural, and thus richer, environment.

Validity

The concept of validity in qualitative studies concerns the quality of interpretations, or as Silverman (2000) puts it, “[…] another word for truth” (p. 175). Regarding validity, I claim that the findings are made trustworthy chiefly by making the process of analysis visible, thereby hopefully providing what is labelled ‘apparent validity’ (Kirk & Miller, 1986). In practice, this is done both by describing the steps of the analysis and by including transcript excerpts. However, there is always the problem of ‘anecdotalism’ in such matters. Referring to qualitative researchers in general, Silverman asks: “How are they to convince themselves (and their audience) that their ‘findings’ are genuinely based on critical investigation of all their data and do not depend on a few well-chosen ‘examples’?” (2000, p. 176). A common solution is to suggest method and data triangulation and/or respondent validation. However, such solutions cannot guarantee validity. Triangulation attempts to get a ‘true fix’ on a situation, an assumption that is basically incompatible with qualitative research. Moreover, respondent validation becomes a problem if one assumes that respondents have privileged status as commentators on their own actions (ibid.).

In an attempt to improve the quality of the interpretations, a certain style of working was adopted in the course of the analysis. Following the recommendations of interaction analysis (IA), we jointly viewed tapes, transcripts, and log-files. For instance, Jordan and Henderson (1995) state that some biases on the part of individual analysts are challenged by group work undertaken in the IA-lab, as “[c]ollaborative viewing is particularly powerful for neutralizing preconceived notions on the part of researchers, and discourages the tendency to see in the interaction what one is conditioned to see or even wants to see” (p. 9). Additionally, one of the main ideas of IA is that assertions about what is happening on the tape should be grounded in the materials at hand, thereby avoiding the temptation to engage in ungrounded speculation (ibid.).

Ultimately, as it is practice that matters, validity cannot be assured by theoretical accounting alone. In other words, it is you, the reader, who must draw the bottom line by choosing to accept or reject the claims put forward.

Chapter summary

In the current chapter I have attempted to show how the process of analysis was conducted. I have sought to explain the findings of unexpected sequence errors based on the data-log analysis. Through a detailed analysis of the interactions, I have found that the participants encountered certain difficulties in interpreting the application.

In the following chapter I turn to a more general discussion of the MATADOR project and application. In addition, as an ethnographic ‘gesture’, I suggest a few ways in which the application might be improved, and finally a conclusion is drawn.
