
Samples: How Many Interviews and with Whom?

In the document THE GUILFORD PRESS e-book (Pages 70–78)

Judgment sampling, also known as purposive sampling, is something of an embarrassment in survey research. But, as long as you specify the criteria used to make the judgments in judgment sampling or explain your purposes in purposive sampling, these approaches are ideal in interview research. Many sampling issues are settled at the design stage. However, in interviews it is common to use leads from one interviewee to learn of other people to contact for later interviews.15 Of course, haphazard, casual, and catch-as-catch-can sampling are just as inadvisable in interviews as in surveys. A researcher will surely learn something, even from participants chosen thoughtlessly, but it will be hard to discern and convincingly portray what has been learned. The main point about sampling when coding interview data is to code for any differences in how interviewees were selected. Did you begin by interviewing a few people you knew, perhaps friends, who were appropriate for the study? Were some interviewees recruited by advertisements asking for volunteers? Were later participants found by referrals from earlier ones? Were any or all of them paid for their efforts? You should code all sources and types of participants. It may be that interviewees who were friends, volunteers, referrals, or paid recruits all replied to your questions in similar ways. But perhaps they did not; it is important to know. So check for and code any differences.

Are your interviews repeated with some or all participants, or are they one-time events? Particularly interesting coding issues occur with multiple interviews. Say you have conducted three interviews with each participant: one introductory interview to set the context with each interviewee and two follow-ups. Initially, your categorization and reading and coding of the transcripts will probably be by interviewee. But it can also be instructive to categorize by stage in the interview cycle; read together and code the set of second and the set of third interview transcripts. Reading and coding by stage is definitely worth exploring. And it is greatly facilitated by having your data indexed in grid-like formats, with each interviewee occupying a line and each attribute of the interviewees and of the interview format, context, and so on occupying the columns.

(See Chapter 4 for illustrations of grids and matrices.)
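As a rough sketch of such a grid-like index (all names, fields, and file names here are invented for illustration, not prescribed by the text), one master list of per-interview records can be read either by interviewee or by stage in the interview cycle:

```python
# A toy index of interview transcripts: one record per interview session.
# The field names (interviewee, stage, recruited_via, transcript) are
# hypothetical; the point is that one index supports both readings.
from collections import defaultdict

interviews = [
    {"interviewee": "P01", "stage": 1, "recruited_via": "friend",   "transcript": "p01_s1.txt"},
    {"interviewee": "P01", "stage": 2, "recruited_via": "friend",   "transcript": "p01_s2.txt"},
    {"interviewee": "P01", "stage": 3, "recruited_via": "friend",   "transcript": "p01_s3.txt"},
    {"interviewee": "P02", "stage": 1, "recruited_via": "referral", "transcript": "p02_s1.txt"},
    {"interviewee": "P02", "stage": 2, "recruited_via": "referral", "transcript": "p02_s2.txt"},
]

def group_by(records, key):
    """Index the records by one attribute, e.g. by interviewee or by stage."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec)
    return dict(groups)

by_interviewee = group_by(interviews, "interviewee")  # read one person's whole cycle
by_stage = group_by(interviews, "stage")              # read all second interviews together

print([r["transcript"] for r in by_stage[2]])  # ['p01_s2.txt', 'p02_s2.txt']
```

A spreadsheet or a table in qualitative analysis software serves the same purpose; the coding source (friend, volunteer, referral, paid) rides along as a column so differences by recruitment can be checked later.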

Questions: When Do You Ask What Kinds of Questions?

The content of your questions, at least your initial questions, tends to be determined fairly early in the design stages. In this section we focus more on the format and the number of questions. A successful approach to deciding what and how many questions to ask and in what format is to begin with your research questions and your goals (informational, exploratory, theory-testing, etc.), as discussed previously. Ask yourself what you need to learn from the study’s interviewees to help you answer your research questions. Usually a research question includes several concepts; interview questions should include queries to address each of these concepts.

15 Formal versions of this approach to finding participants include respondent-driven sampling and snowball sampling. See Vogt et al. (2012, Ch. 8) for overviews. See also Wejnert and Heckathorn (2008).

When Do You Use an Interview Schedule/Protocol?16

Do you arrive at the interview with a detailed series of questions, including follow-up questions scripted to match likely or expected replies? How strictly should you plan to stick to the script? Do you use the schedule mostly before the interview, reviewing it to remind yourself of the topics on which you want to touch, or do you use your list of questions more formally and visibly? No matter how detailed an interview schedule is, some room to improvise is always needed. That is one reason it is useful to pilot-test your questions with at least a few interviewees. Sometimes you’ll discover that your questions are unclear, that they elicit answers you don’t expect, or that they hardly elicit any answers at all. Pilot testing of questions can reduce but never eliminate surprises in interviewing. We have conducted many interviews, some with quite strict schedules, but every one of those interviews has required some improvisation because of unexpected replies to questions on the schedule. Occasionally, we have found that answers to the follow-up questions, made up on the spot in response to unexpected replies, proved to be most interesting and informative in the final analysis. How detailed your list of questions is and how closely you adhere to it is in part a matter of personal style; it also depends on your memory and ability to stick to the point without written guidelines.

We certainly differ among ourselves in this regard. One of us has trouble simultaneously focusing on what the interviewee is saying, remembering the guiding concepts in the schedule, thinking about follow-up questions, and refraining from participating in the discussion a bit too much. For him, the schedule is a necessary crutch.

In terms of coding, the most important thing is to be explicit about what you have done. This is important both for your own analytical purposes and for your readers’ understanding and evaluation of your results. One typical description of lists of questions for interviewees is to say that they are unstructured, semistructured, or structured. This is far too uninformative either for your own purposes or for those of your readers. Indeed, it is hard to imagine using anything but semistructured questions in a qualitative interview: Fully structured questions would constitute more of a survey than an interview, and unstructured questions would be more of a chat than a research dialogue. We discussed the degree of structure of the questions at the beginning of this chapter as one example of the need for explicitness and consistency when coding types of questions.

Of course, the degree of structure is but one of several ways to categorize questions. Previous scholars have described types of questions and have given them labels. It can be very useful to review one or more of their lists to help you devise your own typology.17

16 Strictly speaking, an interview schedule is the list of questions with places for the researcher to write down the answers. An interview protocol is a list of questions and guidelines for how to ask them. We use schedule to cover both concepts. Another term used by many researchers is interview guide.

17 Spradley’s (1979) text contains one of the better known and widely used categorizations of questions.

18 This example is based on the work of one of P. Vogt’s doctoral dissertation advisees when she was interviewing graduate students in different departments.

Even when the questions are not highly structured, we often engage in preliminary coding and indexing by structuring the responses in a matrix. For example, in one study the researcher interviewed a total of 22 participants and asked a total of 18 questions.18 In that study, it was very helpful, even before making a transcript, to create a grid as part of the fieldnotes. In such a grid, the rows can be the interviewees (1–22 in this case) and the columns the questions (1–18 in this example). The cells in the grid can include checkmarks to indicate who answered which question, and/or they can include brief characterizations of their replies.19 In this example, although 11 of the questions were asked of and answered by every participant, others were less frequently asked and/or answered. (A question can be asked but not answered; be sure to code for that.) One of the early questions was dropped because, upon reflection, the researcher judged that it was confusing and misleading and was not yielding useful answers. One of the later questions on the list was answered by only a few interviewees because it emerged out of the interview process and was added fairly late in the research; it was raised by perceptive interviewees toward the end of the first round of interviews.20 The valuable replies to this question subsequently led the researcher to ask it in some reinterviews with the original group of participants and in some new interviews with additional interviewees in a follow-up project.
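A minimal sketch of an interviewee-by-question grid of this kind (the cell codes, sample entries, and helper function are hypothetical, not from the study) keeps "asked but not answered" distinguishable from "not asked":

```python
# Rows are the 22 interviewees, columns the 18 questions. Each cell records
# one of three states; the specific codes are an invented convention.
NOT_ASKED, ASKED_NOT_ANSWERED, ANSWERED = ".", "x", "Y"

n_interviewees, n_questions = 22, 18
grid = [[NOT_ASKED] * n_questions for _ in range(n_interviewees)]

# A few illustrative cells (interviewee 1, question 1 is grid[0][0]):
grid[0][0] = ANSWERED
grid[0][4] = ASKED_NOT_ANSWERED   # asked, but not answered: code for that
grid[21][17] = ANSWERED           # a late-added question, answered by a few

def answer_counts(grid, q):
    """For question q (1-based): how many answered, and how many were asked?"""
    col = [row[q - 1] for row in grid]
    answered = col.count(ANSWERED)
    asked = answered + col.count(ASKED_NOT_ANSWERED)
    return answered, asked

print(answer_counts(grid, 5))  # (0, 1): one person asked, none answered
```

In practice the cells would also hold brief characterizations of replies; the three-state convention above is just the skeleton that makes asked/answered tallies trivial to compute.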

If you do any even moderately elaborate and systematic coding, you will surely generate several transcripts: versions containing the same data from one master set, organized in different ways. The two most common organizational principles are: (1) all the answers from each interviewee and (2) all the answers to each question. In addition to organizing by interviewee and by question, types of answers to each question, such as informational, attitudinal, emotional, contextual, and narrative, are also very important to code. And you would probably want to code by background differences among the interviewees, such as gender and age. This way you could read, for example, all the answers by female interviewees to Question 14 and compare them with the male interviewees’ replies to the same question. Cross-classifying all of these data categories all but requires computer help, and your work will almost never fit onto 8½″ × 11″ pages. It is much easier to scroll through them on a computer screen. (See the later section on computer assistance in coding.)
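One way to sketch this kind of cross-classification (the records, attribute names, and helper are invented for illustration) is to keep a single master list of answers and filter it by any combination of attributes:

```python
# One master set of answers; every reorganization is just a different filter.
# The data and field names are hypothetical.
answers = [
    # (interviewee, gender, question_no, reply)
    ("P01", "F", 14, "reply A"),
    ("P02", "M", 14, "reply B"),
    ("P03", "F", 14, "reply C"),
    ("P01", "F", 15, "reply D"),
]

def select(records, **criteria):
    """All answers whose named attributes match the given values."""
    fields = ("interviewee", "gender", "question", "reply")
    out = []
    for rec in records:
        named = dict(zip(fields, rec))
        if all(named[k] == v for k, v in criteria.items()):
            out.append(named)
    return out

# Organize by interviewee, or by question:
by_p01 = select(answers, interviewee="P01")
q14 = select(answers, question=14)

# Cross-classify: female vs. male replies to Question 14.
q14_female = [r["reply"] for r in select(answers, question=14, gender="F")]
q14_male = [r["reply"] for r in select(answers, question=14, gender="M")]
print(q14_female, q14_male)  # ['reply A', 'reply C'] ['reply B']
```

The same filtering is what a spreadsheet or qualitative analysis package does behind the scenes; the point is that one master set, plus attributes, yields every derived "transcript" on demand.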

Modes: How Do You Communicate with Interviewees?

The ideal model of an interview is a face-to-face conversation in a quiet and private place. This was also long considered the best setting for a survey. Survey researchers have probably been less reluctant than interview researchers to move away from the face-to-face model, which is still considered by many to be the best way to communicate. It is undoubtedly easier to adapt forced-choice survey questions to other modes of communication, such as individual telephone interviewing, teleconferencing, and e-mail interviewing. Even those who see the face-to-face interview as the principal method of data collection, and they are probably in the great majority among interviewers, are likely to supplement this approach with others. For instance, it is common to communicate by e-mail or telephone to set up an initial appointment or a follow-up interview.21

19 See Chapter 4 for an example of such a data matrix.

20 This is the reason it is important to do preliminary coding before having finished the interviews. Schwalbe and Wolkomir (2001) propose a 20% rule: Do preliminary coding after the first 20% and after each subsequent 20%. Iterative coding and recoding are also very important in grounded theory. See Chapter 11.

21 For example, Vogt and Nur-Awaleh (1999), in a postal mail survey, asked respondents to respond by e-mail if they were willing to be interviewed by telephone.

Such communications often turn out to be more than a way to make appointments; conversations relevant to the research frequently occur as part of these exchanges. If so, there is no point in discarding the information obtained in this way, but you do need to label the e-mail responses and the telephone conversations using different codes in order to distinguish them from responses to interview questions.22

Other forms of text-to-text communication and interviewing can also be important. Still, some researchers believe that the disadvantages of not having face-to-face interactions are too great to surmount. But others find the advantages of electronic, text-to-text communication compelling, and they are many: being unconstrained by geography; eliminating the need to schedule meetings and to find a place to hold them; having the opportunity to think before you ask the next question; and giving interviewees a chance to think before they answer. Finally, and perhaps most important, the interviewees’ replies arrive on your desktop already transcribed, thus saving hundreds of hours of work.23

If your interviewees are busy, they may not be willing to be interviewed in any way other than by telephone or via the Internet, and some prefer to avoid face-to-face meetings for other reasons, such as privacy. You often have to accommodate your interviewees’ wishes.24 Here the point is that it is crucial for you to code for those accommodations, even fairly minor ones. For example, one of us found codable differences in length and detail between the transcripts of interviews conducted in a quiet office versus those that occurred in a busy coffee shop. The interviewer found the interviews in the office more tiring and even a little stressful, but the transcripts were more complete and richer in detail than the transcripts of the interviews conducted in the coffee shop.

There may be no substantive differences in what you learn in different types and settings of interviews. If there are not, then you can pool and analyze responses together, but it is unwise to simply assume there will be no differences. You can learn whether there are differences by using codes to specify the variety in types and settings and investigating whether they coincide with substantive differences in interviewees’ replies.

As with other such coding problems, we like to arrange the coded categories in a grid or matrix. For example, each line on the grid would be an interviewee, and each column would be one of the variables we have been talking about: face-to-face, telephone, in the office, in the coffee shop, and so on. The grids can easily become too large to handle on paper. The easiest way we know of to manipulate such data occurring in many categories is to use a spreadsheet program (see the later section on tools).
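Under the same kind of hypothetical grid (invented interviewees, settings, and word counts), one quick check for setting-related differences of the sort just described is to compare average transcript length per coded setting:

```python
# One line per interviewee, one column per coded mode/setting variable.
# All values are invented for illustration.
rows = [
    {"interviewee": "P01", "mode": "face-to-face", "setting": "office",      "words": 5200},
    {"interviewee": "P02", "mode": "face-to-face", "setting": "coffee shop", "words": 3100},
    {"interviewee": "P03", "mode": "telephone",    "setting": "n/a",         "words": 2800},
    {"interviewee": "P04", "mode": "face-to-face", "setting": "office",      "words": 4800},
]

def mean_words_by(rows, attribute):
    """Average transcript length for each value of one coded attribute."""
    totals, counts = {}, {}
    for r in rows:
        key = r[attribute]
        totals[key] = totals.get(key, 0) + r["words"]
        counts[key] = counts.get(key, 0) + 1
    return {k: totals[k] / counts[k] for k in totals}

print(mean_words_by(rows, "setting"))
```

This is exactly the kind of comparison a spreadsheet pivot table gives you for free; the code only shows that the check requires nothing beyond the coded grid itself.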

22 This may raise ethical issues as well, because most consent forms we have seen do not cover such incidental sources of information. They should. See Warren et al. (2003).

23 Markham (2007).

24 On e-mail interviewing, see Hessler et al. (2003) and Menchik and Tian (2008); for telephone interviewing, see Stephens (2007).

One mode of communication with interviewees is a special case and raises specific issues of coding: the focused group interview, or focus group, for short. The basic idea of a focus group interview is for the researcher to moderate a discussion focused on a particular topic. The participants usually share some characteristic that makes them suitable for answering the research question. They may be potential customers for a new product, graduate students working on dissertations, undecided voters in the next election, and so on. Most of the issues in coding and analyzing focus group data closely parallel those for analyzing individual interview data.25 But there are also important differences. The most salient of these is your unit of analysis. Is your focus on the groups or on the individuals in the groups? This is determined by your research question. Does your question involve, for example, how groups deal with problems, or does it address how individual beliefs are influenced by the group discussions? Or are you using focus group data as a kind of survey data, to generalize to a broader population?26

Some researchers think of focus groups as a cost-effective way to interview many individuals. Indeed, 10 focus groups could easily yield data from 100 individuals. But if you are really interested in individuals as individuals, you would almost certainly be better off conducting 20 full interviews with individuals. What people will say in a dyad composed of interviewer and interviewee is likely to differ importantly from what people will say in a discussion with several other people. On the other hand, focus groups are particularly appropriate for studying group phenomena. You could be interested in groups, or in interindividual dynamics in groups, or in group learning,27 or in the formation of collective identities.28 Finally, using evidence from focus groups to improve survey questions or to help interpret them is another frequent practice, and one that has yielded many fruitful insights.29

The practical issues with coding focus group data mostly revolve around the fact that it can be hard to keep straight who said what. As in any discussion, people in focus groups talk at the same time, talk to their neighbors but not the group as a whole, lose interest and check their cell phones, and so on. A good moderator can keep these problems to a minimum, but they can probably never be eliminated without exerting so much control over the group discussion that it loses most of its spontaneity value.

Most focus group researchers recommend having an assistant moderator so that the two of you can divide up the tasks of guiding the discussion and making a record of it.

Teleconferenced focus groups are possible, and the technology may eventually improve to the point that these become more common. In the current state of technology, we have found that textual focus groups— for example, using online bulletin boards— are more effective than teleconferences. But this is a personal observation; we know of no research on the topic.

Finally, it can be quite difficult, more than with individual interviews, to keep the group sessions distinct in one’s mind, especially if the group interviews are all conducted in the same room at the same time of day. That means that it is very helpful to immediately take fieldnotes that include memory aids, such as distinguishing characteristics (and perhaps a nickname) for each group. Video recordings are also often a useful supplement to an audiotape for helping you recall which group was which and for sorting out who said what.

25 A good source for coding focus group data is Krueger (1998), especially Chapter 8. See also Macnaghten and Myers (2007).

26 Eby, Kitchen, and Williams (2012) provide an example.

27 Wibeck, Dahlgren, and Oberg (2007).

28 Munday (2006).

29 For very good examples, see Goerres and Prinzen (2012) and Lindsay and Hubley (2006).

Observations: What Is Important That Isn’t Said?

Your interview transcripts should always be combined with your observations in formal fieldnotes (see Chapter 4). One of the most important kinds of observations is of interviewees during the interviews. In the example discussed earlier, in which interviews were conducted with 22 doctoral students, the following summary of observations was made. Of the 22 interviewees, 13 were eager to communicate and were the kinds of participants the researcher had expected. Three were perhaps too eager to communicate: They were delighted to have someone listen to them; they often free-associated about all manner of topics, many of which were only tangentially related to the specific questions they were asked. Another three were nervous or shy or embarrassed or otherwise reluctant; it was hard to specify why exactly, but they were somewhat uncommunicative. Two others were “super-participants,” who were as interested in the research as the researcher; they volunteered to fact-check notes and recruit other participants and said they were looking forward to reading the study; one even said that maybe she’d do something similar for her dissertation someday. These two super-participants were the kinds of people anthropologists refer to as “key informants.”30

Finally, and at the other end of the spectrum, two interviewees seemed hostile

