

Hale, Sandra and Napier, Jemina (2013): Research Methods in Interpreting. A Practical Resource. London/New Delhi/New York/Sydney: Bloomsbury, 456 p.


  • Daniel Gile
    Université Paris 3 Sorbonne Nouvelle, Paris, France

Cover of Volume 61, Issue 2, August 2016, Meta


For a number of years, the need has been felt and often voiced to provide beginning researchers in Translation Studies in general and in Interpretation Studies in particular with a research textbook. Many methodological papers and at least one collective volume have addressed research issues (for interpreting, see in particular Gile, Dam et al. 2001), but this is the first textbook which seeks to systematically cover the needs of beginners for the whole interpreting studies field.

Intriguingly, the book has no introduction or foreword explaining its purpose and use. It jumps straight into the subject matter and only mentions, on page 210, that it targets first and foremost research students undertaking Master’s or PhD projects.

Chapter 1 explains what research is all about. This is a good starting point, because so many misconceptions are widespread among students. Equally laudable is the simple, clear, didactic language – found throughout the book – which explains why interpreters and educators seeking answers to questions might gain reliability by adopting research in their quest, other types of exploration suffering from a number of potential traps – listed on page 3. What is perhaps more debatable is the definition of research as “finding answers to questions by collecting evidence from different sources that will support a logical conclusion” (p. 2), which suggests that research is necessarily empirical. It can also be theoretical, as is the case in mathematics, in theoretical physics and in numerous theoretical contributions to interpreting studies. Why exclude this aspect?

On page 12, the authors offer an interesting table showing exploratory, descriptive and explanatory goals in research. Another good idea. But on the same page, they slip when they say that “in the main, quantitative methods are high in reliability and low in validity and qualitative methods are high in validity but low in reliability.” Why should quantitative methods be low in validity? And why should qualitative methods be low in reliability? Validity and reliability are of critical importance in any study, and measuring (quantitatively) a phenomenon with a poor sampling method results in low reliability. Also, there is no reason why qualitative research, for instance when checking what interpreters consider the most important quality components, should not be designed in such a way as to be reliable.

In the same chapter, Hale and Napier make very good points that are seldom found in research guidance texts, including warnings against idealized representations of reality: they explain (p. 4) that research is never static and never infallible. They quote Rudestam and Newton, who say that the only universal in scientific knowledge is a general commitment to using logical argument and evidence to arrive at conclusions, and that good scientists often deviate from an “official” philosophy of science and a prescribed methodology. They also note that science cannot answer all questions. All these are important points to keep in the minds of young researchers, who risk being blinded by their enthusiasm and perhaps by an excessively naïve view of science as it is presented in ‘official’ discourse and in some textbooks.

Another good idea was to devote a full chapter, chapter 2, to critical reading and writing. This is a major component of research, a source of knowledge and inspiration, and an exercise which helps sharpen one’s logical and critical thinking skills. The chapter is systematic, with explanations about the importance and uses of the literature review and excellent practical recommendations: keeping track of one’s searches, making sure that quotes are clearly documented, thinking of the literature review as leading readers on the start of a journey towards the author’s research study. This reviewer can only approve of advice so similar to his own – see inter alia Gile, Dam et al. 2001.

The discussion of research methods proper starts with chapter 3, on questionnaires in interpreting research. Common-sense and presumably experience-based comments of great value to beginners are offered: on the necessary skepticism towards respondents’ answers, which are often subjective (p. 52); on the advantages of triangulation through interviews and focus groups (p. 53); on the need to make questionnaires easy to answer and not too time-consuming (p. 55); on the desirability of including, in multiple-choice questionnaires, questions asking for open comments; on the need to pay attention to wording, which is far trickier than people assume (p. 62-63); on piloting (p. 63, 67); and on traps such as double-barrelled questions and questions with embedded clauses. Sampling is also addressed (p. 67-73), as well as a few fundamental statistical concepts such as correlation and significance – with inaccuracies and errors, as explained later in this review.

Chapter 4 is devoted to ethnographic research on interpreting, a safe distance from statistics. Again, very good points are made, including a citation from Nunan stressing that “true ethnography demands as much training, skill, and dedication as psychometric research” (p. 85). Such points are important to make, to counterbalance the tendency, seen in many, to mistake mastery of techniques for scientific competence. A somewhat cryptic statement is that “the core component of many ethnographic studies is the use of interviews or focus groups” (p. 88): the authors do not say how often this is the case, as opposed to studies whose core component is field observation or participant observation. The chapter offers valuable tips on planning, conducting and analyzing interviews and focus groups. Note also a section on case studies, the value of which is often ignored in the literature.

Chapter 5 is an introduction to discourse analysis. It explains the very concepts of discourse and discourse analysis, notes that there are basically two approaches – top-down and bottom-up – and offers an exercise, a sample analysis of a text, a step-by-step guide to discourse analysis, and advice on transcription and corpora.

In chapter 6, Hale and Napier are back in tricky waters, taking up the topic of experimental research, “a way of determining the effect of something on something else” (p. 150). Yes, this is often the case, but not always: experiments can also be exploratory. Similar clumsily worded sentences follow: “The quantitative notion of experimental research is the fact that variables can be measured or counted” (p. 150); “By identifying, isolating and eliminating or introducing a range of variables, it is possible to ascertain the extent of impact on the ‘thing’ being investigated” (p. 151). On page 155, to illustrate experimental research, a study is cited which does not meet the conditions for “true experimental research” set out on the previous page – and which suffers from other flaws. On pages 167-168, sampling is taken up again, and the authors reiterate the mistaken idea, already formulated in chapter 3, that sample size (as opposed to the representativeness of the sample) determines how “questionable” the results are. Not only does this chapter restrict its discussion to a single type of experimental research – counter-productive in a field where so much remains to be uncovered through exploratory methods, including exploratory experiments – but even this discussion leaves much to be desired, despite a systematic approach, especially in section 6.3, on “basic principles of sound experimental design.”
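The notion the authors describe – manipulating one variable across two groups and asking whether the difference in outcomes could plausibly be due to chance – can be made concrete with a minimal sketch. All data and variable names below are invented for illustration (they come from neither the book nor any cited study); a permutation test stands in for whatever significance test a real design would call for.

```python
import random
import statistics

random.seed(0)

# Hypothetical scores (invented for this example): interpreting accuracy
# for a control group and a treatment group, e.g. without vs. with
# preparation time.
control = [62, 70, 65, 68, 64, 66, 63, 69]
treatment = [71, 75, 69, 78, 73, 74, 70, 76]

observed = statistics.mean(treatment) - statistics.mean(control)

# Permutation test: how often does randomly relabelling the 16 participants
# produce a between-group difference at least as large as the observed one?
pooled = control + treatment
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:8]) - statistics.mean(pooled[8:])
    if abs(diff) >= abs(observed):
        extreme += 1
p_value = extreme / trials
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
```

A small p-value here only says the group difference is unlikely under random relabelling; as the review stresses, it says nothing about the representativeness of the sample or the soundness of the design.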

Chapter 7 could be considered an ‘application chapter.’ It is devoted to a research environment, namely research on interpreting education and assessment. Various approaches – surveys, naturalistic/qualitative methods, experimental methods, historical/documentary methods, role-plays and action research – make it the most inclusive chapter in the book. Many examples are presented.

In chapter 8, the authors seek to “tie together the content of the previous seven chapters” and highlight a few important points. They start by discussing the choice between traditional and innovative designs, mentioning in particular mixed-methods research; move on to the question of where researchers position themselves, in particular on the translation studies map proposed by James Holmes in the 1970s; then take up various issues that arise when reporting one’s results, including the various formats of theses and dissertations, the use of the first person singular, and the existence of ‘theses by publication.’ They also discuss the dissemination of results and even research grant funding applications.

A long, 18-page list of references follows, as well as a short, four-and-a-half-page concept index, which could have been made more comprehensive in view of the wide range of topics addressed and points made in the book.

In view of its general approach, the simple, generally clear language adopted, the large number of concepts discussed and recommendations offered, and the numerous common-sense practical reminders and awareness-raising comments, this book has the potential to be an excellent tool for the training of beginning researchers. Unfortunately, a number of serious weaknesses, mostly around statistics and experimental research, prevent this reviewer from recommending it to trainers or research students without explicit warnings about inaccuracies and misrepresentations of fundamentals.

These weaknesses start, perhaps, with the idea, on page 4, that if the findings of a second study corroborate those of a first study, the hypothesis is “proved”! The final verb may reflect the authors’ wish to use simple language, but the resulting formulation, which implies that one replication is enough to “prove” a hypothesis, is damaging to their credibility. The very concept of “proof” in research is problematic, except in mathematics and other fields where logic reigns supreme, unmarred by the complexities of reality.

In chapter 3, Hale and Napier write that in quantitative studies, the larger the sample, the less “the likelihood of errors.” When samples are representative (and only then), the magnitude of the sampling error (the difference between values measured on the sample and the population values they approximate) decreases as sample size increases; but such sampling error is not an ‘error’ in the ordinary sense of the word, and some ‘error’ of this kind is virtually certain to occur whatever the size of the sample. On page 73, they write: “there is nothing wrong with convenience samples.” Is there “nothing wrong” with samples in which the potential bias is not known or addressed? What they probably mean is that in many environments, including the interpreting environment, it is accepted practice to use convenience sampling because it is extremely difficult to do otherwise. On page 78, they briefly discuss the very important concept of statistical significance, without explaining what it means. Then they add: “if your results are statistically significant, they can be taken seriously and inferences can be made to other populations.” Not only does this make no sense in statistical terms (inferences are made from a sample to the one population it is assumed to represent); it also sends the wrong message about significance and about the value of the findings of empirical research in general: findings should be taken seriously if a study was well conducted, regardless of whether inferential statistics were used, and, if they were, regardless of whether significance was found. What the authors may have meant is that journals prefer to publish papers reporting clear-cut results. But that is not the same thing, is it?
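The point about sampling error can be illustrated with a small simulation (a sketch, not taken from the book; the population parameters, sample sizes and replication count are arbitrary choices): with representative random sampling, the typical gap between sample mean and population mean shrinks as the sample grows, yet never reaches zero.

```python
import random
import statistics

random.seed(42)

# A synthetic "population" of 50,000 values (arbitrary: mean ~100, sd ~15).
population = [random.gauss(100, 15) for _ in range(50_000)]
true_mean = statistics.mean(population)

mean_abs_error = {}
for n in (10, 100, 1000):
    # Draw 200 random (hence representative) samples of size n and record
    # how far each sample mean falls from the population mean.
    errors = [abs(statistics.mean(random.sample(population, n)) - true_mean)
              for _ in range(200)]
    mean_abs_error[n] = statistics.mean(errors)
    print(f"n = {n:>4}: mean |sampling error| = {mean_abs_error[n]:.3f}")
```

The error typically shrinks by about a factor of three for each tenfold increase in sample size, but remains strictly positive at every size – and note that none of this holds if the sampling is biased rather than random.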

On page 78, Hale and Napier explain correlations “as any links that there may be between questions,” and on page 79, they say that independent variables are called as they are “because they cannot be affected or changed by other variables.” Neither formulation is accurate: correlation is a measure of statistical association between two variables, not between questions, and independent variables are simply those the researcher manipulates or treats as presumed causes or predictors; nothing prevents them from being affected by other variables.

It is unfortunate that such infelicities and inaccuracies were not detected before the book was published. But they can easily be corrected in a revised version, and when this is done, the result could be an excellent reference tool for research training.
