Researchers are often tempted to use multiple methods. For instance, ethnographers often seek to combine observation with the interviewing of 'key informants.' Similarly, in the illustration discussed earlier on oncology clinics, simple tabulations were used to test field observations. An excellent illustration of a recent study using multiple methods is set out below. This section concludes with a note of caution on the subject.
Illustration: Software on the Ward
Ross Koppel (2005) used multi-method research in a study of computerized physician order entry (CPOE) in a U.S. hospital. The study arose by accident while Koppel was researching the stress experienced by junior house physicians, and it is significant for two reasons. First, it turned out that the CPOE system produced not only stress among these doctors but a noteworthy number of errors (although, as Koppel points out, some of these errors may not be experienced as stressful at the time). Second, although studies of how CPOE worked had already been completed, these were purely quantitative; none were based on interviews with, or observations of, physicians.
To establish the extent of the phenomenon, Koppel constructed a multi-method study combining several kinds of data.
The prescribing errors discovered included doctors failing to stop one drug when they prescribed its replacement, confusion over which patient was receiving the drugs, and mistaking an inventory list for clinical guidelines.
In the United States, it is estimated that medication errors within hospitals kill about 40,000 people a year and injure 770,000. Koppel's study showed that CPOE systems can actually facilitate such errors. Ironically, CPOE was most effective at stopping errors with few dangerous consequences.
In particular, the way in which CPOE had been programmed had two unfortunate consequences:
Fragmented data displays meant that physicians had difficulty in identifying the specific patient for whom they were prescribing; and
The system did not match the way doctors actually worked, creating confusion or extra work to resolve the ambiguities.
Given the amount of government and industry support for CPOE, it is not surprising that Koppel's findings were treated as highly newsworthy by the national media and came under immediate attack. Many medical researchers suggested that such qualitative research could not produce "real data." The manufacturers of CPOE systems launched a campaign claiming that Koppel had "just talked to people" and reported "anecdotes." In particular, the public were told, Koppel's study was faulty because it offered no measure of adverse drug events and had identified no 'real' errors, only "perceptions of errors."
Koppel's study is a fascinating example of what can happen when qualitative researchers stumble into what turns out to be a controversial topic. It reveals how vested interests can work to denigrate qualitative research in pursuit of their own agenda. In this way, the key strength of such an ethnographic study (its ability to depict what happens in situ) is presented as a weakness.
Now a note of caution. The desire to use multiple methods sometimes arises because novice researchers want to get at many different aspects of a phenomenon. However, this may mean that the topic has not yet been sufficiently narrowed down. Sometimes a better approach is to treat the analysis of different kinds of data as a 'dry run' for the main study. As such, it is a useful test of which kinds of data can most easily be gathered and analyzed.
Moreover, mapping one set of data onto another is a more or less complicated task depending on one's analytic framework (see triangulation in Glossary). In particular, if the researcher treats social reality as constructed in different ways in different contexts, then one cannot appeal to a single 'phenomenon' which all the data apparently represent.