Response to: Watson's Guest Editorial, ‘Scientific methods are the only credible way forward for nursing research’, Journal of Advanced Nursing (2003) 43, pp. 219–220, and subsequent JAN Forum pieces, 44, pp. 546–548.
The responses to Roger Watson's editorial (Watson 2003) are deeply predictable, replete with familiar myths and non-sequiturs. Admittedly, Watson tends to state things rather imprecisely (not surprising in the circumstances, as the editorial was originally a contribution to a live debate at the RCN). However, that should not divert attention from the fact that he is pointing towards a question which many of those writing about qualitative methods appear not even to have understood, let alone answered. At its most succinct, the question is simply this: how can qualitative researchers identify and eliminate error?
It is not measurement, per se, that constitutes science. The Greeks could measure things perfectly well, but never developed what we think of as scientific enquiry. What distinguishes scientific method from other ‘ways of knowing’ is (i) the role of measurement in explanatory models (Giere 1999), and (ii) the crucial part played by mathematics, through the statistical canon, in weeding out error (Mayo 1996).
In that context, both responses to Watson's editorial (Draper & Draper 2003, Payne & Seymour 2003) are convoluted exercises in missing the point. Quantification, it is argued, is just one way of knowing among others, ‘but there is no reason in principle why it should always be valued above others’ (Draper & Draper 2003, p. 546). Yes, there is. Quantification is what permits us to distinguish between results that tell us something about the world, and results that do not. That's what makes science special. What does qualitative research (or any other ‘way of knowing’) have to offer in its place?
The history of science is the history of the derivation of canonical methods for inquiring into error. As science grows, so does the range of recognizable sources of error, along with a repertoire of procedures for identifying and eliminating them. When a new source of error is identified, a standard procedure for recognizing it (and controlling for it) is devised. The range of errors into which canonical inquiries are routinely carried out is familiar to any researcher. It includes errors made when an accidental effect is mistaken for a real one; errors which arise when an association is mistaken for a cause; and errors which occur when various design assumptions fail to hold. The repertoire of procedures for checking whether such errors have been made, in a range of research situations, is equally familiar. The point is that when the evidence generated by a protocol accords with what the hypothesis under examination predicts, and a series of canonical inquiries (tests) has failed to identify error, the evidence can be taken as good grounds for (provisionally) accepting the hypothesis.
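To make the first of these canonical inquiries concrete: a permutation test asks how often a difference at least as large as the one observed would arise by chance alone. The sketch below is illustrative only; the pain-score data for two hypothetical interventions are invented.

```python
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate how often a difference in means at least as large as the
    observed one would arise by chance alone -- a canonical check against
    mistaking an accidental effect for a real one."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # reassign scores to groups at random
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical pain scores under two nursing interventions
a = [3, 4, 2, 5, 4, 3, 4]
b = [6, 5, 7, 6, 5, 6, 7]
p = permutation_test(a, b)
```

A small value of `p` indicates that chance alone is a poor explanation of the observed difference; the error of taking an accidental effect for a real one has, provisionally, been ruled out.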
Since qualitative methods are not generally amenable to the application of error-statistical techniques, the hard question is this: what, if anything, performs the corresponding function in qualitative research?
This is not a question that the responses to Watson's editorial get anywhere near answering. Instead, they wheel out the standard-issue philosophical arguments, almost all of which depend on criticizing views that no-one ever held. ‘Positivism’, say Draper and Draper (2003, p. 546), ‘assumes that there is a “pure vision”’. Says who, exactly? This kind of claim is usually supported – as here – by references to other ‘anti-positivist’ authors. Not surprising, really, as no positivist ever made such a goofy assertion. Check out the positivist writers (for example, Hempel 1965, Carnap 1995) and the secondary literature (Giere & Richardson 1997, Friedman 1999, Parrini et al. 2003), if you need persuading. Similarly, no positivist has ever subscribed to the idea that knowledge can be ‘certain’, or to the belief that there is a ‘transcendent and naturally given reality’ (Draper & Draper 2003, p. 546). Quite the reverse. Myths of this kind are kept in circulation simply by constant repetition, and by ‘anti-positivist’ writers who only ever reference other ‘anti-positivist’ writers.
The structure of these straw-man arguments is not hard to see. First, it is implied that there are only two sides to the debate. One of these will be something called ‘positivism’, ‘Cartesianism’, or the ‘quantitative paradigm’. The other will be the preferred alternative: qualitative research, phenomenology, whatever. Next, a selection of silly beliefs is attributed to the ‘positivists’. Then it is pointed out that these beliefs are silly. Finally, it is suggested that since one of the two polarized positions is obviously incorrect, the other one (i.e. the preferred alternative) must be right. The question of how to identify and eliminate error simply gets ignored.
So it is that we find Draper and Draper (2003) ascribing daft views to ‘positivists’ in order to prop up the idea that subjectivity is not a problem. ‘We argue…that qualitative research, in openly acknowledging the place of subjectivity, is a rather more authentic position than the absolute denial of any element of subjectivity in quantitative research methods’ (p. 546). But the positivists do not ‘absolutely deny any element of subjectivity’ (see, for example, Reichenbach 1938, Carnap 1950). What they do, in fact, is try to find a proper place for it and, when it comes to determining the truth of things, place quantifiable restrictions on it. The subtlety of this approach is completely missed by Draper and Draper (2003) and Payne and Seymour (2003), who, like many authors, finesse the ‘error’ question simply by ignoring it.
But the ‘error’ question is arguably the most important question facing qualitative research. A serious attempt to answer it was made nearly 40 years ago (Glaser & Strauss 1967). Unfortunately, however, the most significant feature of grounded theory – which involves a type of Popperian falsifiability – has been largely ignored (exceptions include Seale 1999). Instead, there has been a marked swing towards ‘subjective truth’, ‘multiple realities’, ‘postmodernism’, and other ideologies which banish error from the discussion. On these views, there can only be ‘perceptions’, ‘constructions’ and ‘knowledges’, and no-one can ever be wrong about anything. It is sometimes implied that this is a more ‘democratic’ way of thinking about research (Lincoln & Guba 1985). But the most significant point is that error has been rendered inconceivable. That's not democratic. It's psychotic.
I cannot, as yet, offer a definitive answer to the question I have posed (like Watson, however, I do at least recognize its importance). Still, I can suggest three avenues of thought. The first is to go back to Glaser and Strauss, and develop the idea of theoretical sampling along more explicitly Popperian lines. That is, I think, the line taken by writers like Seale. The second is to explore a ‘subjective Bayesian’ approach (Howson & Urbach 1993) to qualitative enquiry. This would imply rejecting falsifiability, but sticking with a mathematical form of confirmation theory. The third is to grasp the nettle, and explore ways in which nursing research can use quantification to develop its pet projects. For example, if ‘caring’ is so important, then develop a mathematics of that.
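As regards the second avenue: the core of any subjective Bayesian approach is the updating of a prior degree of belief by Bayes' theorem. A minimal numerical sketch, with invented figures (the prior and likelihoods below are hypothetical, chosen purely for illustration):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a hypothesis after observing
    evidence, via Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Hypothetical: a researcher holds a 0.5 prior degree of belief in an
# emerging analytic category; a new interview is judged three times as
# likely to occur if the category is real than if it is not.
posterior = bayes_update(prior=0.5, likelihood_if_true=0.6,
                         likelihood_if_false=0.2)
# posterior = 0.3 / (0.3 + 0.1) = 0.75
```

The point of such an exercise is not the particular numbers but the discipline: the researcher's degree of belief is revised by a public, checkable rule, rather than by unexamined conviction.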
The third line of thought is the most interesting possibility, perhaps, but also the one most likely to be greeted with scepticism. Anyone who says it cannot be done is simply making a statement of faith, and certainly has not kept up with mathematical modelling techniques, especially those involving nonlinear dynamic systems. If marriage can be quantified (Gottman et al. 2002), so can the caring relationship. And creating mathematical models of caring may actually serve some practical purpose, unlike all the phenomenological studies which offer, as ‘findings’, nothing but a handful of phrases beginning with a participle. I think there is a real opportunity for nursing research here. However, it is not one that will be grasped if the response to Watson's editorial is to say dismissively, with Payne and Seymour (2003, p. 548), that he is merely engaging in ‘outmoded polemic’.
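To indicate what a ‘mathematics of caring’ might look like, the sketch below adapts the general shape of Gottman et al.'s influenced difference equations – in which each party's next state depends on their own emotional inertia plus the other's influence – to a hypothetical nurse–patient dyad. Every parameter value here is invented for illustration; nothing is fitted to data.

```python
def simulate_dyad(n_steps=50, w0=0.0, p0=-1.0):
    """Iterate a pair of Gottman-style influenced difference equations
    for a hypothetical nurse (w) and patient (p) affect score.
    All coefficients are illustrative, not empirically estimated."""
    a_w, r_w = 0.2, 0.5   # nurse: baseline affect and emotional inertia
    a_p, r_p = -0.1, 0.6  # patient: baseline affect and emotional inertia

    def influence(x, gain=0.4):
        # Simple linear influence function: positive affect in one party
        # nudges the other upward, negative affect downward.
        return gain * x

    w, p = w0, p0
    trajectory = [(w, p)]
    for _ in range(n_steps):
        w_next = a_w + r_w * w + influence(p)
        p_next = a_p + r_p * p + influence(w)
        w, p = w_next, p_next
        trajectory.append((w, p))
    return trajectory

traj = simulate_dyad()
```

With these (invented) coefficients the system settles towards a stable joint state; with others it need not, and it is precisely questions of that kind – under what conditions does a caring relationship stabilize, drift, or collapse? – that a quantified model makes it possible to ask and to test.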