As Bob Dylan once sang, “Ah, but I was so much older then, I'm younger than that now”.
At the start of my training in public health I was presented with the classic evidence pyramid for scientific studies, portraying strength of research evidence. This construction has been an important part of my epidemiological epistemological framework, and without it I would have undoubtedly been the poorer.
Since then, I have practised and taught public health, including research methods, and developed and supervised various research projects. I also sit on a human research ethics committee (HREC). I am often reminded of the evidence pyramid, with blinded clinical trials at the pinnacle, moving downwards through case control and cohort studies, to case series and case studies, and on to ever-weaker designs. The pyramid enables us to assess the strength of clinical evidence and there is little debate about using this hierarchy in assessing the effectiveness of drugs, vaccines, and clinical interventions.
The Introduction to the 2007 NHMRC Australian Joint Statement on Ethical Conduct in Human Research states: “Unless proposed research has merit, and the researchers who are to carry out the research have integrity, the involvement of human participants in the research cannot be ethically justifiable.”1 Problems arise when members of ethics committees equate the most scientific design – meaning as close to a blinded trial as possible – with scientific merit, which somehow becomes synonymous with the most objective, and ethical, way to do research. This misappropriation of the evidence pyramid is a problem: even within the realm of quantitative studies, this assumption cannot possibly be correct.
Clinical trials are the best evidence sometimes – but at other times trying to answer a question with this study design is impossible, even unethical. We need different quantitative designs to provide an overview of the size and shape of public health problems, enabling us to pass a lens over quantitative data so that we can select groups for additional attention, intervention, or special assistance. Today my evidence framework for what qualitative researchers often call ‘positivist’ studies looks more like a continuum, with various designs of equal importance for answering the questions we ask.
Frameworks used in quantitative methods, however, answer only some public health research questions. For example, they cannot tell us why our population thinks the way it does about climate change, why women seek or avoid HRT or screening programs, what Australians think about mental health issues, or how new parents find the responsibilities – and demands – of parenthood. We need qualitative study designs to find out about all those ‘what’ and ‘why’ questions. Qualitative data provide us with a deeper understanding of an issue, usually from the perspective of one or more groups of people most affected by a phenomenon. Critical, ecological and constructivist non-mathematical designs seek answers to different kinds of problems: an explanation for an identified effect, or an implied consensus or shared understanding.
We can, in fact, learn a great deal about rigour from one another. Logically, context provides evidence, and the need for a particular kind of evidence drives the research context.2
A common point of argument, relevant to ethical review, concerns the number of study participants and the method of recruiting them. There seems to be an ipso facto belief that quantitative studies have large sample sizes and qualitative studies have small ones. Most quantitative public health studies require a sample size calculation based on statistical power as part of the design, so that research resources are not wasted and to avoid Type II error, and they need a clear recruitment strategy to ensure that the people in the study are representative of the various populations under study. But not all quantitative studies are large and complex – some work has involved very few people, and the December 2005 issue of this journal includes studies of important communicable diseases with sample sizes of one or very few. Conversely, not all qualitative studies are conducted using small numbers of participants. Excellent examples exist which, by comparison with the above studies, include large sample sizes.3 The number also depends on the context.
Which design questions should apply to non-mathematical study designs when researchers are seeking to investigate a question of population importance? HREC members reading the application need to be sure that the evidence resulting from the study will truly represent the population the researcher says it will. In qualitative designs, the rationale for recruitment, including the participant groups and the likely numbers of participants, should clearly be derived from the literature review and the theoretical framework for the study.4 The design of qualitative studies requires justification of the possible or likely sample size needed to reach ‘data saturation’ (if there can be said to be such a thing), together with clear strategies for recruiting these participants. Over the past few years, several useful papers on the rigorous conduct, analysis and interpretation of qualitative research studies have been published in this Journal (see in particular October and December 2007 and April 2009), tackling these issues in various ways not addressed in standard public health research methods textbooks.
HREC members need to be able to apply criteria for rigour in assessing study designs, regardless of whether they are quantitative or qualitative. As public health questions become more complex, study design becomes increasingly important. It is high time for public health researchers to abandon the ‘them and us’ of yesteryear and think about rigour in all research. When we have, and use, frameworks for assessing rigour in both qualitative and quantitative methods, the two achieve equal importance. At last qualitative research might stop being regarded as ‘soft’ research, just as quantitative studies might stop being regarded as ‘just numbers’. So here's to ‘youth-ing’.