In 2010, this Journal faces a new dilemma. Manuscript submission rates to the Journal increased markedly over the past year. The standard of submitted papers has also improved. The fortunate aspect of this is that we can continue to increase the academic quality of the Journal, but the flip-side is that our rejection rate must rise. In the past few years the rejection rate has increased from 30 per cent to 50 per cent and it will need to be higher still. To avoid overburdening the administrative staff, the Editors and our long-suffering panel of reviewers, we will apply more stringent criteria for acceptance. More papers will be rejected without review, after review, and, very importantly, following an unsatisfactory response to reviewers' comments in revised papers. With such changes, it is important to be clear about our decision making.
One obvious reason for rejecting a paper is that it reports findings that are not particularly novel, and do not advance our understanding of public health issues or how they might be better addressed. Apart from this, the most common reason for rejecting a paper is a problem with its research methods. Here we provide more detail about the methodological considerations for accepting or rejecting papers.
Public health is a multidisciplinary activity: we draw on the skills and insights of people working in a broad range of increasingly diverse sub-specialities. Our editorial process and decision making have to be cognisant of the highest current standards in each of the component disciplines. However, in addition, we serve a broad public health practice community, and many of our readers are interested in whether the results of a research study can be applied with confidence, for example, in the patients in a general practice, in communities in a suburb, with other vulnerable groups, to the whole Australian or New Zealand population, or internationally. Can health policy makers here or elsewhere be confident that they can base policy decisions on its conclusions? In order to contribute to the public health knowledge base and, at the same time, to satisfy the needs of practitioners, we need to know that the design and conduct of any study justifies the conclusions reached in the paper. We also need to know to what extent the findings apply to groups or settings other than those in the study. Authors need to make these arguments, and make them accessible both to readers who are familiar with an area and research style and (arguably more importantly) to readers for whom this is unfamiliar territory.
In the ideal case this is easy. In studies with a large population, careful sampling, well-developed research methods and a good response rate, we may be confident that the results are trustworthy and that they are likely to apply not only to the sample but to the whole population, and sometimes even to a national population. Moreover, if it is a well-conducted study and it tackles an important question, a negative finding is no barrier to publication in this Journal.
This ideal may be unattainable. In public health research there are often methodological, practical and ethical limitations to the way in which a study is conducted. If authors can demonstrate that the study addresses an important issue that is under-researched, or difficult to research, a paper with less than ideal methods and outcomes is still eminently publishable. However, we do need authors to acknowledge in the methods section of the paper that they faced methodological problems and to show how they addressed them. In the discussion section, authors should acknowledge limitations, and conclusions should be explicit about the extent to which the results are generalisable to population groups other than those in the study.1
Given the range of skills of the team of Editors, we are well able to judge articles that depart from the methodological norm but that still make an important contribution to public health knowledge. We have published single case studies, for example, including one that alerted us to the risk that a strain of Legionella bacteria usually associated with cooling towers may also be found in the soil in a plant nursery. The point here is that the single case was an exception to a well-established knowledge base and the paper argued that we need to be alert to this exception.2
Methods for systematically reviewing the literature have improved greatly and systematic reviews are, of course, recognised as research projects in their own right. The results of individual studies can now be assessed in the light of the systematic review of similar studies. It is even possible to combine quantitative studies, whether all at once or as they accumulate over time, so that studies too small to yield statistically significant effects on their own can contribute to more robust pooled results. Systematic reviews of the literature are also useful for demonstrating that there is a great deal, a little, or nothing in the way of good evidence in a particular field, thus justifying a new study, ideally designed to overcome the limitations of previous research.
Central to the methods of systematic review is the rule that researchers should state the methods used in their search of the literature and specify the criteria for assessing the quality of studies included in the review. Defining criteria for judging the quality of a study is not contentious in most of the health sciences but, in the social sciences, there is a vigorous debate about whether this restricts our knowledge base, and many of our authors share these views. Norman Denzin, an influential US-based qualitative researcher, has argued persuasively that specific criteria for judging the quality of qualitative research are to be resisted as part of a restrictive ‘global audit culture’. He emphasises the need for ‘a narrative of passion, and commitment, a narrative which teaches others that ways of knowing are always already partial, moral and political.’3 We are sympathetic to Denzin's views. But where does that leave editors, reviewers and readers who are perhaps less versed in the sometimes esoteric narratives that underpin Denzin's approach to research? Our view is that the practical intent of public health research means that we need to know to what extent the findings from qualitative research are valid and apply to similar groups elsewhere or to other social groups. We are aware of the desire of researchers to be true to the voices of their participants, but, at the least, we need an account of the experiences of the full set of participants and not just those who express their views in a moving narrative.4
And so to return to the future of the Journal. We need to see plainly the methods our authors used in their research: regardless of the kind of paper they are writing, conclusions need to be underpinned by valid, solid, ethical research. In addition, we are pleased to consider provocative but well-substantiated commentaries that challenge us to think about the contemporary practice of public health in Australia, New Zealand, and internationally. Given the pressure on journal space, shorter papers are preferred unless a larger word count is justified by the topic or the method. Tables with superfluous data will come under particular scrutiny. Our necessarily tightened criteria will lead to some unhappy authors, but will also bring you a richer, tighter, more scientific journal.