Back to the future: little-used tools and principles of scientific inference can help disentangle effects of multiple stressors on freshwater ecosystems
Article first published online: 15 JAN 2010
© 2010 Blackwell Publishing Ltd
Special Issue: Multiple Stressors in Freshwater Ecosystems
Volume 55, Issue Supplement s1, pages 60–79, January 2010
How to Cite
DOWNES, B. J. (2010), Back to the future: little-used tools and principles of scientific inference can help disentangle effects of multiple stressors on freshwater ecosystems. Freshwater Biology, 55: 60–79. doi: 10.1111/j.1365-2427.2009.02377.x
- Issue published online: 15 JAN 2010
- (Manuscript accepted 16 November 2009)
Keywords: human impacts; sampling theory; survey design
1. There are multiple tools for scientific inference that seem rarely used in research examining the effects of human-generated stressors on rivers. Few of these tools are ‘new’: although foundational to scientific method, they seem to have been overlooked or forgotten. The thesis of this paper is that, by looking back to what used to be considered basic knowledge about scientific methods and the discipline of ecology, we may re-learn some useful ways of improving survey designs and re-framing scientific questions.
2. Two common barriers to strong inference are examined in detail in this paper: disentangling the effects of different stressors, so that we can confidently infer which ones cause unacceptable environmental change; and dealing with high variability among replicate observations. Poor information about causality means managers cannot know what rehabilitation or amelioration should be attempted, and poor fits of models to data lower confidence in inference. Commonly proffered solutions, such as large sample sizes, choosing ‘representative reaches’ or using complex multivariate statistics, do not solve these problems.
3. The solutions lie within the basic components of good experimental design, which apply as much to surveys as to experiments. Several pieces of practical advice are offered and explained: (i) the necessity of specifying a precise mechanism of cause and effect in hypotheses, and the changes to common approaches this entails; (ii) the difficulties caused by scale-ups that are implicit in the selection and measurement of variables, which necessitate changes to some standard protocols; (iii) the value of planned comparisons in surveys as a way of strengthening inference and of employing approaches, like control species, where other forms of control cannot be gained; (iv) the necessity of viewing random sampling as essential to the selection of sites, which means we should abandon the notion of ‘representative’ reaches; (v) the use of sample compositing and sub-sampling to optimise sampling effort at those replicates that provide degrees of freedom for hypothesis tests while cutting costs; and (vi) openness to new forms of analysis, like quantile regression, which tests non-traditional hypotheses about constraints rather than mean or central responses, and which deals much better with sorting between the effects of multiple stressors.
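The contrast in point (vi) can be sketched numerically. When a stressor sets an upper limit (a ‘ceiling’) on a response but other, unmeasured stressors push many observations below that limit, ordinary regression through the mean underestimates the constraint, while an upper-quantile fit recovers it. A minimal sketch, assuming simulated data — all variable names, values and the pinball-loss fitting routine are invented for illustration; the paper itself contains no code:

```python
# Hypothetical illustration: quantile regression estimates a chosen quantile
# (here the 90th percentile, an upper constraint) rather than the mean.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
stressor = rng.uniform(0, 10, 500)           # e.g. fine-sediment cover (%)
# Simulated response with a ceiling set by the stressor (true slope -8),
# but often lower for unrelated reasons (other, unmeasured stressors).
ceiling = 100 - 8 * stressor
abundance = ceiling * rng.uniform(0, 1, 500)

def pinball_loss(params, q):
    """Check (pinball) loss for quantile q of a linear model."""
    a, b = params
    resid = abundance - (a + b * stressor)
    return np.mean(np.where(resid >= 0, q * resid, (q - 1) * resid))

# Ordinary least squares estimates the mean response ...
ols_slope = np.polyfit(stressor, abundance, 1)[0]
# ... while minimising the pinball loss at q = 0.9 tracks the ceiling.
fit90 = minimize(pinball_loss, x0=[50.0, -1.0], args=(0.9,),
                 method="Nelder-Mead")
q90_slope = fit90.x[1]

print(f"OLS (mean) slope:    {ols_slope:.1f}")   # roughly half the true limit
print(f"90th-quantile slope: {q90_slope:.1f}")   # much closer to the limit
```

The mean response here declines at only about half the rate of the ceiling, so a mean-based model would understate how strongly the stressor constrains the biota; the upper-quantile slope approaches the true limiting rate.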
4. Thematic implications: sorting between the effects of multiple stressors caused by human impacts needs the best possible scientific inference we can apply. Common forms of studies in the modern stream literature suggest we collectively know less now than we did 40–50 years ago, because some fundamental aspects of strong inference and basic knowledge in ecology seem to have been forgotten or lost. This raises questions about the quality of ecological training provided at universities. Although some aspects of good design are seen as ‘too expensive’, cost per se is relative. A well-designed programme that has been optimised for the funds available is far cheaper than a poorly designed survey that provides inaccurate information and predictions, which are more likely to lead to poor management decisions.