## Introduction

Statistical errors are common across a wide range of disciplines and even in the most prestigious journals (e.g., Good and Hardin 2003, García-Berthou and Alcaraz 2004, Ioannidis 2005, Strasak et al. 2007, Fernandes-Taylor et al. 2011). One study concluded that approximately 44% of the articles reporting statistics in a high-quality medical journal contained errors (Glantz 1980). Ecological research is not immune from statistical mistakes, misconceptions, and miscommunications. Although a few of these pitfalls stem from errors in calculation, the majority are more insidious, resulting from incorrect logic or interpretation despite numerically correct calculations. Such statistical errors are dangerous, not because a statistician might come along and hand out a demerit, but because they lead to incorrect conclusions and potentially poor management and policy recommendations.

Over two decades ago, a paper identifying the 10 most common statistical errors from a journal editor's perspective was published in the Bulletin of the Ecological Society of America (Fowler 1990). We revisit and add to that discussion from our own perspective, drawing on nearly a century of combined experience in statistical consulting and collaboration across a wide range of ecological topics. First, we note that some of the errors Fowler identified over 20 years ago are still prevalent; in our experience, these weak statistical practices crop up over and over again. Two decades of the same mistakes suggests that the message needs to be repeated, so we outline these frequent misapplications of statistics and provide brief, simple solutions. Second, we bring the perspective of consulting and collaborating statisticians, who often identify erroneous scientific and statistical logic that infects projects long before the publication stage evaluated by Fowler (1990). Third, we highlight new sources of opportunity and common confusion that have arisen with key statistical advances over the past 20 years.

Counting up statistical mistakes is both fun and shocking but rarely constructive. Instead, we focus on identifying solutions. In this manuscript we de-emphasize the details of applying particular statistical tools and instead focus on statistics fundamentals, probabilistic thinking, and the development of correct statistical intuition. What follows is a simple list of what are, in our estimation, the most common and most easily avoidable statistical pitfalls, drawn from a non-random, somewhat biased, but extensive sample of projects, questions, and draft manuscripts. The list is loosely organized into four chronological categories: setting up an analysis, experimental design, application of statistics, and interpretation of statistical tests and models. Within each category the pitfalls, presented in no particular order, are intended to serve as a reference or reminder when conducting or reviewing various stages of analysis. We necessarily begin with issues that reflect gaps in statistical thinking and basic statistical practice; these are by far the most common and most easily avoidable mistakes and are relevant to nearly all statistical analyses. We then describe common errors and solutions for more specific yet still frequently encountered situations.