This essay began as a guidance document for Wildlife Society Bulletin (WSB) Associate Editors (AEs). Colleagues who reviewed an early draft mentioned that the manuscript also had important implications for Reviewers, so I revised it accordingly. More recently, as the early white-paper version circulated among AEs, Reviewers, and others, numerous colleagues suggested I revise the manuscript to include the role of Authors. This document is the outcome of that process.

Authors


Scientific journals exist because Authors write manuscripts and submit them for publication. Thus, Authors are the most important part of a scientific journal. Authors of scientific manuscripts have a huge and multifaceted burden. As wildlife scientists, they have been involved in collecting data, usually in remote locations and difficult conditions, analyzing data using a bewildering array of analytical techniques, and interpreting what those data mean. Then, when it comes time to share their research results with the rest of the world, wildlife scientists become writers. And not just any old kind of writer, either. They become scientific writers, and therein lies a large part of that huge and multifaceted burden. This burden involves writing with accuracy, precision, clarity, and brevity (inasmuch as possible), in a highly structured format that is nearly inflexible and almost completely unforgiving. Scientific writing is one of the most difficult forms of writing for an author to master. It takes work. In fact, it takes a great deal of work. And it is not for the faint of heart.

Obligations of Authors

If we start with the assumption that an Author has chosen WSB (or any other journal, for that matter) as an outlet for their paper, a set of obligations comes into play. Some obligations are mechanical, such as complying with style-format conventions. Other obligations are ethical, such as not misrepresenting results and not citing literature you have not read. Still other obligations are subtle, yet critical, such as not discussing concepts, ideas, or theories that are beyond the inferential bounds of your data. Understanding these obligations immediately gives an Author a leg up on navigating the peer-review process.

Another obligation of an Author is to write clearly. Say what you mean. Mean what you say. It sounds easy, but it is not. Even the most basic, descriptive studies have complex biological and-or ecological factors at their core. Distilling this complexity into simple declarative sentences is crucial for communicating the thoughts in your mind to the eyes of the reader.

Writing clearly usually requires rewriting. Revising a manuscript is both an extensive and an intensive process. As an Author or co-author of more than 100 scientific papers, I can safely say that most of the manuscripts that became those papers went through anywhere from 8 to >12 revisions before being submitted. I am sure there are colleagues out there who go through far fewer revisions in the process of getting a manuscript submitted. I hope to meet them someday.

Authorship

Over time, the number of Authors on a scientific paper has steadily increased. I constantly marvel at the single-author papers written in the distant past by naturalists and ecologists such as Joseph Grinnell, Paul Errington, Carl Koford, G. E. Hutchinson, and others. Today, the collaborative and interdisciplinary nature of science, wildlife science included, requires teams of researchers to tackle important questions. For example, even a relatively simple study of geographic variation in a terrestrial vertebrate might require a molecular ecologist (or at least a lab technician), a GIS expert, a field collector (or graduate student), and, perhaps, a statistician who knows something about Bayesian analysis. The contribution of each collaborator is crucial to the success of even this relatively simple project. Without any one of those collaborators, such a project would probably be impossible to complete successfully. Thus, in my view, co-authorship for all 4 collaborators is appropriate in such a case. The litmus test for co-authorship of a scientific paper is framed by the simple question: “Could I have done this project without that person's help?” If the answer is “no,” then that person should be offered the opportunity to be a co-author. They should also, of course, be involved in the process of writing and revising the manuscript.

Style-format conventions

One of the easiest ways for Authors to endear themselves to the editorial staff of a journal, as well as to Reviewers and AEs, is to follow the style-format conventions of the particular journal to which they are submitting their manuscript. Quite frankly, I am astonished at how many Authors seem to overlook basic aspects of style-format conventions when they prepare their manuscripts. To me, failure to follow the style-format conventions of a journal is an indication of potential carelessness in other aspects of the manuscript, including analysis and interpretation of data. It is also an indication that the manuscript might have been flipped to the current journal from a journal where it was previously rejected. During my tenure as both Editor-in-Chief and AE of The Journal of Wildlife Management, it was distressingly common to see submitted manuscripts with vestiges of, if not wholesale, style-format conventions from other journals. I have therefore implemented a policy whereby submitted manuscripts that significantly depart from basic WSB style-format will be rejected without review. I also encourage AEs to make Authors comply with the basic elements of WSB style-format for the revised manuscripts they handle. The WSB editorial office has developed an excellent template for Authors to use so they can easily follow WSB style-format while they prepare their manuscripts. There is no excuse for Authors to depart from the basic style-format conventions of WSB when they submit a manuscript to this journal.

Time to submit?

I have often had graduate students and early-career colleagues ask me when the right time to submit a manuscript is, or how to know a manuscript is ready to submit. Two rules of thumb seem to apply here. Rule 1: Work the manuscript over until you cannot improve it anymore, and then send it to at least 2 trusted colleagues to review. Rule 2: After receiving review comments from your trusted colleagues, incorporate their comments (assuming such comments are useful and appropriate) and then continue to work the manuscript over until you (and your co-authors) cannot improve it any more. When you reach this point, it is time to submit your manuscript. Inclusion of a cover letter explaining that your manuscript is being submitted exclusively to WSB, and that no part of the manuscript is under consideration for publication elsewhere, is de rigueur. When your manuscript goes to a journal such as WSB, you will interact with the Editorial Staff, who will deal with processing details, and eventually you will hear from an AE about the fate of your already long-traveled manuscript.

The Job of an Associate Editor


The AE fills a unique and critical role in scientific publishing. This is because leading scientific journals always attract more manuscripts than a single Editor can handle. For example, with a revitalized year of online publishing behind us, new manuscripts are being submitted to WSB at the rate of about 1 per business day. This means we can expect to receive >250 manuscripts to process in a calendar year. There is no way a single Editor could even think about handling such a load of manuscripts. Also, the range of technical expertise represented in manuscripts submitted to WSB exceeds the ability of any single Editor to judge their merit. Therefore, as in any society, a division of labor must be organized, and AEs represent a crucial aspect of how such editorial labor is divided.

The Job of a Reviewer


In the peer-review process, Reviewers are ground zero when it comes to analyzing a manuscript and determining whether it is publishable. Reviewers are selected by the editorial staffs of journals because they have developed reputations for being experts on a topic that is central to a manuscript that was submitted for publication. Because most peer-reviews are confidential, Reviewers are in the unenviable position of being asked to work hard to assess the merits and limitations of a manuscript; yet, at the same time, they work in the shadows of confidentiality, which is often essential to a frank and honest review of a manuscript.

Many of the points in this commentary that pertain to AEs are also germane to Reviewers. First and foremost, a Reviewer needs to be able to give a fair and honest assessment of a manuscript. Far too often, personal biases, rivalry, and other aspects of the human condition have the potential to influence Reviewers and AEs. When this is the case, Reviewers and AEs should recuse themselves from handling the manuscript in question. Second, and also important, Reviewers should not include language in their comments to the Authors about whether they think the manuscript being reviewed is publishable. If you tell Authors that you think their paper is publishable, it might cause problems down the road, especially if the paper ends up being rejected.

Guiding the review process

When reviews are completed, the AE becomes the front line in the process of analyzing and interpreting the Reviewer comments and recommendations. There is no “one size fits all” approach to how an AE should guide the review process, other than striving to exercise good judgment and fairness. When both Reviewers agree in their recommendations about a manuscript, the editorial decision by the AE more often than not follows what the Reviewers recommend. However, AEs have an obligation to look beyond Reviewer comments and recommendations, and to make their own editorial judgment. This can be done early in the review process, whereby an AE has the authority to reject a manuscript either before it is assigned to Reviewers, or shortly thereafter so that the Reviewers' time is not wasted. Such cases should be limited to manuscripts that have a fatal flaw in study design or analysis. It is not appropriate for an AE to automatically reject a manuscript without review because they think the subject matter or other topical areas covered in the manuscript are not appropriate for WSB; such a decision is the purview of the Editor-in-Chief.

Although it is unusual, 2 Reviewers may both recommend “minor revisions” for a manuscript in which the AE then uncovers a fatal flaw in study design or data analysis. In a situation like this, the recommendation of the AE should be to reject the manuscript. When reviews are mixed—one Reviewer recommends “minor changes” and the other recommends “reject”—the AE should take on the role of a third Reviewer and sort out the editorial issues in the interest of fairness to the Authors. Seeking a completely independent third review, while potentially useful, more often than not causes unnecessary delays in the review process and is not usually very informative.

Minor versus major revisions

These are clearly relative terms that mean different things to different people. My philosophy is that WSB manuscripts that need “minor” revisions are those where clarity of writing, style-format conventions, and other relatively easy fixes can be made by Authors in a day or two. Manuscripts that require “major” revisions are those where various sections need extensive rewriting and revised interpretation. Manuscripts that need major revisions often have problems, such as incomplete Methods sections, Results that are not presented clearly, Discussions that rehash Results instead of actually discussing Results, and Management Implications that are an extension of the Discussion rather than actually describing Management Implications. Many of these problems are discussed in more detail in the sections below. Such manuscripts often have serious deviations from style-format conventions, poor organization of text, poor construction of tables and figures, and problems with Literature Cited sections. However, the redeeming quality of manuscripts that need major revisions should be that the data and analyses are new, interesting, and meet the WSB masthead mission statement of “Integrating Wildlife Science and Management.” I strongly discourage AEs from seeking additional reviews of revised manuscripts, unless there are serious and extenuating circumstances. If additional data analyses or extensive re-analyses are needed, then my philosophy is to reject the manuscript and suggest that the Authors perform such analyses (along with making the other necessary editorial fixes, which are probably legion) and let them know that they will need to deal with the manuscript as a new submission to WSB and not as a revision.

Another critical role that AEs must accept—especially in the case of manuscripts that require major revisions—is to help the corresponding author navigate Reviewer comments and recommendations that are conflicting or at cross-purposes. For example, it is not unusual for a Reviewer to make some sort of editorial recommendation, only to have the comments of the second Reviewer contradict it. This is a situation where the AE must step in and reconcile the issue one way or another for the author. Simply telling a corresponding author to “make all recommended changes by the Reviewers” may not be the best way to help them as they struggle to revise their manuscript.

There are also situations where AEs will encounter resistance from Authors when it is recommended that they reduce the lengths of various parts of their manuscript or make other necessary editorial changes that are not completely agreeable to Authors. Sometimes, Authors will fall back on the “well, we are paying page charges” excuse to not reduce the length of their manuscript. In these situations, it is best that the AE offers the corresponding author the opportunity to withdraw their manuscript and submit it elsewhere.

Embargo policy

The Wildlife Society has an embargo policy for all 3 of the scientific journals that it publishes. This policy is intended to keep Authors from issuing press releases or calling press conferences that distribute data and-or unpublished manuscripts to print, electronic, and broadcast media outlets while those manuscripts are in review or in editorial production after being accepted for publication. The intent of such a policy is to ensure that high standards of peer-review are maintained and that the scientific integrity of the published manuscript is protected. The appropriate time for an Author to make a press release about content or data in a manuscript is after it appears in the Early View section of the WSB website. For more details about this embargo policy, please consult the Journal of Wildlife Management instructions for Authors.

Timing and deadlines

We strive to return initial decisions on manuscripts (reject or revise) to Authors within 6–8 weeks of assignment of AEs and Reviewers. I realize that this is optimistic and might be impossible to achieve in some circumstances. However, it is my philosophy that we should offer Authors an informed and reasonable decision about their manuscripts in as short a time as possible. The timing of deadlines for returning reviews and revisions in ScholarOne is far too liberal (long) in my view, and I intend to work to change (shorten) them. Of course, you are strongly encouraged to beat the ScholarOne deadlines whenever possible. As professional over-achievers—if you are writing scientific manuscripts for publication, have signed up to be an AE of a leading scientific journal, or have been selected to review a manuscript by such a journal, you are by definition an over-achiever—all of you have gotten where you are in life by beating deadlines and exceeding expectations.

Recurrent Problems and Tactical Solutions


Over the years, I have observed that there are recurrent problems unique to the basic sections of scientific manuscripts that I review and edit. In this section, I discuss many of these issues in relation to the places in a manuscript where they typically occur. The material in this section is germane to Authors, AEs, and Reviewers.

Abstract

One purpose of a well-written abstract is to make a person want to read the rest of the published paper. After they scan the Table of Contents, the Abstract is the next part of a paper the reader will encounter as they drill down into the content of an issue of a journal. More often than not, it is also the last part of a paper that they will read before going on to look for other articles.

In addition to summarizing the salient results of a study, an Abstract should provide as much detail as possible, given the constraints of space. Again, this is because an Abstract is the only part of a scientific paper that most people will read, and because it is the part of a paper that abstracting services and computerized bibliographic databases will capture and compile. Therefore, it is important to put as much data as possible into an Abstract of a scientific paper. This means Authors should report actual comparative or experimental effect sizes and other important metrics (when possible) along with research hypotheses (when possible) and sharp interpretations of what the data mean. Statistical coefficients such as P-values, df, F, t, AIC, etc., typically do not belong in an Abstract. Sometimes, metrics of model performance, such as the amount of variation explained in a dependent variable by one or more independent variables, are appropriate. In any case, the importance of reporting data instead of statistics in an Abstract cannot be over emphasized.

Introduction

Two common problems with Introductions of scientific manuscripts are 1) they are too long, and 2) they fail to provide appropriate information. Far too often, both of these problems can be found in the same Introduction of a submitted manuscript. Authors typically do a good job of describing the objectives of their study in the Introduction, and this is fine. Asking them to also include the purpose of their study—or why they did it in the first place—is also a good idea when Authors overlook this important point. Also, where possible, it is a good idea to encourage Authors to add appropriate language in their Introductions that describes the research hypotheses that are at the core of their study. Research hypotheses are simply questions about whether, how, or why certain phenomena occur, and they are typically tested through deduction. Research hypotheses are not “null” hypotheses, whereby an investigator poses a simple “there is no statistical difference between or among various groups…” statement. Research hypotheses generate questions that are usually much more interesting and informative than null hypotheses.

Admittedly, a large number of the topics addressed in wildlife science manuscripts are descriptive; many wildlife studies are not structured around research hypotheses. However, I will argue that this seems to be changing. A surprising number of contemporary wildlife studies actually are rooted in research hypotheses, even if the author does not realize it. For example, a simple test of whether biological entities are or are not distributed randomly across space is a research hypothesis. An evaluation of whether body size of a terrestrial vertebrate increases (or decreases) along some kind of environmental gradient is also a research hypothesis, as is a long-term assessment of whether annual productivity of a grassland bird corresponds with seasonal precipitation. (For more details on research hypotheses in wildlife research, see Guthery 2008:18). Posing research hypotheses in the Introduction can also help maintain structure and organization in the Discussion section, as noted below.

Study Area

The Study Area is typically the section of a scientific manuscript least fraught with problems. Long lists of plant species can be problematic, but such details are also key information for many habitat studies. Today, electronic publishing formats allow opportunities for supplemental information, such as Global Positioning System locations of plots and transects, which can be filed in a permanent archive. Authors should be encouraged to do this wherever possible and appropriate.

Methods

The litmus test of a good Methods section in a scientific manuscript is “Can this study be replicated using only the language in the Methods section?” If the answer is no, then there are problems with Methods that will need to be rectified. Encouraging Authors to divide their Methods sections into subheadings, such as “Data Collection” and “Data Analysis” can be a useful way to keep the reader on track when slogging through complex material. These kinds of subheadings certainly help keep me on track when I read a manuscript.

Results

The most widespread and pernicious problem with many Results sections is that Authors emphasize reportage of statistics at the expense of biological and-or ecological relationships in their data. In other words, far too often, the statistical tail ends up wagging the biological dog, usually because Authors perform statistical analyses in a ritualistic manner instead of an analytical one. For example, the idea of “statistical significance” often seems to be more important to many Authors than the biological or ecological patterns in the data. Frequentist statistics are especially problematic because testing for statistical significance is often insignificant in and of itself (Johnson 1999).

Fortunately, there are ways to help Authors get around this dilemma. One of the most effective ways to bring biology and ecology to the forefront of a Results section is to encourage Authors to relegate statistical coefficients to their tables and figures, and keep the Ps, ts, dfs, Fs, AICs, etc., out of the text as much as possible. Although this is not always 100% possible, it is astonishing how the removal of vast quantities of in-text statistical coefficients can improve the readability of a manuscript. This is also the case with excessive use of acronyms in Results and other sections of manuscripts, as noted by Block (2012). Of course, there are cases where results of analyses are so limited that they do not warrant a table or figure, and that is fine; statistical coefficients can be included in the text in such situations. However, moving as many statistical coefficients out of the text as possible will force an Author to focus on reporting comparative or experimental effect sizes, magnitudes of effects, actual metrics of observations, and what the data really mean, rather than whether the data were “statistically significant.” Doing this can be tricky because in-text repetition of data that are also reported in tables and figures should be discouraged. However, it is imperative that Authors strive to meet such an objective.
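
As a minimal, hypothetical illustration of reporting an effect size rather than a bare P-value (the groups, variable names, and numbers below are invented for the sketch, not drawn from any real study), the difference between 2 group means can be reported with a 95% confidence interval in a single sentence of text:

```python
# Hypothetical sketch: report the magnitude of an effect (difference in
# group means with a 95% CI) rather than only whether P < 0.05.
import numpy as np
from scipy import stats

# Invented clutch sizes for 2 hypothetical treatment groups.
burned = np.array([11, 13, 12, 14, 12, 13, 15, 12])
unburned = np.array([10, 11, 12, 10, 11, 13, 11, 10])

diff = burned.mean() - unburned.mean()            # effect size (difference in means)
se = np.sqrt(burned.var(ddof=1) / burned.size +
             unburned.var(ddof=1) / unburned.size)
df = burned.size + unburned.size - 2              # simple pooled-df approximation
ci = stats.t.ppf(0.975, df) * se                  # half-width of the 95% CI

print(f"Mean difference = {diff:.2f} eggs "
      f"(95% CI: {diff - ci:.2f} to {diff + ci:.2f})")
```

The printed sentence conveys the magnitude and direction of the effect; the supporting t, df, and P can then be relegated to a table.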

During the past decade or so, Information-Theoretic (IT) models that use information coefficients such as AIC have become extremely popular as an alternative to frequentist statistics. While IT models can be useful tools to draw inference from data, they are not without their potential drawbacks. This is because the IT approach compares sets of competing models and identifies the “best” model out of a set of models. This is fine on one level, but people who use the IT approach to data analysis run the risk of identifying a lousy model that is the so-called best model out of a set of even worse models. This is what can happen when analytical approaches are relative rather than absolute. One way around this dilemma with IT models is to include some metric of model performance, such as percentage of variance explained, goodness-of-fit, classification success, or some other measure that will give readers an idea of how well a certain model works, rather than just an indication that it is the best of the bunch under consideration. Independent testing of so-called “best” models with independent data is strongly encouraged, and is something that I hope we see more of in the future.
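
To make this concrete, here is a hedged sketch (the data, covariate names, and candidate models are invented solely for illustration) that ranks a small set of linear models by AIC and also reports R2 for each, so readers can judge not only which model is "best" but how well that model actually performs:

```python
# Hypothetical sketch: rank candidate linear models by AIC, but also report
# R^2 so the absolute performance of the "best" model is visible.
import numpy as np

rng = np.random.default_rng(1)
n = 50
precip = rng.uniform(20, 60, n)                    # invented covariates
temp = rng.uniform(5, 25, n)
density = 0.05 * precip + rng.normal(0, 1.5, n)    # invented response

def fit_aic_r2(covariates, y):
    """Least-squares fit; return AIC (Gaussian likelihood) and R^2."""
    X = np.column_stack([np.ones(len(y))] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rss = float(resid @ resid)
    k = X.shape[1] + 1                             # coefficients + residual variance
    aic = len(y) * np.log(rss / len(y)) + 2 * k
    r2 = 1 - rss / float(((y - y.mean()) ** 2).sum())
    return aic, r2

candidates = {
    "precip": [precip],
    "temp": [temp],
    "precip + temp": [precip, temp],
}
for name, covs in candidates.items():
    aic, r2 = fit_aic_r2(covs, density)
    print(f"{name:15s}  AIC = {aic:7.2f}  R^2 = {r2:.2f}")
```

Reporting R2, classification success, or a goodness-of-fit statistic alongside the AIC ranking is what keeps a "best" model from quietly being the least-bad member of a poor candidate set.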

Regardless of the analytical approach used, the successful strategy is to report overall emergent trends, patterns, and contrasting elements of the data in Results and to do so in a context where biology and ecology are emphasized. This is both a challenge and an opportunity in scientific writing; however, the best scientific papers reflect this kind of writing. Encourage Authors to adopt the philosophy that statistics are a backstop meant to bolster or support results, rather than treat them as results unto themselves. Remember, data, not statistics, are at the root of scientific progress.

Complete coverage of all the potential pitfalls and problems of statistical analyses in wildlife science is beyond the scope of this commentary. Some analytical details regarding these issues, which should be helpful, are covered in an Appendix by Gary White located at the end of this commentary. Finally, it is important to note there are contemporary examples in the wildlife science literature that illustrate that it is possible to publish papers that make strong inferences without use of either frequentist statistics or IT models (Guthery et al. 2005, Rader et al. 2011).

Discussion

Two red-flag problems with many Discussion sections in scientific manuscripts are 1) Authors fall into the trap of rehashing their results, and-or 2) Authors speculate about phenomena that are outside the inferential bounds of their data. Identifying these kinds of problems, and showing Authors how to eliminate them, are excellent editorial tactics for creating readable and interesting Discussion sections. Furthermore, urge Authors to go back to the objectives and research hypotheses outlined in their Introduction and discuss the extent to which they were met (or were not met, and why). Finally, a well-written Discussion should, of course, address what the Results mean in relation to previous work on similar topics.

Management Implications

The problem with many Management Implications sections is that they are not about Management Implications (Guthery 2011). Far too many Authors think that the Management Implications section is an opportunity to continue the Discussion, with frequent interjections of the word “management” into the text. It is not. A Management Implications section should be a pithy and to-the-point paragraph that zeroes in on the implications of the study for managers, however managers are defined. Some manuscripts, such as those that describe a new research tool or technique, will obviously have little or nothing in the way of actual management implications to report. This is fine. However, if this is the case, then do not include a Management Implications section in such a manuscript.

Literature Cited

Literature Cited sections offer unlimited opportunity for sloppy scholarship, and it shows in far too many manuscripts submitted for publication. I have lost count of how many times I was reading or editing a manuscript and came across an in-text citation that looked interesting, only to find it absent from the Literature Cited section. Granted, a fine-toothed-comb editorial treatment of both in-text citations and Literature Cited sections is a job for the copy editor (should the manuscript be accepted for publication) and not the AE or Reviewer. However, when you see Literature Cited sections that are obviously a mess, or stumble across a text citation that is not in the Literature Cited, it is well within your purview as an AE or Reviewer to admonish Authors to clean up their act and treat the overall body of work they cite with scholarly rigor. A clean Literature Cited section is not only a basic tenet of following the style-format conventions of a particular journal, it is also an indication of attention to detail; a sloppy one is an indicator of other potential organizational or conceptual problems with the manuscript. And, perhaps most importantly, Authors need to understand and appreciate that it is the scientific literature from the past that helps make their paper relevant to the future. Therefore, Authors should give the literature they cite the respect that it deserves.

Supplementary materials

As noted in the Study Area section above, permanent electronic databases provide a unique opportunity for Authors to archive detailed information about their research to an extent that was simply not possible in the past. Encourage Authors to use these resources where appropriate. It may also help reduce manuscript length.

In Summary


Authors, Editors, and Reviewers are components of a 3-legged stool that supports the seat of a scientific journal. This metaphoric stool has 3 legs so that it can remain stable when it is on the rocky and sometimes turbulent ground of peer-review. Although peer-review can be a maddening and imperfect process, it is the best process we have when it comes to maintaining quality control and rigor in scientific publishing. Imagine if scientists replaced peer-review with press releases as a means of communicating their research results. Chaos would ensue. And scientific progress would probably cease altogether.

Editing scientific and academic manuscripts can be both a rewarding and frustrating task that has been referred to as “…hand to hand combat” by at least one veteran Editor (Plotnik 1982:29). Writing and processing scientific manuscripts for publication, which includes deciding whether manuscripts will or will not be published, is an important intellectual activity that reflects peace and prosperity in our culture. Working with Authors to help them get their scientific results into print is the key element of professional service that AEs and Reviewers for WSB bring to the table. AEs and Reviewers sometimes have the difficult job of telling Authors that their manuscript is being rejected, even though editorial policy dictates that the final decision on acceptability of a manuscript lies with the Editor-in-Chief. Thus, you are in an unenviable situation of having a tremendous editorial responsibility with relatively limited editorial authority. Unfortunately, that's the way the system works. Otherwise, it would be editorial anarchy.

Over the years, 2 books have profoundly influenced my writing and editorial efforts: How to Write and Publish a Scientific Paper by Robert Day (currently Day and Gastel 2006), and The Elements of Style by William Strunk and E. B. White (now in the fourth edition; Strunk and White 2000). If these 2 books are not in your personal library, they should be. The guidance and insight they provide are superb. Of course, learning about writing and editing from books only goes so far; there is no substitute for experience. And you will get that experience by writing papers or working as an AE or Reviewer for WSB.

ACKNOWLEDGMENTS


The following colleagues read this manuscript and provided comments that improved it: F. S. Guthery, Paul R. Krausman, M. L. Morrison, and G. C. White. I am especially grateful for the comment by PRK that an earlier draft pertained to Reviewers as much as it did to AEs, and I appreciate the comments of F. C. Bryant, S. E. Henke, M. J. Peterson, and several other colleagues, who mentioned that this guidance document should be revised to include material for Authors. I am grateful to GCW for providing the comments on quantitative analyses that became the Appendix. The C. C. Charlie Winn Endowed Chair for Quail Research supported much of my time spent writing and revising this paper. Any errors of logic, omission, or commission are, of course, my own.

LITERATURE CITED

  • Block, B. 2012. Journal tweaks and pet peeves. Journal of Wildlife Management 76:223.
  • Day, R. A., and B. Gastel. 2006. How to write and publish a scientific paper. Greenwood Press, Westport, Connecticut, USA.
  • Guthery, F. S. 2008. A primer on natural resource science. Texas A&M University Press, College Station, USA.
  • Guthery, F. S. 2011. Opinions on management implications. Wildlife Society Bulletin 35:519–522.
  • Guthery, F. S., A. R. Rybak, S. D. Fuhlendorf, T. L. Hiller, S. G. Smith, W. H. Puckett, Jr., and R. A. Baker. 2005. Aspects of thermal ecology of bobwhites in north Texas. Wildlife Monographs 159.
  • Johnson, D. H. 1999. The insignificance of statistical significance testing. Journal of Wildlife Management 63:763–772.
  • Plotnik, A. 1982. The elements of editing, a modern guide for editors and journalists. Macmillan, New York, New York, USA.
  • Rader, M. J., L. A. Brennan, F. Hernandez, and N. J. Silvy. 2011. Simulating northern bobwhite population responses to nest predation, habitat and weather. Journal of Wildlife Management 75:582–587.
  • Strunk, W., Jr., and E. B. White. 2000. The elements of style. Fourth edition. Pearson Education, Upper Saddle River, New Jersey, USA.

APPENDIX: SOME TECHNICAL POINTS ABOUT ANALYSES USING FREQUENTIST AND INFORMATION-THEORETIC MODELS IN WILDLIFE SCIENCE


Gary C. White, Department of Fish, Wildlife, and Conservation Biology, 239 Wagar, Colorado State University, Fort Collins, CO 80523, USA

In regard to P-values, there are still times when using classical hypothesis testing is justified. Actual experiments are one case, and presumably the experiment is not testing trivial hypotheses (or you have an even better reason to reject the manuscript). Second, goodness-of-fit (GOF) methods generally resort to hypothesis tests, so P-values need to be reported (although they should not be incorrectly interpreted to mean that, because P > 0.05, the data fit!). The Bayesians have come up with more complex ways to do GOF, but they still end up reporting a P-value based on the likelihood of the data compared with the simulated distribution. More importantly, I generally try to assess whether the conclusions in the paper would be greatly changed had a different paradigm been used. If not, I'm generally inclined to not do the “hand-to-hand fighting” that would be required to make the author change. But there are cases where doing model selection via hypothesis tests (e.g., step-wise regression) is clearly inappropriate, given what we now know about these approaches, and so a fight ensues. Still, not all of us have embraced information-theoretic approaches to the point that we no longer ever do P-values, Burnham included.

AIC asks a different question about the data than does a hypothesis test. As an example, you can have important (defined as P < 0.05) variation in a set of survival rates across time, yet AIC will pick the simpler constant model. The choice is a function of the number of parameters (degrees of freedom of the models) involved. So, as an example, suppose you have 2 models, S(.) and S(t), which have EXACTLY the same AIC value (I'm ignoring the AICc correction, although the argument still holds). AIC will weight each of these equally (i.e., w = 0.5). However, as the degrees of freedom in the S(t) model increase (i.e., a longer time string), the likelihood-ratio test between these models becomes more and more significant:

S(.) K    S(t) K    test df    E(χ2)    P
1         2         1          2        0.157299
1         3         2          4        0.135335
1         4         3          6        0.11161
1         5         4          8        0.091578
1         6         5          10       0.075235
1         7         6          12       0.061969
1         8         7          14       0.051181
1         9         8          16       0.04238
1         10        9          18       0.035174
1         11        10         20       0.029253
1         12        11         22       0.024373
1         13        12         24       0.020341
1         14        13         26       0.017001
1         15        14         28       0.014228
1         16        15         30       0.011921
1         17        16         32       0.01
1         18        17         34       0.008396
1         19        18         36       0.007056
1         20        19         38       0.005935
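
For readers who want to see where the E(χ2) and P columns come from, a brief sketch of the algebra, assuming the usual definition AIC = −2lnL + 2K for each model, is:

```latex
% Sketch of the algebra behind the table, assuming AIC = -2 ln L + 2K.
% Setting the 2 AIC values equal gives the likelihood-ratio statistic directly.
\[
\mathrm{AIC}_{S(.)} = \mathrm{AIC}_{S(t)}
\quad\Longrightarrow\quad
-2\ln L_{S(.)} + 2K_{S(.)} = -2\ln L_{S(t)} + 2K_{S(t)},
\]
\[
\chi^{2} = -2\bigl(\ln L_{S(.)} - \ln L_{S(t)}\bigr)
         = 2\bigl(K_{S(t)} - K_{S(.)}\bigr)
         = 2\,\mathrm{df},
\qquad
P = \Pr\bigl(\chi^{2}_{\mathrm{df}} > 2\,\mathrm{df}\bigr).
\]
```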

The above result, that P = 0.157299 for the 1-df test between S(.) and the 2-parameter S(t) model, is the reason you occasionally hear people say that AIC uses an α = 0.15 level. This statement is sort of true for a 1-df test, but obviously not true as the difference between the numbers of parameters in the 2 models increases. At 8 df between the 2 models, for example, AIC is operating at an α = 0.04238 level (the first value in the table less than the magical 0.05). Further, P = 0.157299 is also the reason that you will see 95% confidence intervals overlap zero in a table that presents parameter estimates, even though the single-df covariate is included in the minimum-AIC model. In summary, AIC is selecting the model that will provide the best predictions (bias vs. variance trade-off), rather than testing for whether there is a difference in survival across time. The 2 paradigms are evaluating different questions.
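
The P column, and therefore the α levels discussed above, can be reproduced with a few lines of code; this is just a quick sketch using the chi-square survival function:

```python
# Reproduce the P column of the table above: when 2 nested models have
# exactly equal AIC, the likelihood-ratio statistic equals 2 * df, where
# df is the difference in the number of parameters between the models.
from scipy.stats import chi2

for df in range(1, 20):           # df = K of S(t) minus K of S(.)
    p = chi2.sf(2 * df, df)       # Pr(chi-square with df degrees of freedom > 2*df)
    print(f"df = {df:2d}   chi2 = {2 * df:2d}   P = {p:.6f}")
```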