Pseudofactorialism is defined as ‘the invalid statistical analysis that results from the misidentification of two or more response variables as representing different levels of an experimental variable or treatment factor. Most often the invalid analysis consists of use of an (n + 1)-way anova in a situation where two or more n-way anovas would be the appropriate approach’. My students and I examined a total of 1362 papers published from the 1960s to 2009 reporting manipulative experiments, primarily in the field of ecology. The error was present in 7% of these, including 9% of 80 experimental papers examined in 2009 issues of Ecology and the Journal of Animal Ecology. Key features of 60 cases of pseudofactorialism are tabulated as a basis for discussion of the varied ways and circumstances in which the error can occur. As co-authors, colleagues, and the anonymous referees and editors who approved the papers for publication, a total of 459 persons other than the senior authors shared responsibility for these 60 papers. Pseudofactorialism may sometimes be motivated by a desire to test whether different response variables respond in the same way to treatment factors; proper procedures for doing so are briefly reviewed. A major cause of pseudofactorialism is the widespread failure of statistics texts, the primary literature and documentation for statistical software packages to distinguish the three major components of experimental design – treatment structure, design structure and response structure – and to clearly define key terms such as experimental unit, evaluation unit, split unit, factorial and repeated measures. A quick way to check for the possible presence of pseudofactorialism is to determine whether the number of valid experimental units in a study is smaller than (i) the error degrees of freedom in a multi-way anova; or (ii) the total number of tallies (N) in a multi-way contingency table.
Such situations can also indicate pseudoreplication, however, rather than pseudofactorialism alone.
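The quick diagnostic described above can be illustrated with a minimal sketch. The design parameters below (3 treatment levels, 4 replicate experimental units per level, 2 response variables) are hypothetical, not taken from any of the tabulated cases; the degrees-of-freedom formulas assume a balanced design with a full cell-means model.

```python
def error_df_invalid(t, r, m):
    # Error df for the invalid (n + 1)-way anova that misidentifies the
    # m response variables as levels of an extra treatment factor:
    # N = t*r*m "observations" minus t*m cell means.
    return t * m * r - t * m

def error_df_valid(t, r):
    # Error df for one of the m separate, valid one-way anovas:
    # t*r observations minus t treatment means.
    return t * r - t

# Hypothetical balanced design: 3 treatments, 4 units each, 2 responses.
t, r, m = 3, 4, 2
n_units = t * r  # 12 valid experimental units

# The check: error df exceeding the number of valid experimental
# units flags possible pseudofactorialism (or pseudoreplication).
print(error_df_invalid(t, r, m), ">", n_units, "-> red flag")
print(error_df_valid(t, r), "<", n_units, "-> no flag")
```

Here the invalid analysis claims 18 error degrees of freedom from only 12 experimental units, whereas each valid one-way anova has 9, so only the former trips the check.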