Publication bias arises when studies with favorable results are more likely to be reported than are studies with null findings. If this bias occurs in studies with single-subject experimental designs (SSEDs) on applied behavior-analytic (ABA) interventions, it could lead to exaggerated estimates of intervention effects. Therefore, we conducted an initial test of bias by comparing effect sizes, measured by percentage of nonoverlapping data (PND), in published SSED studies (n = 21) and unpublished dissertations (n = 10) on 1 well-established intervention for children with autism, pivotal response treatment (PRT). Although published and unpublished studies had similar methodologies, the mean PND in published studies was 22% higher than in unpublished studies, 95% confidence interval (4%, 38%). Even when unpublished studies were included, PRT appeared to be effective (PND M = 62%). Nevertheless, the disparity between published and unpublished studies suggests a need for further assessment of publication bias in the ABA literature.
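For readers unfamiliar with the effect size used above: PND is conventionally computed as the percentage of treatment-phase data points that exceed the highest baseline-phase data point (for behaviors targeted for increase). The sketch below is a minimal illustration of that convention, not code from the study; the function name and example data are hypothetical.

```python
def pnd(baseline, treatment):
    """Percentage of nonoverlapping data (PND) for a behavior
    expected to increase: the share of treatment-phase points
    strictly above the highest baseline point."""
    threshold = max(baseline)  # highest baseline data point
    nonoverlapping = sum(1 for x in treatment if x > threshold)
    return 100.0 * nonoverlapping / len(treatment)

# Hypothetical single-subject data: 3 of 4 treatment points
# exceed the baseline maximum of 4, so PND = 75%.
print(pnd([2, 3, 4], [5, 6, 3, 7]))
```

Scores near 100% suggest a highly effective intervention, while the 62% mean reported above falls in the range conventionally interpreted as moderately effective.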