Reviewing, Categorizing, and Analyzing the Literature on Black–White Mean Differences for Predictors of Job Performance: Verifying Some Perceptions and Updating/Correcting Others

Authors

Philip Bobko, Department of Management, Gettysburg College, Gettysburg, PA 17325; pbobko@gettysburg.edu.

  • We thank the editor, two anonymous reviewers, Amy Hooper, and Barbara Bobko for helpful comments on earlier versions of this work.

Abstract

In both theoretical and applied literatures, there is confusion regarding accurate values for expected Black–White subgroup differences in personnel selection test scores. Much of this confusion arises because empirical estimates of standardized subgroup differences (d) are subject to many of the same biasing factors associated with validity coefficients (i.e., d is functionally related to a point-biserial r). To address such issues, we review/cumulate, categorize, and analyze a systematic set of many predictor-specific meta-analyses in the literature. We focus on confounds due to the general use of concurrent, versus applicant, samples in the literature on Black–White d. We also focus on potential confusion due to different constructs being assessed within the same selection test method, as well as the influence of those constructs on d. It is shown that many types of predictors (such as biodata inventories or assessment centers) can have magnitudes of d that are much larger than previously thought. Indeed, some predictors (such as work samples) can have ds similar to those associated with paper-and-pencil tests of cognitive ability. We present more realistic values of d for both researcher and practitioner use. Implications for practice and future research are noted.
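The abstract notes that d is functionally related to a point-biserial r. For readers unfamiliar with that link, the conventional psychometric conversion can be sketched as follows (a sketch, assuming subgroup proportions p and q = 1 − p; these symbols are not defined in the original text):

```latex
% Conversion between the point-biserial correlation r_{pb} and the
% standardized mean difference d, where p and q = 1 - p are the
% proportions of the two subgroups in the sample:
r_{pb} = \frac{d}{\sqrt{d^{2} + \dfrac{1}{pq}}},
\qquad
d = \frac{r_{pb}}{\sqrt{pq\left(1 - r_{pb}^{2}\right)}}.
% With equal subgroup sizes (p = q = .5), 1/(pq) = 4,
% so r_{pb} = d / \sqrt{d^{2} + 4}.
```

Because of this relationship, factors that attenuate or inflate r (e.g., range restriction in concurrent samples, measurement error) attenuate or inflate d in the same way, which is the point the abstract is making.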
