Newborn infants appear to possess an innate bias that guides preferential orienting to and tracking of human faces. There is, however, no consensus on the underlying mechanism supporting such a preference. In particular, two competing theories (known as the ‘structural’ and ‘sensory’ hypotheses) posit fundamentally different biasing mechanisms to explain this behavior. The structural hypothesis holds that a crude ‘3-dot’ representation of face-specific geometry is responsible for the exhibited preference. By contrast, the sensory hypothesis holds that face preference is the product of several generic visual preferences, for qualities such as dark/light vertical asymmetry, horizontal symmetry, and high contrast. To complement existing empirical results, the current study describes a computational investigation of how well both proposals support face detection in cluttered natural scenes. The results demonstrate that both models can locate faces effectively, but that the model suggested by the ‘sensory’ hypothesis is more selective than the model suggested by the ‘structural’ hypothesis.
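To make the contrast between the two hypotheses concrete, the sketch below scores an image patch under a toy version of each mechanism: a ‘3-dot’ template correlation for the structural account, and a sum of generic cues (upper-half darkness, left/right mirror symmetry, contrast) for the sensory account. The dot geometry, the choice of cues, and the equal weighting are illustrative assumptions, not the study’s actual implementation.

```python
import numpy as np

def three_dot_template(h, w):
    """Crude '3-dot' face template: two dark 'eye' dots above one
    'mouth' dot on a light background. Dot positions and sizes are
    illustrative assumptions, not the study's exact geometry."""
    t = np.ones((h, w))
    for r, c in [(h // 3, w // 3), (h // 3, 2 * w // 3), (2 * h // 3, w // 2)]:
        t[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2] = 0.0  # dark dot
    return t

def structural_score(patch):
    """Structural hypothesis (toy version): normalized correlation
    between the patch and the 3-dot template."""
    t = three_dot_template(*patch.shape)
    p, t = patch - patch.mean(), t - t.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def sensory_score(patch):
    """Sensory hypothesis (toy version): equally weighted sum of
    generic preferences -- a darker upper half (dark/light vertical
    asymmetry), left/right mirror symmetry, and overall contrast."""
    h, _ = patch.shape
    top_heavy = patch[h // 2:].mean() - patch[:h // 2].mean()  # upper half darker
    symmetry = -np.abs(patch - patch[:, ::-1]).mean()          # mirror similarity
    contrast = patch.std()                                     # RMS contrast
    return float(top_heavy + symmetry + contrast)
```

Under either scoring rule, a face-like patch (eyes above mouth) outranks a uniform patch, which is the shared prediction; the hypotheses diverge on non-face patterns that happen to satisfy the generic sensory cues.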