Two eye-tracking experiments investigated how and when pointing gestures and verbal location descriptions affect target identification. Using videos of human gestures with a human voice, as well as animated gestures with synthesized speech, the experiments examined the effect of gestures and referring expressions on the time course of fixations to the target. Ambiguous yet informative pointing gestures elicited attention and facilitated target identification, much as verbal location descriptions did. Moreover, target identification was superior when pointing gestures and verbal location descriptions were combined. These findings suggest that gesture does not merely serve as context for verbal descriptions, nor verbal descriptions as context for gesture; rather, the two complement one another in reference resolution.