SEARCH

SEARCH BY CITATION

References

  • Andersson, R., Ferreira, F., & Henderson, J. M. (2011). The integration of complex speech and scenes during language comprehension. Acta Psychologica, 137, 208216.
  • Baayen, R., Davidson, D., & Bates, D. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59, 390412.
  • Castelhano, M., & Heaven, C. (2010). The relative contribution of scene context and target features to visual search in real-world scenes. Attention, Perception and Psychophysics, 72, 12831297.
  • Coco, M. I., & Keller, F. (2010). Sentence production in naturalistic scene with referential ambiguity. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 10701075). Portland, OR: Cognitive Science Society.
  • Cristino, F., Mathot, S., Theeuwes, J., & Gilchrist, I. (2010). Scanmatch: A novel method for comparing fixation sequences. Behaviour Research Methods, 42, 692700.
  • Durbin, R., Eddy, S., Krogh, A., & Mitchison, G. (2003). Biological sequence analysis: Probabilistic models of proteins and nucleid acids. Cambridge, UK: Cambridge University Press.
  • Findlay, J. M., & Gilchrist, I. D. (2001). Visual attention: The active vision perspective. In M. Jenkins & L. Harris (Eds.), Vision and attention (pp. 83103). New York: Springer-Verlag.
  • Foulsham, T., & Underwood, G. (2008). What can saliency models predict about eye movements? Spatial and sequential aspect of fixations during encoding and recognition. Journal of Vision, 8(2), 117.
  • Frank, M. C., Goodman, N. D., & Tenenbaum, J. B. (2009). Using speakers referential intentions to model early cross-situational word learning. Psychological Science, 20(5), 578585.
    Direct Link:
  • Gleitman, L., January, D., Nappa, R., & Trueswell, J. (2007). On the give and take between event apprehension and utterance formulation. Journal of Memory and Language, 57, 544569.
  • Griffin, Z., & Bock, K. (2000). What the eyes say about speaking. Psychological science, 11, 274279.
    Direct Link:
  • Gusfield, D. (1997). Algorithms on strings, trees and sequences: Computer science and computational biology. Cambridge, UK: Cambridge University Press.
  • Hayhoe, M. (2000). Vision using routines: A functional account of vision. Visual Cognition, 7, 4364.
  • Henderson, J. M. (2007). Regarding scenes. Current Directions in Psychological Science, 16(4), 219-222.
    Direct Link:
  • Humphrey, K., & Underwood, G. (2008). Fixation sequences in imagery and in recognition during the processing of pictures of real-world scenes. Journal of Eye Movement Research, 2(2), 115.
  • Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(1), 14891506.
  • Knoeferle, P., & Crocker, M. W. (2006). The coordinated interplay of scene, utterance and world knowledge. Cognitive Science, 30, 481529.
  • Land, M. F. (2006). Eye movements and the control of actions in everyday life. Progress in Retinal and Eye Research, 25, 296324.
  • Land, M. F., Mennie, N., & Rusted, J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28, 13111328.
  • Malcolm, G. L., , & Henderson, J. (2010). Combining top-down processes to guide eye movements during real-world scene search. Journal of Vision, 10(2) () 111.
  • Neider, M.B., Zelinsky, G. (2006). Scene context guides eye movements during visual search. Vision Research, 46, 614621.
  • Potter, M. (1976). Short-term conceptual memory for pictures. Journal of Experimental Psychology: Human Learning and Memory, 2, 509522.
  • Prasov, Z., & Chai, J. Y. (2010). Fusing eye gaze with speech recognition hypotheses to resolve exophoric references in situated dialogue. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (pp. 471481). Cambridge, MA: Association for Computational Linguistics.
  • Qu, S., & Chai, J. (2007). An exploration of eye gaze in spoken language processing for multimodal conversational interfaces. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (pp. 284291). Rochester: Association for Computational Linguistics.
  • Russell, B., Torralba, A., Murphy, K., & Freeman, W. (2008). Labelme: A database and web-based tool for image annotation. International Journal of Computer Vision, 77(1–3), 151173.
  • Schmidt, J., & Zelinsky, G. (2009). Search guidance is proportional to the categorical specificity of a target cue. Quarterly Journal of Experimental Psychology, 62(10), 19041914.
  • Tanenhaus, M., Spivey-Knowlton, M., Eberhard, K., & Sedivy, J. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 632634.
  • Torralba, A., Oliva, A., Castelhano, M., & Henderson, J. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 4(113), 766786.
  • Vo, M., & Henderson, J. (2010). The time course of initial scene processing for eye movement guidance in natural scene search. Journal of Vision, 10(3), 113.
  • Yang, H., & Zelinsky, G. (2009). Visual search is guided to categorically-defined targets. Vision Research, 49, 20952103.
  • Yu, C., & Ballard, D. (2007). A unified model of word learning: Integrating statistical and social cues. Neurocomputing, 70, 21492165.