REFERENCES

  • Alonso, O. and Mizzaro, S. (2009). Can we get rid of TREC assessors? Using Mechanical Turk for relevance assessment. In Proc. SIGIR '09, ACM, New York, NY.
  • Bennett, P. N., Chickering, D. M. et al. (2009). Learning consensus opinion: mining data from a labeling game. In Proc. WWW '09, ACM, New York, NY, pp 121–130.
  • Bernstein, M. S., Little, G. et al. (2010). Soylent: a word processor with a crowd inside. In Proc. UIST '10, ACM, New York, NY, pp 313–322.
  • Csikszentmihalyi, M. (1991). Flow: The psychology of optimal experience. Harper Perennial Press.
  • Eickhoff, C., Harris, C. G., de Vries, A. P. and Srinivasan, P. (2012). Quality through flow and immersion: gamifying crowdsourced relevance assessments. In Proc. SIGIR 2012, ACM, New York, NY.
  • Finin, T., W. Murnane, et al. (2010). Annotating named entities in Twitter data with crowdsourcing. In Proc. NAACL HLT 2010, Association for Computational Linguistics, Stroudsburg, PA, USA. pp 80–88.
  • Franklin, M., D. Kossmann, et al. (2011). CrowdDB: Answering queries with crowdsourcing. In Proc. SIGMOD '11, ACM, New York, NY, pp 61–72.
  • Grady, C. and M. Lease (2010). Crowdsourcing document relevance assessment with Mechanical Turk. ACL, Stroudsburg, PA, USA.
  • Hacker, S. and L. von Ahn (2009). Matchin: eliciting user preferences with an online game. In Proc. CHI '09, ACM, New York, NY, pp 1207–1216.
  • Kumar, A. and M. Lease (2011). Learning to rank from a noisy crowd. In Proc. SIGIR '11, ACM, New York, NY, pp 1221–1222.
  • Lancaster, F. W. and Warner, A. J. (1993). Information Retrieval Today. Information Resources Press, Arlington, VA, USA.
  • Lieberman, H., D. Smith, et al. (2007). Common Consensus: a web-based game for collecting commonsense goals. In Proc. IUI'07, ACM, New York, NY.
  • Little, G., Chilton, L. B., et al. (2010). TurKit: human computation algorithms on mechanical turk. In Proc. UIST '10, ACM, New York, NY, pp 57–66.
  • Luon, Y., C. Aperjis, et al. Rankr: A Mobile System for Crowdsourcing Opinions.
  • Ma, H., R. Chandrasekar, et al. (2009). Improving search engines using human computation games, In Proc. HCOMP'09, ACM, New York, NY.
  • McCann, R., A. Doan, et al. (2003). Building data integration systems: A mass collaboration approach, AAAI, New York, NY.
  • McGonigal, J. (2011). Reality is broken: Why games make us better and how they can change the world. Penguin Press.
  • Milne, D., D. M. Nichols, et al. (2008). A competitive environment for exploratory query expansion, In JCDL'08, ACM, New York, NY.
  • Naveed, N., T. Gottron, et al. (2011). Searching Microblogs: Coping with Sparsity and Document Quality. In Proc. CIKM '11, ACM, New York, NY, pp 183–188.
  • Ng, V. (2007). Semantic class induction and coreference resolution. In Proc. ACL '07, ACL, Stroudsburg, PA, USA, pp 536–543.
  • Nowak, S. and S. Rüger (2010). Reliable Annotations via Crowdsourcing. In Proc. MIR '10, ACM, New York, NY, pp 557–566.
  • Paiement, J. F., J. G. Shanahan, et al. (2010). Crowd Sourcing Local Search Relevance. In Proc. CrowdConf 2010. ACM, New York, NY.
  • Ravi, S. and K. Knight (2009). Minimized models for unsupervised part-of-speech tagging, In Proc. ACL'09, ACL, Stroudsburg, PA, USA. pp 504–512.
  • Robson, C., S. Kandel, et al. (2011). Data collection by the people, for the people. In Proc. CHI '11, Vancouver, BC, Canada, ACM, New York, NY, pp 25–28.
  • Su, Q., D. Pavlov, et al. (2007). Internet-scale collection of human-reviewed data. In Proc. WWW '07, Banff, Alberta, Canada, ACM, New York, NY, pp 231–240.
  • Urbano, J., J. Morato, et al. (2010). Crowdsourcing preference judgments for evaluation of music similarity tasks. In Proc. CSE '10, ACM, New York, NY, pp 9–16.
  • von Ahn, L. and L. Dabbish (2004). Labeling images with a computer game, In Proc. CHI'04, ACM, New York, NY.
  • von Ahn, L., M. Kedia, et al. (2006). Verbosity: a game for collecting common-sense facts, In Proc. CHI'06, ACM, New York, NY, pp 75–78.
  • Whitehill, J., P. Ruvolo, et al. (2009). Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. Advances in Neural Information Processing Systems 22, pp 2035–2043.
  • Yan, T., V. Kumar, et al. (2010). CrowdSearch: exploiting crowds for accurate real-time image search on mobile phones, In Proc. MobiSys'10, ACM, New York, NY.
  • Zuccon, G., T. Leelanupab, et al. (2011). Crowdsourcing Interactions. In CSDM'11. ACM, New York, NY p 35.