• Dekel, O., & Shamir, O. (2009). Vox Populi: Collecting High-Quality Labels from a Crowd. COLT 2009.
  • Donmez, P., Carbonell, J., & Schneider, J. (2010). A probabilistic framework to learn from multiple annotators with time-varying accuracy. SIAM International Conference on Data Mining (SDM) (pp. 826–837).
  • Efron, M., Organisciak, P., & Fenlon, K. (2011). Building Topic Models in a Federated Digital Library Through Selective Document Exclusion. Presented at the ASIS&T Annual Meeting, New Orleans, USA.
  • Efron, M., Organisciak, P., & Fenlon, K. (2012). Improving Retrieval of Short Texts Through Document Expansion. Presented at the ACM SIGIR 2012, Portland, USA.
  • Eickhoff, C., & de Vries, A. P. (2012). Increasing cheat robustness of crowdsourcing tasks. Information Retrieval.
  • Golovchinsky, G., & Pickens, J. (2010). Interactive information seeking via selective application of contextual knowledge. Proceedings of the third symposium on Information interaction in context, IIiX '10 (pp. 145–154). New York, NY, USA: ACM. doi:10.1145/1840784.1840806
  • Hsueh, P.-Y., Melville, P., & Sindhwani, V. (2009). Data quality from crowdsourcing: a study of annotation selection criteria. Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, HLT '09 (pp. 27–35). Stroudsburg, PA, USA: Association for Computational Linguistics.
  • Wang, J., Ipeirotis, P. G., & Provost, F. (2011). Managing Crowdsourcing Workers. Presented at the Winter Conference on Business Intelligence, Utah.
  • Lease, M., & Kazai, G. (2011). Overview of the TREC 2011 Crowdsourcing Track (Conference Notebook). Text Retrieval Conference Notebook.
  • Matthijs, N., & Radlinski, F. (2011). Personalizing web search using long term browsing history (p. 25). ACM Press. doi:10.1145/1935826.1935840
  • Novotney, S., & Callison-Burch, C. (2010). Cheap, fast and good enough: Automatic speech recognition with non-expert transcription. Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (pp. 207–215). Presented at the HLT '10, Stroudsburg, USA.
  • Organisciak, P. (2012). An Iterative Reliability Measure for Semi-anonymous Annotators. Presented at the Joint Conference on Digital Libraries, Washington DC, USA.
  • Sheng, V. S., Provost, F., & Ipeirotis, P. G. (2008). Get another label? improving data quality and data mining using multiple, noisy labelers. Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '08 (pp. 614–622). New York, NY, USA: ACM.
  • Snow, R., O'Connor, B., Jurafsky, D., & Ng, A. Y. (2008). Cheap and fast—but is it good?: evaluating non-expert annotations for natural language tasks. Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08 (pp. 254–263). Stroudsburg, PA, USA: Association for Computational Linguistics.
  • Urbano, J., Marrero, M., Martín, D., Morato, J., Robles, K., & Lloréns, J. (2011). The University Carlos III of Madrid at TREC 2011 Crowdsourcing Track.
  • Wallace, B., Small, K., Brodley, C., & Trikalinos, T. (2011). Who should label what? Instance allocation in multiple expert active learning. Proceedings of the SIAM International Conference on Data Mining (SDM).
  • Welinder, P., & Perona, P. (2010). Online crowdsourcing: Rating annotators and obtaining cost-effective labels. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 25–32). IEEE.
  • Whitehill, J., Ruvolo, P., Wu, T., Bergsma, J., & Movellan, J. (2009). Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. Advances in Neural Information Processing Systems, 22, 2035–2043.