Automatic classification of human values: Applying computational thinking to information ethics
Article first published online: 18 NOV 2010
Copyright © 2009 American Society for Information Science and Technology
Proceedings of the American Society for Information Science and Technology
Volume 46, Issue 1, pages 1–4, 2009
How to Cite
Fleischmann, K. R., Oard, D. W., Cheng, A.-S., Wang, P. and Ishita, E. (2009), Automatic classification of human values: Applying computational thinking to information ethics. Proc. Am. Soc. Info. Sci. Tech., 46: 1–4. doi: 10.1002/meet.2009.1450460345
- Issue published online: 18 NOV 2010
This paper describes an ongoing interdisciplinary collaboration between researchers in information ethics and information retrieval who seek to automate the process of detecting and classifying values expressed in human communication. This effort is an example of computational thinking (Wing, 2006) in practice, as it involves applying computational techniques to the field of ethics, which has previously been thought of as a purely humanistic endeavor. As this paper explains, automatic classification has potential benefits relative to the surveys and manual content analysis that have previously been used to study human values, with significant promise to revolutionize both information ethics and artificial intelligence.
Historically, surveys have been used to examine the values of individuals, groups, organizations, professions, and cultures (e.g., Inglehart, 2008; Rokeach, 1967; Schwartz, 1992). However, surveys have several limitations. First, as in all human subjects research conducted under Institutional Review Board (IRB) protocols, surveys require voluntary participation, which leads to participation bias. Because not everyone who is asked to participate in a survey will do so, true random sampling is not feasible, representative sampling becomes highly unlikely, and unmeasured systematic biases may arise in who does and does not participate. Second, as in all research that involves direct questioning, survey participants may not answer questions accurately, leading to response bias. Inaccurate answers may result from insufficient reflection, retrospective bias (in which subsequent events change past perceptions), self-deception, or conscious or subconscious decisions to withhold information or sugarcoat answers. Given these limitations, it is problematic to rely entirely on surveys to understand human values and how those values may shape the design and use of information technologies.
Content analysis provides an alternative (or supplement) to survey research on human values. Content analysis can be used to produce open- or closed-coding of human values in recorded communication, and can be undertaken at the word, phrase, sentence, multiple-sentence, paragraph, or document level. A trained coder can identify statements motivated by values and determine which values are present in those statements. A coder may be able to detect values that entered an individual's speech or writing at a subconscious level, or that an individual might not choose to express directly, allowing for more robust detection of values than in a survey. However, manual content analysis suffers from significant limitations and tradeoffs. First, manually coding text is time-consuming, making large-scale research projects costly (often prohibitively so in less well-funded research areas). Second, human coders are subject to varying degrees of bias and inconsistency: reflective adjustment of the coding process can sometimes be beneficial, but recoding previously coded documents is even more time-consuming, and failing to recognize the need to recode earlier documents yields inconsistent results. While much can be learned through manual content analysis, its costs and limitations restrict the method's utility for large data sets.
Automatic content analysis can yield many of the benefits of content analysis while addressing some of its limitations (Cheng et al., 2008). Our research team has trained computers both to detect the presence of human values and to classify specific values, and has already produced promising preliminary results. Specifically, we have focused on coding values related to net neutrality from transcripts of testimony given at public hearings. Human coding has identified the most frequently occurring values within this ongoing debate, as well as differences in the values of different stakeholders in the debate (Cheng & Fleischmann, 2009), and automated detection and classification tools can (to at least some degree) replicate this detection and classification using machine learning. This opens up the possibility of coding large corpora, including not just public hearings but also various forms of printed media, blogs, and even recorded speech (Oard, in press).
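The kind of supervised value classification described above can be sketched with a minimal bag-of-words Naive Bayes model. This is an illustrative sketch only: the toy sentences, the value labels ("equality", "innovation"), and the choice of classifier are assumptions made for demonstration, not the study's actual data or method.

```python
# Minimal sketch of supervised value classification.
# All training sentences and labels below are hypothetical toy data.
import math
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase and strip basic punctuation from whitespace-split tokens."""
    return [w.strip(".,!?").lower() for w in text.split()]

class NaiveBayesValueClassifier:
    """Multinomial Naive Bayes over bag-of-words features, add-one smoothing."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequency
        self.label_counts = Counter()            # label -> document count
        self.vocab = set()

    def train(self, examples):
        for text, label in examples:
            self.label_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def classify(self, text):
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + smoothed log likelihood of each token
            score = math.log(self.label_counts[label] / total_docs)
            n_words = sum(self.word_counts[label].values())
            for w in tokenize(text):
                count = self.word_counts[label][w]
                score += math.log((count + 1) / (n_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical hand-labeled statements, standing in for coded transcripts.
training = [
    ("Everyone deserves equal access to the network", "equality"),
    ("All users should be treated the same online", "equality"),
    ("New services drive innovation and economic growth", "innovation"),
    ("Startups need freedom to innovate without barriers", "innovation"),
]

clf = NaiveBayesValueClassifier()
clf.train(training)
print(clf.classify("Treat all network traffic the same"))  # → equality
```

In practice, a classifier like this would be trained on the human-coded examples produced by manual content analysis and then applied to the remainder of a large corpus.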
Given the potential to examine much larger and more representative corpora, even imperfect automatic classification techniques can be useful research tools. Indeed, even when automatic detection and classification make mistakes on individual cases, opportunities for macro-level analysis remain. For example, an unbiased automatic detection and/or classification tool whose individual errors balance each other out could still provide a reliable distributional analysis of the discourse across an entire corpus. Moreover, humans are imperfect as well, with an error rate that typically rises over time when faced with repetitive tasks. Machines, by contrast, are consistent: as the amount of data to be analyzed increases, the accuracy of even careful human coders understandably decreases, while the accuracy of automated techniques remains constant.
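The claim that unbiased item-level errors can balance out at the corpus level can be illustrated with a small simulation. Everything here is invented for illustration (the true proportion, error rate, and corpus size are assumptions): a classifier that flips labels symmetrically at a known rate still supports an accurate estimate of the overall proportion of statements expressing a given value.

```python
# Illustrative simulation (not from the paper): symmetric per-item errors
# still permit a reliable corpus-level distributional estimate.
import random

random.seed(42)

TRUE_PROPORTION = 0.30   # fraction of statements truly expressing a value (assumed)
ERROR_RATE = 0.10        # symmetric misclassification rate (assumed)
N = 100_000              # simulated corpus size

# Simulate true labels, then a noisy classifier that flips each label
# with probability ERROR_RATE regardless of the true label (i.e., unbiased).
true_labels = [random.random() < TRUE_PROPORTION for _ in range(N)]
predicted = [lab if random.random() > ERROR_RATE else not lab
             for lab in true_labels]

observed = sum(predicted) / N
# With a known symmetric error rate e, the aggregate can be corrected:
#   observed = true*(1-e) + (1-true)*e  =>  true = (observed - e) / (1 - 2e)
corrected = (observed - ERROR_RATE) / (1 - 2 * ERROR_RATE)

print(f"observed:  {observed:.3f}")
print(f"corrected: {corrected:.3f}")  # close to TRUE_PROPORTION despite 10% item-level error
```

The raw observed proportion is biased toward 0.5, but because the errors are symmetric, a simple algebraic correction recovers the true corpus-level distribution to within sampling noise.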
Thus, it is important to develop automatic classification tools for human values. Not only do such tools have the potential to serve an important role in information ethics research, but automatic classification of human values may also play a role in artificial intelligence. We argue that the relationship between information ethics and information technology is a two-way street: approaches such as information retrieval may facilitate information ethics inquiry, while information ethics in turn may inform system design, especially in the field of artificial intelligence, to help ensure that humans and computers can successfully coexist and collaborate. If the goal of artificial intelligence is to create technologies that can “think” like humans, perhaps it's time for us to teach our machines to recognize our values. How else could they ever hope to begin developing values of their own? (“HAL, is that you?”)
This material is based upon work supported by the National Science Foundation under Grant No. IIS-0729459.
- Cheng, A.-S., & Fleischmann, K. R. (2009). Value perspectives in net neutrality: A content analysis of public hearings. 10th Annual International and Interdisciplinary Conference of the Association of Internet Researchers (AoIR), Milwaukee, WI.
- Cheng, A.-S., Fleischmann, K. R., Wang, P., & Oard, D. W. (2008). Advancing social science research by applying computational linguistics. Proceedings of the 2008 Annual Meeting of the American Society for Information Science and Technology.
- Inglehart, R. (2008). Changing values among Western publics from 1970 to 2006. West European Politics, 31(1-2), 130–146.
- Oard, D. W. (in press). A whirlwind tour of automated language processing for the humanities and social sciences. In Promoting Digital Scholarship: Formulating Research Challenges in the Humanities, Social Sciences, and Computation. Council on Library and Information Resources.
- Rokeach, M. (1967). The Rokeach Value Survey. Sunnyvale, CA: Halgren Tests.
- Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In M. P. Zanna (Ed.), Advances in Experimental Social Psychology (Vol. 25, pp. 1–66). Orlando, FL: Academic Press.
- Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.