Keywords: data; ethics; information science


Abstract

Individuals can increasingly collect data about their habits, routines, and environment using ubiquitous technologies. Running specialized software, personal devices such as phones and tablets can capture and transmit users’ location, images, motion, and text input. The data collected by these devices are both personal (identifying an individual) and participatory (accessible by that individual for aggregation, analysis, and sharing). Such participatory personal data provide a new area of inquiry for the information sciences. This article presents a review of literature from diverse fields, including information science, technology studies, surveillance studies, and participatory research traditions, to explore how participatory personal data relate to existing personal data collections created by both research and surveillance. It applies three information perspectives—information policy, information access and equity, and data curation and preservation—to illustrate social impacts and concerns engendered by this new form of data collection. These perspectives suggest a set of research challenges for information science posed by participatory personal data.


Introduction

Self-quantification, mobile health, bio-hacking, self-surveillance, participatory sensing: Under a variety of names, practices, and motivations, these growing movements are encouraging people to collect data about themselves (Dembosky, 2011; Estrin & Sim, 2010; Hill, 2011). Ubiquitous digital tools increasingly enable individuals to collect very granular data about their habits, routines, and environments. Although forms of self-tracking have always existed, ubiquitous technologies such as the mobile phone enable a new scope and scale for these activities. These always-on, always-present devices carried by billions can capture and transmit users’ location, images, motion, and text input. Mobile health developers are creating applications to track health behaviors, symptoms and side effects, and treatment adherence and effectiveness (Estrin & Sim, 2010). Citizen science enthusiasts banded together to test the “Butter Mind” experiment, volunteering to eat half a stick of butter a day and to record diverse metrics to test impact on mental performance (Gentry, 2010). Office workers, or their bosses, can use new software to track computer usage and analyze productivity. Athletes use a range of portable devices to monitor physiological factors that affect performance. And curious individuals track correlations between variables as diverse as stress, mood, food, sex, and sleep (Hill, 2011). Some hail the era of easy self-tracking and the resulting detailed stores of data for their potential to unlock patterns and new knowledge. Others raise privacy, accessibility, and other social concerns. As John Perry Barlow was recently quoted: “Everything you do in life leads to a digital slime trail” (Boudreau, 2011).

This new form of personal data invokes challenges and value judgments at the heart of information science (IS), including best practices for data organization and management; the rights, power, and agency of data collectors and aggregators; the nature of user participation in data collection and new knowledge creation; and what should, or will, be documented, shared, and remembered. This article draws on literature from information science, technology studies, and surveillance studies to investigate granular personal data, collected by and for individuals, as an information problem. It uses this literature to explore two questions: How do these new forms of personal data depart from existing personal data? And how do information perspectives help us analyze the social values, impacts, and concerns engendered by these new data? These questions suggest new research challenges for privacy and information policy, information access and equity, and data management, curation, and preservation.

The article begins by defining participatory personal data and suggesting how these data differ from existing research and surveillance data. It then explores literature from three information perspectives key to understanding participatory personal data: information policy, information access, and data management and preservation. It uses these perspectives to suggest next steps for investigating participatory personal data as an information science problem.

Defining Participatory Personal Data

This article focuses on a new class of data about people and their environments, generated by an emerging network of embedded devices and specialized software. Software for mobile phones and embedded devices enables individuals to track and upload a diverse range of data, including images, text, location, and motion. Because using ubiquitous digital tools for data capture is a relatively new activity, the terminology used to describe this research is varied, going under names including mobile health or mHealth (Estrin & Sim, 2010), self-quantifying (Wolf, 2010), self-surveillance (Hill, 2011; Kang, Shilton, Burke, Estrin, & Hansen, 2012), participatory sensing (Burke et al., 2006; Estrin, 2010), and urban sensing (Cuff, Hansen, & Kang, 2008). For example, Your Flowing Data asks participants to use their mobile phones to send short messages recording data points (e.g., weight, exercise accomplished, mood, or food eaten) throughout the day. After these data are aggregated by the service, the software provides users with visualizations to explore patterns in their daily habits and learn from their data. A different example is the Personal Environmental Impact Report (PEIR), an application that uses participants’ mobile phones to record their location every 30 seconds. Location is uploaded to the PEIR servers, which use this time-location series to infer how much a participant drives each day, giving participants a daily calculation of their carbon footprint and exposure to air pollution.
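
To make the PEIR example concrete, the sketch below shows the kind of inference such a service might run over a 30-second time-location series. It is a minimal illustration only: the speed threshold used to label segments as driving, the emission factor, and all function names are assumptions for exposition, not PEIR's actual algorithms.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def daily_driving_km(samples, driving_kmh=20.0):
    """Estimate distance driven from time-ordered (unix_seconds, lat, lon)
    samples captured every ~30 seconds. Segments faster than `driving_kmh`
    are heuristically labeled as driving; the threshold is illustrative."""
    total = 0.0
    for (t1, la1, lo1), (t2, la2, lo2) in zip(samples, samples[1:]):
        km = haversine_km(la1, lo1, la2, lo2)
        hours = (t2 - t1) / 3600.0
        if hours > 0 and km / hours >= driving_kmh:
            total += km
    return total

def carbon_kg(km, kg_co2_per_km=0.2):
    """Convert driving distance to a rough carbon estimate; the per-km
    factor is a hypothetical average, not PEIR's model."""
    return km * kg_co2_per_km
```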

What unifies these projects is the data they collect: participatory personal data. Participatory personal data are any representation recorded by an individual, about an individual, using a mediating technology (which could be as simple as a spreadsheet or web entry form, but which is commonly a mobile device). Participatory signals that these new data are accessible to the data subjects. This contrasts with both traditional research and surveillance data, which are typically obscured or hidden from data subjects (Shilton, 2010). Personal data are authored by an individual, describe an individual, or can be mapped to an individual (Kang, 1998). Participatory personal data often meet all three of these criteria. The individual uses a device to collect data, which are often descriptive of a person's life, routine, or environment. And the data can quite literally be mapped to a person. Geotagged photos or a GPS log of a person's movements throughout a day can be used to identify an individual, even if no names or identifiers are directly attached to the data (Anthony, Kotz, & Henderson, 2007; Iachello, Smith, Consolvo, Chen, & Abowd, 2005). “Participatory personal data,” then, refers to aggregations of representations or measurements collected by people, about people. These data are part of a coordinated activity; they are not only captured, but processed, analyzed, displayed, and shared through a technological infrastructure. This article uses participatory personal data project as an umbrella term for the activity of data capture and participatory personal data to refer to the resulting information resources.

Data Collection Technologies

Participatory personal data are dependent on a technical infrastructure consisting of devices, software, data storage, and user interfaces. The devices are digital, networked, and embedded in human environments. In practice, these are often mobile telephones, due to the ubiquity and accessibility of these devices (Estrin, 2010). But devices could also include tablets, networked body sensors, or instrumented “smart” homes or buildings (Hayes et al., 2007; Kim, Schmid, Charbiwala, & Srivastava, 2009). Software coordinates data collection by triggering samples (often using variables such as time of day, location, or battery life), storing data locally, and deciding when to upload them to central servers for processing (Froehlich, Chen, Consolvo, Harrison, & Landay, 2007). Storage is largely cloud-based, although researchers concerned with privacy have also suggested personal home storage for added data security (Lam, 2009). User interfaces include both web- and phone-based charts, maps, and graphs, which provide aggregation and analysis services, helping users see and interpret patterns in their data. The combination of devices, software, storage, and interfaces forms technological platforms (Gillespie, 2010): infrastructures marshaled for new goals and purposes.
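
As a rough illustration of this coordination layer, the following sketch pairs a trigger policy with a local buffer that is periodically flushed to a server. All class names, thresholds, and the division of labor are hypothetical; real systems such as MyExperience (Froehlich et al., 2007) are considerably more sophisticated.

```python
import time
from dataclasses import dataclass, field

@dataclass
class SamplingPolicy:
    """Example trigger rules: sample only during waking hours and only
    when the battery is above a floor (two of the variables named above)."""
    start_hour: int = 7
    end_hour: int = 22
    min_battery: float = 0.2

    def should_sample(self, hour: int, battery: float) -> bool:
        return self.start_hour <= hour < self.end_hour and battery >= self.min_battery

@dataclass
class Collector:
    policy: SamplingPolicy
    buffer: list = field(default_factory=list)  # local store before upload

    def tick(self, read_sensor, battery: float):
        """Take one sample now if the policy allows it."""
        hour = time.localtime().tm_hour
        if self.policy.should_sample(hour, battery):
            self.buffer.append((time.time(), read_sensor()))

    def flush(self, upload):
        """Hand buffered samples to an upload callable, then clear locally."""
        if self.buffer:
            upload(self.buffer)
            self.buffer = []
```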

Excluded from participatory personal data collection platforms are systems that do not reveal their data to the data subject. For example, instrumented homes that report energy use to a utility company, but not to the homeowner, do not generate participatory data (although they may generate personal data). Opportunistic sensing research, which uses mobile phones to sense macro-scale human activity without the consent or knowledge of individual phone users (Campbell, Eisenman, Lane, Miluzzo, & Peterson, 2006), also does not collect participatory personal data.

Data Collection Projects

The technological platform is only one part of participatory personal data projects. The actors and institutions involved in designing and deploying these platforms are quite diverse. Established corporations and start-ups, academics, community groups, nonprofits, and international nongovernmental organizations are all stakeholders in participatory personal data projects. Research centers at institutions such as UCLA, Dartmouth, MIT, Intel Research, and Microsoft Research are all developing participatory personal data platforms for use in the social, environmental, and health sciences. Corporations such as exercise-tracking company FitBit and online-activity tracking service RescueTime have developed self-tracking into a business model. Governments, telecommunications providers, health insurers, and advertisers also have an interest in these data (Hill, 2011). The variety of stakeholders collecting participatory personal data will impact the social and political economy of collection, access, and preservation of these data.

Characteristics of Participatory Personal Data

Data collection platforms and projects lend particular characteristics to participatory personal data. Common data types are currently bounded by the technical limitations of the devices used for capture, although these may shift over time as mobile devices become more advanced. Off-the-shelf mobile phones and tablets, by far the most accessible devices for collecting participatory personal data, are currently limited to a handful of on-board sensors. Phones can sense sound (using microphones), images (using cameras), location (using GPS or cell tower information), co-location (of other phones or devices using Bluetooth), and motion (using accelerometers). However, when accessibility or market penetration are not concerns, these on-board capabilities can be almost infinitely extended, as phones can interface with off-board sensors using Bluetooth or other communications protocols.
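
A simple record type makes these modalities concrete. The schema below is purely illustrative (participatory personal data platforms share no standard format), but it shows how each sample couples a data subject, a timestamp, a sensing modality, and a value.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any

class Modality(Enum):
    """The on-board phone sensors named in the text."""
    SOUND = "microphone"
    IMAGE = "camera"
    LOCATION = "gps_or_cell_tower"
    COLOCATION = "bluetooth"
    MOTION = "accelerometer"

@dataclass
class Reading:
    """One hypothetical participatory personal data sample."""
    user_id: str        # ties the sample to its collector, who is also its subject
    timestamp: float    # unix seconds
    modality: Modality
    value: Any          # e.g., (lat, lon) for LOCATION, a 3-axis tuple for MOTION
```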

What unites and also defines these collections of sound, images, locations, and motion is that they are participatory: They are accessible to the subjects of the data themselves. This is a departure from traditional forms of personal data collection. Because granular data collection was expensive and time-consuming, it has historically been conducted by governments or corporations with both the resources to collect data and the social and economic motivations to do so (Marx, 2002). Although fair information practices have long mandated that personal data be made available to individuals upon request (Waldo, Lin, & Millett, 2007), participatory personal data upend this tradition by making individual access to personal data an integral part of collection. This change has been enabled by the proliferation of mobile and computing devices that enable individual capture, processing, and analysis. The participatory nature of such data collection marks a departure from traditional data archives, social science research data, and corporate and government surveillance.

How Participatory Personal Data Depart from Existing Data

In a focus on collecting data about people, participatory personal data projects echo two familiar kinds of data: scientific and social science research data and surveillance data. But because participatory personal data collection is increasingly performed by the data subjects themselves, it also contrasts with previous understandings of both kinds of data. Traditionally, data about individuals might have been collected by researchers, governments, or corporations. But by enabling dispersed data capture and sharing, participatory personal data projects collapse the roles of data collectors and data subjects. This raises the issue of participation in data collection, and how participation alters the data landscape.

Surveillance Data

Traditionally, granular personal data collection by corporations or governments has been labeled surveillance. Surveillance studies research suggests that an important element in this data collection is control (Lyon, 2001; Monahan, 2006b). Surveillance may protect people from danger, but it may also be used to prevent undesirable behaviors (Lyon, 2001). As surveillance scholars from Foucault (1979) to Vaz and Bruno (2003) have explained, data collection is often used to normalize and discipline individuals. Foucault's influential work suggests that surveilled populations will supervise and discipline themselves. Indeed, self-discipline is a goal embraced by participatory personal data platforms focused on health interventions or worker productivity. Similar disciplinary effects are seen when communities organize to collect data. The project Nation of Neighbors uses mobile devices to expand a community watch model of crime reporting, and civic data project CitySourced defines and reports predefined categories of community problems such as “homeless nuisance.”

As this last example suggests, a pernicious effect of surveillance is its potential for uneven application to marginalized and disenfranchised groups. Broad-scale recording of purchase habits, location, and movements enables data sorting and subsequent social profiling by governments and corporations (Curry, Phillips, & Regan, 2004). As Monahan describes it:

… what is being secured are social relations, institutional structures, and cultural dispositions that—more often than not—aggravate existing social inequalities and establish rationales for increased, invasive surveillance of marginalized groups. (Monahan, 2006b, p. ix)

The social relations and institutional structures secured by participatory personal data are still forming. The capture and control of participatory personal data are distributed, and people from marginalized social positions may use the power to collect and analyze data to confront the powerful. Mobile phones have been used for cop-watching and counter-surveillance (Huey, Walby, & Doyle, 2006; Institute for Applied Autonomy, 2006; Monahan, 2006a), peacekeeping and economic development (Donner, Verclas, & Toyama, 2008), and self-exploration and play (Albrechtslund, 2008). In these uses, participatory personal data collection technologies may fit into a tradition of counter-surveillance or sousveillance: the subversion of observation technologies by the less powerful to hold authorities accountable.

Research Data

Data about people have long been important to health, environmental, and social sciences research (Borgman, 2007). When systematically collected for discovery or new knowledge creation, participatory personal data are research data. Such research data about people have traditionally been regulated by federal guidelines such as The Belmont Report (Office of the Secretary of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979) and Title 45 Code of Federal Regulations, Part 46 (Office for Protection of Research Subjects, 2007). These codes emphasize respect for human subjects, beneficence, and justice. A critical component of respect, beneficence, and justice is informed consent, which helps to differentiate research data from surveillance data.

Traditions such as participatory research (PR) go further than informed consent by arguing that the data capture, sorting, and use performed as part of knowledge discovery can be empowering if it is conducted by the people most affected by the data: research subjects themselves (Cargo & Mercer, 2008). PR practitioners argue that data subjects should be participants not just in data capture, but in analysis of and meaning-making from the data. Involvement with every stage of the research process empowers users and helps justify trade-offs between new knowledge production and research risks for participants (Horowitz, Robinson, & Seifer, 2009). But while participatory ethics foster a stronger notion of consent, they may also complicate design, data capture, aggregation, and analysis practices. Participatory research traditions have been criticized for gathering inaccurate data or incorporating bias. Participants who purposefully withhold sensitive data from research projects may create problems for reliability and accuracy of results. As a variant of participatory research, participatory personal data projects will need to grapple with all of these challenges.

Information Perspectives on Participatory Personal Data

Three areas of traditional expertise in the information sciences—privacy, information accessibility and equity, and information management and preservation—can provide insight into shaping participatory personal data systems that enable participation, new knowledge, and discovery, rather than control. These information perspectives provide theoretical frameworks that help to unpack and assess the social impacts and consequences of these emerging data.

Privacy and Information Policy

One traditional answer to the challenge of surveillance is found in the information science, computer science, legal, and ethics literature focused on privacy. Participatory personal data projects gather, store, and process large amounts of information, creating massive databases of individuals’ locations, movements, images, sound clips, text annotations, and even health data. Location information can reveal habits and routines that are socially sensitive (would you want such information shared with a boss or friend?) and may be linked with, or have an impact on, legally protected medical information. Three major branches of literature address privacy problems relevant to participatory personal data. The first consists of behavioral studies of users' interactions with personal data and data collection systems. The second comprises conceptual, ethical, and legal investigations into privacy as a human interest. The third develops design and technical methods for ensuring privacy protections.

Much of our understanding of current privacy norms stems from work that analyzes people's privacy preferences and behaviors (Altman, 1977; Capurro, 2005; Palen & Dourish, 2003). Westin's (1970) foundational research benchmarked American public opinion on privacy. Over several decades, Westin used large surveys to confirm a spectrum from “privacy fundamentalists” (very concerned) to “pragmatists” (sometimes concerned) to the “unconcerned.” More recently, Sheehan (2002) confirmed a similar spectrum among Internet users. The Pew Internet & American Life Project has produced several reports of privacy preferences based on large U.S. surveys of adults (Madden, Fox, Smith, & Vitak, 2007) and teenagers (Lenhart & Madden, 2007). These reports continue to find privacy concerns, even among teens (who popular wisdom assumes have abandoned privacy as a value). Pew finds, for example, that teens practice privacy-preserving behaviors such as limiting online information and taking an active part in identity management. A number of information science studies have attempted to describe such behaviors using more detailed scales for online privacy preferences. For example, both Yao, Rice, and Wallis (2007) and Buchanan, Paine, Joinson, and Reips (2007) suggest factors by which to measure online privacy concern. Yao et al. (2007) focus on psychological variables, while Buchanan et al. (2007) incorporate different social aspects of privacy such as accessibility, physical privacy, and benefits of surrendering privacy.

A persistent problem in such surveys of privacy preferences, however, is that individuals frequently report preferences that they do not act upon in practice. There is evidence that many privacy studies prime respondents to think about privacy violations, making them more likely to report privacy concerns (John, Acquisti, & Loewenstein, 2009). These studies also make problematic assumptions that people act on a rational privacy interest (Acquisti & Grossklags, 2008). Studies that observe people's real-world use of systems attempt to correct these problems. Raento and Oulasvirta (2008), for example, present results from field trials of social awareness software that used smart phones to show contacts’ locations, length of stay, and activities. The authors found that

… users are not worried not [sic] so much about losing their privacy rather about presenting themselves appropriately according to situationally arising demands. (Raento & Oulasvirta, 2008, p. 529)

This demonstrated concern for contextual privacy and identity management has been reiterated in both theoretical (Nissenbaum, 2009) and descriptive research (boyd & Hargittai, 2010). Nissenbaum (2004) labels this concern for fluid and variable disclosure “contextual privacy” and argues that its absence leads not only to exposure, but also to decreased individual autonomy and freedom, damage to human relationships, and, eventually, degradation of democracy. Nissenbaum (2009) suggests that individuals’ sense of appropriate disclosure, as well as understanding of information flow developed by experience within a space, contribute to individual discretion. Contextual privacy suggests that individuals may be willing to disclose highly personal information on social networking sites because they believe they understand the information flow of those sites (Lange, 2007).

Because of complexities and inconsistencies in individual privacy behaviors, policy and legal researchers have sought to move away from user choices and individual risks and toward new regulations to encourage social protections for privacy (J.E. Cohen, 2008; Rule, 2004; Swarthout, 1967; Waldo et al., 2007). U.S. law does not interpret personal data to be owned by the subject of those data. Instead, legal regimes give control of, and responsibility for, personal data to the institution that collected the data (Waldo et al., 2007). Fair information practices are the ethical standards for collection and sharing that those institutions are asked to follow. Originally codified in the 1970s, these practices are still considered “the gold standard for privacy protection” (Waldo et al., 2007, p. 48), and they have been voluntarily adopted by other nations as well as private entities. Fair information practices such as notice and awareness, choice and consent, access and participation, integrity and security, and enforcement and redress can certainly apply to participatory personal data. But if privacy concerns expand to include processes of enforcing personal boundaries (Shapiro, 1998), negotiating social situations (Camp & Connelly, 2008; Palen & Dourish, 2003), and portraying fluid identities (Phillips, 2002, 2005), fair information practices formulated for protecting corporate and government data may be insufficient for personal collections.

Other researchers suggest that concerns about data capture extend beyond the protection of individual privacy. Curry, Phillips, and Regan (2004) write that data capture makes places and populations increasingly visible or legible. Increasing knowledge about the actions of people and their movements can lead to function creep. For example, collections of demographic data can enable social discrimination through practices such as price gouging or delivery of unequal services. Could participatory personal data gathered to track an individual's health concern or document a community's assets be repurposed to deny health insurance or set higher prices for goods and services?

All of this cross-disciplinary attention points to the fact that building participatory personal data systems that protect privacy remains a challenge. Human–computer interaction research considers ways that systems might notify or interact with users to help them understand privacy risks (Anthony et al., 2007; Bellotti, 1998; Nguyen & Mynatt, 2002). Computer science and engineering research innovates methods to obscure, hide, or anonymize data in order to give users privacy options (Ackerman & Cranor, 1999; Agrawal & Srikant, 2000; Fienberg, 2006; Frikken & Atallah, 2004; Ganti, Pham, Tsai, & Abdelzaher, 2008; Iachello & Hong, 2007). Anonymization of data, in particular, is a hotly debated issue in the privacy literature. Many scholars argue that possibilities for re-identification of data make anonymization insufficient for privacy protection (Narayanan & Shmatikov, 2008; Ohm, 2009). Other researchers are pursuing new anonymization techniques to respond to these concerns (Benitez & Malin, 2010; Malin & Sweeney, 2004).
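
To ground the anonymization debate, the sketch below checks the basic k-anonymity condition associated with Sweeney's work: every combination of quasi-identifier values must appear at least k times before release. The field names and data are invented for illustration, and the re-identification results cited above show why passing such a check is necessary but far from sufficient.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k=3):
    """Return True if every quasi-identifier combination occurs >= k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

rows = [
    {"zip3": "900", "age_band": "30s"},
    {"zip3": "900", "age_band": "30s"},
    {"zip3": "902", "age_band": "40s"},  # unique combination risks re-identification
]
print(is_k_anonymous(rows, ["zip3", "age_band"], k=2))  # False
```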

Privacy approaches for participatory personal data systems draw on a number of these developments (Christin, Reinhardt, Kanhere, & Hollick, 2011). These include limiting sensing by granularity, time of day, location, or social surroundings; providing capture and sharing options to match diverse user preferences; methods to collect and contribute data without revealing identifying information; data retention and deletion plans; and access control mechanisms. All of these methods focus on privacy by design: building features into systems to help users manage their personal data and sharing decisions (Spiekermann & Cranor, 2009). Privacy by design is a promising avenue of research for participatory personal data, and advocacy organizations such as the Center for Democracy & Technology are currently pushing mobile application developers to take responsibility for privacy in their design practices (Center for Democracy & Technology, 2011).
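
A minimal sketch of two of these privacy-by-design moves, limiting capture by time of day and coarsening location granularity before sharing, might look as follows. The quiet hours, rounding precision, and names are hypothetical user-set options, not a prescribed design.

```python
def coarsen(lat, lon, decimals=2):
    """Round coordinates; two decimal places is roughly 1 km granularity."""
    return round(lat, decimals), round(lon, decimals)

def privacy_filter(samples, quiet_hours=range(0, 7), decimals=2):
    """Drop samples captured during user-chosen quiet hours and coarsen
    the rest before sharing. `samples` are (hour_of_day, lat, lon) tuples."""
    shared = []
    for hour, lat, lon in samples:
        if hour in quiet_hours:
            continue  # limit sensing by time of day
        shared.append((hour, *coarsen(lat, lon, decimals)))  # limit granularity
    return shared
```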

Privacy, of course, is only a relative value, and can frustrate other social goods. As Kang (1998) points out, commerce can suffer from strong privacy rights, as there is less information for both producers and consumers in the marketplace. Perhaps worse, truthfulness, openness, and accountability can suffer at the hands of strict privacy protections (Allen, 2003). Research using participatory personal data directly confronts this trade-off between privacy, truthfulness, and accuracy. For example, researchers are developing algorithms for participatory personal data collection that allow users to replace sensitive location data with believable but fake data, effectively lying within the system (Ganti et al., 2008; Mun et al., 2009). What is good for privacy may not always be good for accuracy or accountability. Investigating privacy and policy for participatory personal data will include weighing these trade-offs. It will also include integrating elements from contextual privacy and usable system design to present a range of appropriate privacy-preserving choices without unduly burdening participants. And finally, it will mean crafting new policy—institutional as well as national—to protect participants from function creep or discrimination based on their data.
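
A toy version of the data-substitution idea mentioned above, replacing points inside a user-declared sensitive zone with a plausible decoy rather than dropping them, is sketched below. Real systems (Ganti et al., 2008; Mun et al., 2009) generate statistically believable substitutes; this simplification only illustrates the trade-off between privacy and accuracy.

```python
def substitute_sensitive(samples, sensitive_zone, decoy):
    """Replace (t, lat, lon) samples inside a bounding box with a decoy
    point, keeping the record complete while hiding the true trace.
    The zone, decoy, and all names here are hypothetical."""
    min_lat, min_lon, max_lat, max_lon = sensitive_zone
    cleaned = []
    for t, lat, lon in samples:
        if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
            cleaned.append((t, *decoy))  # believable stand-in, not the true point
        else:
            cleaned.append((t, lat, lon))
    return cleaned
```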

Information Access and Equity

Privacy is not the only research tradition in information science that can inform a discussion about participatory personal data. These data also raise challenges for information access and equity. Who controls data capture, analysis, and presentation? Who instigates projects and sets research goals? Who owns the data or benefits from project data? Accumulating and manipulating information is a form of power in a global information economy (Castells, 1999; Lievrouw & Farb, 2003). Participatory personal data projects invoke this power by enabling previously impossible data gathering and interpretation. How do participatory personal data project designers, clients, and users decide in whose hands this power will reside?

The relationship between information, power, and equity has long been a topic of interest in the information studies literature (Lievrouw & Farb, 2003). A large literature on the digital divide has focused on access to information, and ways that social demographics limit or enhance information access (Bertot, 2003). Participatory personal data evoke these basic questions of accessibility. Anecdotal evidence from popular news reports suggests that hobbyist self-quantifiers are largely American and European, white, and upper-middle class (Dembosky, 2011). Participants in health or urban planning data projects, however, may span a much greater socioeconomic range. Indeed, mobile devices are some of the most accessible information technologies on earth (Kinkade & Verclas, 2008), spanning national, ethnic, and socioeconomic groups.

However, there are challenges beyond accessibility. Lievrouw and Farb (2003) suggest that researchers concerned with information equity take a different approach, emphasizing the subjective and context-dependent nature of information needs and access, even among members of one social group. How do participatory personal data answer context-specific information needs? When individuals are generating the information in question, equity comes to hinge on who benefits from this information capture. Will it be individuals and informal communities, or more organized corporations and governments? Sociologists have proposed that loosely organized publics help to balance power held by formal organizations and governments (Fish, Murillo, Nguyen, Panofsky, & Kelty, 2011). But the rise of participatory culture has challenged this traditional understanding, organizing publics and intermeshing them with organizations. For example, participatory personal data projects exhibit elements of both organizations and publics. Research organizations such as UCLA's Center for Embedded Networked Sensing (CENS, now part of Mobilize Labs) partner with community groups to actively recruit informal groups of participants into participatory personal data projects. Examples include health projects that recruited young mothers not only for data collection, but for focus groups about research design, as well as a community documentation project that engaged neighbors in Los Angeles’ Boyle Heights community. Will organizations like CENS hold the power that design, data, categories, and social sorting can bring, or can it be distributed back to the publics who collect the data? Because data collection methods using mobile devices can range from participatory to opportunistic, it is unclear how much control individuals will have over what data are collected, how they are stored, and what inferences are drawn.

It is important to note that increasing measures for participation does not solve problems of power and equity. As Kreiss, Finn, and Turner (2011) point out, there are limits on the emancipatory potential of peer production. And participatory projects have been criticized for a range of failures, from struggling to create true participation (Elwood, 2006) to being outright disingenuous in their approach and goals (Cooke & Kothari, 2001). The intersection of information systems, values, and culture is also important to consider. Cultural expectations and norms are deeply embedded in the design of information systems, shaping everything from representation of relationships within databases (Srinivasan, 2004, 2007) to the explanations drawn from data (Byrne & Alexander, 2006; Corburn, 2003). The design process is never value-neutral, and questions of what, and whose, values are embodied by software and system architecture have been controversial for decades (Friedman, 1997). Affordances built into a technology may privilege some uses (and users) while marginalizing others. Design areas where bias can become particularly embedded include user interfaces (Friedman & Nissenbaum, 1997), access and input/output devices (Perry, Macken, Scott, & McKinley, 1997), and sorting and categorization mechanisms (Bowker & Star, 2000; Suchman, 1997). The intersections between culture, meaning, and information systems have spurred researchers to experiment with culturally specific databases, media archives, and information systems for indigenous, diasporic, and marginalized communities (Boast, Bravo, & Srinivasan, 2007; Monash University School of Information Management and Systems, 2006; Srinivasan, 2007; Srinivasan & Shilton, 2006). Such “alternative design” projects seek to investigate, expose, redirect, or even eliminate biases that arise in mainstream design projects (Nieusma, 2004).

Participatory personal data projects, however, often adopt a universal rather than relativist vision, taking “everyone” as intended users. What does it mean to design for everyone? As Suchman (1997) points out, designing technology is the process of designing not just artifacts, but also the practices that will be associated with those artifacts. What do designers, implicitly or explicitly, intend the practices associated with participatory personal data projects to be? And how will such practices fit into, clash against, or potentially even reshape diverse cultural contexts?

Management, Curation, and Preservation

Privacy, access, and equity challenges are all affected by an overarching information concern: how participatory personal data projects are managed, curated, and preserved. Metadata creation and ongoing management are necessary to ensure the access control, filtering, and security needed to maintain privacy for participatory personal data. Accessibility and interpretability of the data by individuals as well as governments and corporations will be dependent on their organization, retrieval, and visualization. And whether and how data are preserved—or forgotten—will be dependent on curation mechanisms heavily reliant on metadata and data structures (Borgman, 2007).

Participatory personal data echo many of the same management concerns found in large scientific data sets (D. Cohen, Fraistat, Kirschenbaum, & Scheinfeldt, 2009; Gray et al., 2005; Hey & Trefethen, 2005). Participatory personal data may consist of large quantities of samples recorded every second or minute for days or months. The data are frequently quantitative measurements dependent on machine processing and descriptive metadata for human comprehension. These characteristics require new techniques for organization and management. Developing such methods for organizing, analyzing, and extracting meaning from large, diverse, and largely quantitative datasets is an emerging challenge for information sciences (Borgman, Wallis, Mayernik, & Pepe, 2007).
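
The dependence on descriptive metadata can be made concrete with a small example: a raw numeric series is uninterpretable until paired with fields recording what was measured, how, by whom, and under what access policy. The schema below is illustrative, not a proposed standard.

```python
from dataclasses import dataclass

@dataclass
class SeriesMetadata:
    """Hypothetical descriptive metadata for one sensed time series."""
    variable: str           # e.g., "ambient_noise_level"
    unit: str               # e.g., "dB"
    sample_interval_s: int  # e.g., 60
    instrument: str         # e.g., "phone_microphone"
    collector_id: str       # who captured the data (and is its subject)
    access_policy: str      # e.g., "owner_only" or "share_aggregates"

raw = [42.1, 43.8, 41.0]  # meaningless on its own
meta = SeriesMetadata("ambient_noise_level", "dB", 60,
                      "phone_microphone", "user_17", "owner_only")
```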

Always-on, sensitive data capture brings up a number of theoretical and normative questions about whether and how these data should persist over time. Where and how will these data be curated and preserved? What are the benefits of preserving people's movements, habits, and routines? And what problems might the ubiquitous nature of this memory raise? Participatory personal data present an institutional and logistical challenge for preservation. As with all digital material, methods for long-term preservation are costly and labor-intensive (Galloway, 2004). The distribution of these data across multiple stakeholders, including individuals, research organizations, corporations, and governments, also challenges traditional preservation models based upon clearly defined collecting institutions (Abraham, 1991; Lee & Tibbo, 2007). It is also unclear what institutions will be responsible for preserving data held by individuals and loosely organized publics. Determining who is responsible for authoring, managing, and curating data distributed among individuals will challenge our existing notions of institutional data repositories and professional data management.

Perhaps more difficult is the question of whether we should preserve participatory personal data at all. Historically, a major role of archival institutions was selecting records, keeping only the tiny portion deemed historically valuable (Boles, 1991; Cook, 1991). But the explosion of data generation paired with cheap storage and cloud computing raises the possibility of saving much more evidence of daily life. This possibility has become a subject of both celebration (Bell & Gemmell, 2007) and debate (Blanchette & Johnson, 2002). Collections of granular personal data have been invoked to promise everything from improved health care (Hayes et al., 2007, 2008) to memory banks that “allow one to vividly relive an event with sounds and images, enhancing personal reflection” (Bell & Gemmell, 2007, p. 58). And new kinds of personal documentation could help to counteract the power structures that control current archival and memory practices, in which the narratives of powerful groups and people are reified while others are marginalized (Ketelaar, 2002; McKemmish, Gilliland-Swetland, & Ketelaar, 2005; Shilton & Srinivasan, 2007).

As more data are collected and indefinitely retained, however, there may be pernicious social consequences. Blanchette and Johnson (2002) point out that U.S. law has instituted a number of social structures to aid in “social forgetting” or enabling a clean slate. These include bankruptcy law, credit reports, and the clearing of records of juvenile offenders. As information systems increasingly banish forgetting, we may face the unintended loss of the fresh start. Drawing on this argument, Bannon (2006) suggests that building systems that forget might encourage new forms of creativity. He argues that an emphasis on augmenting one human capacity, memory, has obscured an equally important capacity: that of forgetting. He proposes that designers think about ways that information systems might serve as “forgetting support technologies” (2006, p. 5). Mayer-Schoenberger (2007) presents a similar argument, advocating for a combination of policies and forgetful technologies that would allow for the decay of digital data. Information professionals interested in questions of data preservation will find difficult challenges for appraisal and curation in participatory personal data. Determining the nature and organization of the cyberinfrastructure that will support participatory personal data will affect many of these questions about privacy, equity, and memory.
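
One way to picture such a forgetting support technology is a retention pass in the spirit of Bannon (2006) and Mayer-Schoenberger (2007): recent samples keep full detail, older ones are coarsened, and the oldest are deleted outright. The thresholds and rounding below are illustrative policy choices, not recommendations from those authors.

```python
import time

def decay(samples, full_detail_days=30, coarse_days=365, now=None):
    """Apply graduated forgetting to (unix_seconds, lat, lon) samples."""
    now = now if now is not None else time.time()
    day = 86400.0
    kept = []
    for t, lat, lon in samples:
        age_days = (now - t) / day
        if age_days <= full_detail_days:
            kept.append((t, lat, lon))                      # remember fully
        elif age_days <= coarse_days:
            kept.append((t, round(lat, 1), round(lon, 1)))  # blur with age
        # else: forget entirely
    return kept
```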

Next Steps

Participatory personal data, as objects of inquiry, present a range of interesting questions for the information sciences. Participatory personal data coexist with broad data surveillance by corporations and governments, and may be used for pernicious ends. But with their emphasis on participation and targeted capture, participatory personal data may simultaneously give people their own ways to use data collection tools and platforms. Much of the social impact of participatory personal data will depend on how data are captured and organized; who has access; whether individuals consent and participate; and how (or whether) data are curated and preserved.

These issues are ripe for investigation by IS researchers. Scholars focused on privacy might design and field test privacy-friendly participatory data systems. They could conduct user studies to evaluate how participatory personal data project participants understand and use their privacy choices. IS researchers could investigate the sensitivity of various participatory personal data, or study how combining multiple data collections might complicate privacy concerns. Or they might use conceptual analysis to answer the challenge of how to incorporate contextual privacy principles into pervasive systems that span social contexts.

Researchers focused on access and equity can analyze power and participation in existing and emerging participatory personal data. They might collect demographic information on the populations participating in, and affected by, participatory personal data projects. IS researchers could interview stakeholders to understand the mix of organizational and informal publics involved in data collection projects. They could question and critique the usefulness of participatory personal data, or they might establish guidelines for making data collection efforts truly participatory.

Investigators in the areas of data management, curation, and preservation should undertake the difficult cyberinfrastructure questions raised by participatory personal data. Ethnographies can reveal the conditions under which participatory personal data project organizers choose open source or proprietary designs. Social network analysis might help to understand the ways in which management techniques or curation mechanisms spread. And philosophical inquiry into memory and forgetting can help to answer the normative questions of which data should be actively curated and which data are better left to digital obscurity.

Finally, these areas of inquiry may proceed simultaneously, but as is clear from the intersections of law, policy, social theory, and data management, these questions span diverse areas within our field and between fields. Continued interdisciplinary conversation can enable findings from conceptual research to impact system design; empirical findings to affect data management; and experience from data management and curation to influence social theory. Conversations in journals such as JASIST are one part of this equation; so too are interdisciplinary conferences and continued funding collaborations. By querying and shaping how participatory personal data are organized and managed; how privacy, consent, and participation are handled in pervasive systems; how participatory personal data affect the balance of power in an information economy; and how such data impact social and institutional memory and forgetting, information scholars can help to shape this emerging information landscape through building systems, constructing information policy, and shaping values in participatory personal data system design.


Acknowledgments

This work is based on material compiled for my doctoral dissertation, “Building Values into the Design of Pervasive Mobile Technologies.” Many thanks to my committee: Jeffrey Burke, Deborah Estrin, Christopher Kelty, Ramesh Srinivasan, and chair Christine Borgman. Their ideas, feedback, and guidance have shaped this work immensely. Thanks also to Jillian Wallis, Laura Wynholds, and Jes Koepfler for comments and suggestions on drafts, and to the anonymous reviewers for their helpful feedback. This work was funded by the National Science Foundation under grant number 0832873.


References
  • Abraham, T. (1991). Collection policy or documentation strategy: Theory and practice. American Archivist, 54(Winter), 4452.
  • Ackerman, M.S., & Cranor, L.F. (1999). Privacy critics: UI components to safeguard users’ privacy. In Proceedings of the Human Factors in Computing Systems CHI'99 (pp. 258259). New York: ACM Press.
  • Acquisti, A., & Grossklags, J. (2008). What can behavioral economics teach us about privacy? In A. Acquisti , S. De Capitani di Vimercati , S. Gritzalis & C. Lambrinoudakis (Eds.), Digital privacy: Theory, technologies, and practices (pp. 363377). New York: Auerbach Publications.
  • Agrawal, R., & Srikant, R. (2000). Privacy-preserving data mining. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data (pp. 439450). New York: ACM Press.
  • Albrechtslund, A. (2008). Online social networking as participatory surveillance. First Monday, 13(3). Available at:
  • Allen, A.L. (2003). Why privacy isn't everything: Feminist reflections on personal accountability. Lanham, MD: Rowman & Littlefield Publishers.
  • Altman, I. (1977). Privacy regulation: Culturally universal or culturally specific? Journal of Social Issues, 33(3), 6684.
  • Anthony, D., Kotz, D., & Henderson, T. (2007). Privacy in location-aware computing environments. Pervasive Computing, 6(4), 6472.
  • Bannon, L. (2006). Forgetting as a feature, not a bug: The duality of memory and implications for ubiquitous computing. CoDesign, 2(1), 315.
  • Bell, G., & Gemmell, J. (2007). A digital life. Scientific American, 296(3), 5865.
  • Bellotti, V. (1998). Design for privacy in multimedia computing and communications environments. In P.E. Agre & M. Rotenberg (Eds.), Technology and privacy: The new landscape (pp. 6398). Cambridge, MA: MIT Press.
  • Benitez, K., & Malin, B. (2010). Evaluating re-identification risks with respect to the HIPAA privacy rule. Journal of the American Medical Informatics Association, 17(2), 169177.
  • Bertot, J.C. (2003). The multiple dimensions of the digital divide: More than the technology “haves” and “have nots.” Government Information Quarterly, 20(2), 185191.
  • Blanchette, J.-F., & Johnson, D.G. (2002). Data retention and the panoptic society: The social benefits of forgetfulness. The Information Society, 18, 3345.
  • Boast, R., Bravo, M., & Srinivasan, R. (2007). Return to Babel: Emergent diversity, digital resources, and local knowledge. The Information Society, 23(5), 395403.
  • Boles, F. (1991). Archival appraisal. New York: Neal-Schuman Publishers.
  • Borgman, C.L. (2007). Scholarship in the digital age: Information, infrastructure, and the internet. Cambridge, MA: MIT Press.
  • Borgman, C.L., Wallis, J.C., Mayernik, M.S., & Pepe, A. (2007). Drowning in data: Digital library architecture to support scientific use of embedded sensor networks. In Proceedings of the 7th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL ‘07) (pp. 269–277). New York: ACM Press.
  • Boudreau, J. (2011, July 7). Pondering effects of the data deluge. The Los Angeles Times. Retrieved from,0,7170770.story
  • Bowker, G.C., & Star, S.L. (2000). Sorting things out: Classification and its consequences. Cambridge, MA: MIT Press.
  • boyd, d., & Hargittai, E. (2010). Facebook privacy settings: Who cares? First Monday, 15(8). Retrieved from
  • Buchanan, T., Paine, C., Joinson, A.N., & Reips, U.-D. (2007). Development of measures of online privacy concern and protection for use on the internet. Journal of the American Society for Information Science and Technology, 58(2), 157165.
  • Burke, J., Estrin, D., Hansen, M., Parker, A., Ramanathan, N., Reddy, S., Srivastava, M.B. (2006). Participatory sensing. In Proceedings of the International Workshop on World-Sensor-Web (WSW′2006), ACM, Boulder, CO. Available at:
  • Byrne, E., & Alexander, P.M. (2006). Questions of ethics: Participatory information systems research in community settings. In Proceedings of the 2006 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries (SAICSIT '06) (pp. 117126). Somerset West, South Africa: South African Institute for Computer Scientists and Information Technologists.
  • Camp, L.J., & Connelly, K. (2008). Beyond consent: Privacy in ubiquitous computing (Ubicomp). In A. Acquisti , S. De Capitani di Vimercati , S. Gritzalis , & C. Lambrinoudakis (Eds.), Digital privacy: Theory, technologies, and practices (pp. 327343). New York and London: Auerbach Publications.
  • Campbell, A.T., Eisenman, S.B., Lane, N.D., Miluzzo, E., & Peterson, R.A. (2006). People-centric urban sensing. Proceedings of the 2nd Annual International Workshop on Wireless Internet (WICON ′06), Article 18. New York: ACM Press. doi:10.1145/1234161.1234179
  • Capurro, R. (2005). Privacy. An intercultural perspective. Ethics and Information Technology, 7, 3747.
  • Cargo, M., & Mercer, S.L. (2008). The value and challenges of participatory research: Strengthening its practice. Annual Review of Public Health, 29, 325350.
  • Castells, M. (1999). Flows, networks, and identities: A critical theory of the informational society. In M. Castells , R. Flecha , P. Freire , H.A. Giroux , D. Macedo , & P. Willis . Critical education in the new information age (pp. 3764). Lanham, MD: Rowman & Littlefield.
  • Center for Democracy & Technology. (2011). Best practices for mobile applications developers. Washington, DC: Center for Democracy & Technology. Retrieved from
  • Christin, D., Reinhardt, A., Kanhere, S.S., & Hollick, M. (2011). A survey on privacy in mobile participatory sensing applications. Journal of Systems and Software, 84(11), 19281946.
  • Cohen, D., Fraistat, N., Kirschenbaum, M., & Scheinfeldt, T. (2009). Tools for data-driven scholarship: Past, present, future. Ellicott City, MD: Maryland Institute for Technology and the Humanities. Retrieved from
  • Cohen, J.E. (2008). Privacy, visibility, transparency, and exposure. University of Chicago Law Review, 75(1), 181201.
  • Cook, T. (1991). The archival appraisal of records containing personal information: A RAMP study with guidelines. Paris: General Information Programme, United Nations Educational, Scientific and Cultural Organization.
  • Cooke, B., & Kothari, U. (2001). Participation: The new tyranny? London: Zed Books.
  • Corburn, J. (2003). Bringing local knowledge into environmental decision making: Improving urban planning for communities at risk. Journal of Planning Education and Research, 22, 120133.
  • Cuff, D., Hansen, M., & Kang, J. (2008). Urban sensing: Out of the woods. Communications of the ACM, 51(3), 2433.
  • Curry, M.R., Phillips, D.J., & Regan, P.M. (2004). Emergency response systems and the creeping legibility of people and places. The Information Society, 20, 357369.
  • Dembosky, A. (2011, June 10). Invasion of the body hackers. FT Magazine. Retrieved from
  • Donner, J., Verclas, K., & Toyama, K. (2008). Reflections on MobileActive 2008 and the M4D Landscape. and Microsoft Research India. Retrieved from
  • Elwood, S. (2006). Critical issues in participatory GIS: Deconstructions, reconstructions, and new research directions. Transactions in GIS, 10(5), 693708.
  • Estrin, D. (2010). Participatory sensing: Applications and architecture [Internet predictions]. Internet Computing, IEEE, 14(1), 1242.
  • Estrin, D., & Sim, I. (2010). Open mHealth architecture: An engine for health care innovation. Science, 330(6005), 759760.
  • Fienberg, S.E. (2006). Privacy and confidentiality in an e-commerce world: Data mining, data warehousing, matching and disclosure limitation. Statistical Science, 21(2), 143154.
  • Fish, A., Murillo, L.F.R., Nguyen, L., Panofsky, A., & Kelty, C.M. (2011). Birds of the Internet—towards a field guide to the organization and governance of participation. Journal of Cultural Economy, 4(2), 157187.
  • Foucault, M. (1979). Discipline and punish: The birth of the prison. (A. Sheridan, Trans.). New York: Vintage Books.
  • Friedman, B. (Ed.). (1997). Human values and the design of computer technology. Cambridge, UK: Cambridge University Press.
  • Friedman, B., & Nissenbaum, H. (1997). Bias in computer systems. In B. Friedman (Ed.), Human values and the design of computer technology (pp. 2140). Cambridge, UK: Cambridge University Press.
  • Frikken, K.B., & Atallah, M.J. (2004). Privacy preserving route planning. In Proceedings of the 2004 ACM Workshop on Privacy in the Electronic Society (pp. 815). New York: ACM. Retrieved from
  • Froehlich, J., Chen, M.Y., Consolvo, S., Harrison, B., & Landay, J.A. (2007). MyExperience: A system for in situ tracing and capturing of user feedback on mobile phones. In Proceedings of the 5th International Conference on Mobile Systems, Applications and Services, MobiSys ‘07 (pp. 5770). New York: ACM. Retrieved from
  • Galloway, P. (2004). Preservation of digital objects. Annual Review of Information Science and Technology, 38, 549590.
  • Ganti, R.K., Pham, N., Tsai, Y.-E., & Abdelzaher, T.F. (2008). PoolView: Stream privacy for grassroots participatory sensing. In Proceedings of the 6th ACM Conference on Embedded Network Sensor Systems (pp. 281294). New York: ACM Press. Retrieved from
  • Gentry, E. (2010, October 13). Will butter make you smarter? Introducing butter mind … and coconut mind. Quantified Self. Weblog. Retrieved from
  • Gillespie, T.L. (2010). The politics of “platforms .” New Media & Society, 12(3). doi:10.1177/1461444809342738
  • Gray, J., Liu, D.T., Nieto-Santisteban, M., Szalay, A., DeWitt, D. J., & Heber, G. (2005, December). Scientific data management in the coming decade. ACM SIGMOD Record, 34(4), 3441.
  • Hayes, G.R., Abowd, G., Davis, J., Blount, M., Ebling, M., & Mynatt, E. (2008). Opportunities for pervasive computing in chronic cancer care. Pervasive Computing. Lecture Notes in Computer Science, 5013, 262279.
  • Hayes, G.R., Poole, E.S., Iachello, G., Patel, S.N., Grimes, A., Abowd, G., & Truong, K.N. (2007). Physical, social and experiential knowledge in pervasive computing environments. IEEE Pervasive Computing, 6(4), 5663.
  • Hey, T., & Trefethen, A.E. (2005). Cyberinfrastructure for e-science. Science, 308(5723), 817821.
  • Hill, K. (2011, February 25). Adventures in self-surveillance: Fitbit, tracking my movement and sleep. Forbes. Retrieved from
  • Horowitz, C.R., Robinson, M., & Seifer, S. (2009). Community-based participatory research from the margin to the mainstream: Are researchers prepared? Circulation, 119, 2633–2642.
  • Huey, L., Walby, K., & Doyle, A. (2006). Cop watching in the Downtown Eastside. In T. Monahan (Ed.), Surveillance and security: Technological politics and power in everyday life (pp. 149–165). New York and London: Routledge.
  • Iachello, G., & Hong, J. (2007). End-user privacy in human-computer interaction. Foundations and Trends in Human-Computer Interaction, 1(1), 1–137.
  • Iachello, G., Smith, I., Consolvo, S., Chen, M., & Abowd, G. (2005). Developing privacy guidelines for social location disclosure applications and services. In Proceedings of the 2005 Symposium on Usable Privacy and Security (pp. 65–76). New York: ACM Press. Retrieved from
  • Institute for Applied Autonomy. (2006). Defensive surveillance: Lessons from the Republican National Convention. In T. Monahan (Ed.), Surveillance and security: Technological politics and power in everyday life (pp. 167–174). New York and London: Routledge.
  • John, L.K., Acquisti, A., & Loewenstein, G. (2009). The best of strangers: Context dependent willingness to divulge personal information (working paper). Pittsburgh: Carnegie Mellon University. Retrieved from
  • Kang, J. (1998). Privacy in cyberspace transactions. Stanford Law Review, 50, 1193–1294.
  • Kang, J., Shilton, K., Burke, J., Estrin, D., & Hansen, M. (2012). Self-surveillance privacy. Iowa Law Review, 97(March).
  • Ketelaar, E. (2002). Archival temples, archival prisons: Modes of power and protection. Archival Science, 2, 221–238.
  • Kim, Y., Schmid, T., Charbiwala, Z.M., & Srivastava, M.B. (2009). ViridiScope: Design and implementation of a fine grained power monitoring system for homes. In Proceedings of the 11th International Conference on Ubiquitous Computing (pp. 245–254). New York: ACM Press. Retrieved from
  • Kinkade, S., & Verclas, K. (2008). Wireless technology for social change: Trends in mobile use by NGOs. Washington, DC and Berkshire, UK: The UN Foundation, Vodafone Group Foundation Partnership.
  • Kreiss, D., Finn, M., & Turner, F. (2011). The limits of peer production: Some reminders from Max Weber for the network society. New Media & Society, 13(2), 243–259.
  • Lam, M. (2009, April 14). Building a social networking future without Big Brother. Presented at the POMI 2020 Workshop, Palo Alto, CA. Retrieved from
  • Lange, P.G. (2007). Publicly private and privately public: Social networking on YouTube. Journal of Computer-Mediated Communication, 13(1).
  • Lee, C.A., & Tibbo, H.R. (2007). Digital curation and trusted repositories: Steps toward success. Journal of Digital Information, 8(2). Retrieved from
  • Lenhart, A., & Madden, M. (2007). Teens, privacy and online social networks. Washington, DC: Pew Internet & American Life Project.
  • Lievrouw, L.A., & Farb, S.E. (2003). Information and social equity. Annual Review of Information Science and Technology, 37, 499–540.
  • Lyon, D. (2001). Surveillance society (1st ed.). Buckingham, UK and Philadelphia: Open University Press.
  • Madden, M., Fox, S., Smith, A., & Vitak, J. (2007). Digital footprints: Online identity management and search in the age of transparency. Washington, DC: Pew Internet & American Life Project.
  • Malin, B., & Sweeney, L. (2004). How (not) to protect genomic data privacy in a distributed network: Using trail re-identification to evaluate and design anonymity protection systems. Journal of Biomedical Informatics, 37(3), 179–192.
  • Marx, G.T. (2002). What's new about the “new surveillance”? Classifying for change and continuity. Surveillance & Society, 1(1), 9–29.
  • Mayer-Schoenberger, V. (2007). Useful void: The art of forgetting in the age of ubiquitous computing (Working Paper No. RWP07-022). Cambridge, MA: Harvard University.
  • McKemmish, S., Gilliland-Swetland, A., & Ketelaar, E. (2005). “Communities of memory”: Pluralising archival research and education agendas. Archives & Manuscripts, 33(1), 146–174.
  • Monahan, T. (2006a). Counter-surveillance as political intervention? Social Semiotics, 16(4), 515–534.
  • Monahan, T. (Ed.). (2006b). Surveillance and security: Technological politics and power in everyday life. New York and London: Routledge.
  • Monash University School of Information Management and Systems. (2006, June 16). Trust and Technology Project: Building archival systems for indigenous oral memory. Retrieved from
  • Mun, M., Reddy, S., Shilton, K., Yau, N., Burke, J., … Boda, P. (2009). PEIR, the personal environmental impact report, as a platform for participatory sensing systems research. In Proceedings of the 7th International Conference on Mobile Systems, Applications, and Services (pp. 55–68). New York: ACM Press.
  • Narayanan, A., & Shmatikov, V. (2008). Robust de-anonymization of large sparse datasets. In Proceedings of the 2008 IEEE Symposium on Security and Privacy (SP ’08) (pp. 111–125). Washington, DC: IEEE Computer Society.
  • Nguyen, D.H., & Mynatt, E. (2002). Privacy mirrors: Understanding and shaping socio-technical ubiquitous computing systems (GVU Technical Report GIT-GVU-02-16). Georgia Institute of Technology. Retrieved from
  • Nieusma, D. (2004). Alternative design scholarship: Working toward appropriate design. Design Issues, 20(3), 13–24.
  • Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79(1), 119–158.
  • Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford, CA: Stanford Law Books.
  • Office for Protection of Research Subjects. (2007, March 30). UCLA investigator's manual for the protection of human subjects. Retrieved from
  • Office of the Secretary of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Washington, DC: Department of Health, Education, and Welfare.
  • Ohm, P. (2009). Broken promises of privacy: Responding to the surprising failure of anonymization (Research Paper No. 09-12). Boulder, CO: University of Colorado Law School. Retrieved from
  • Palen, L., & Dourish, P. (2003). Unpacking “privacy” for a networked world. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’03) (pp. 129–136). New York: ACM Press.
  • Perry, J., Macken, E., Scott, N., & McKinley, J.L. (1997). Disability, inability and cyberspace. In B. Friedman (Ed.), Human values and the design of computer technology (pp. 65–89). Cambridge, UK: Cambridge University Press.
  • Phillips, D.J. (2002). Negotiating the digital closet: Online pseudonyms and the politics of sexual identity. Information, Communication & Society, 5(3).
  • Phillips, D.J. (2005). From privacy to visibility: Context, identity, and power in ubiquitous computing environments. Social Text, 23(2), 95–108.
  • Raento, M., & Oulasvirta, A. (2008). Designing for privacy and self-presentation in social awareness. Personal and Ubiquitous Computing, 12, 527–542.
  • Rule, J.B. (2004). Toward strong privacy. University of Toronto Law Journal, 54(2), 183–225.
  • Shapiro, S. (1998). Places and spaces: The historical interaction of technology, home, and privacy. The Information Society, 14(4), 275.
  • Sheehan, K.B. (2002). Toward a typology of Internet users and online privacy concerns. The Information Society, 18(1), 21–32.
  • Shilton, K. (2010). Participatory sensing: Building empowering surveillance. Surveillance & Society, 8(2), 131–150.
  • Shilton, K., & Srinivasan, R. (2007). Participatory appraisal and arrangement for multicultural archival collections. Archivaria, 63, 87–101.
  • Spiekermann, S., & Cranor, L.F. (2009). Engineering privacy. IEEE Transactions on Software Engineering, 35(1), 67–82.
  • Srinivasan, R. (2004). Knowledge architectures for cultural narratives. Journal of Knowledge Management, 8(4), 65–74.
  • Srinivasan, R. (2007). Ethnomethodological architectures: Information systems driven by cultural and community visions. Journal of the American Society for Information Science and Technology, 58(5), 723–733.
  • Srinivasan, R., & Shilton, K. (2006). The South Asian web: An emerging community information system in the South Asian diaspora. In Proceedings of the Ninth Conference on Participatory Design: Expanding Boundaries in Design (vol. 1, pp. 125–133). New York: ACM Press.
  • Suchman, L. (1997). Do categories have politics? The language/action perspective reconsidered. In B. Friedman (Ed.), Human values and the design of computer technology (pp. 91–105). Cambridge, UK: Cambridge University Press.
  • Swarthout, A.M. (1967). Eavesdropping as violating right of privacy. American Law Reports (ALR3rd), 11, 1296.
  • Vaz, P., & Bruno, F. (2003). Types of self-surveillance: From abnormality to individuals “at risk.” Surveillance & Society, 1(3), 272–291.
  • Waldo, J., Lin, H.S., & Millett, L.I. (2007). Engaging privacy and information technology in a digital age. Washington, DC: The National Academies Press.
  • Westin, A.F. (1970). Privacy and freedom. New York: Atheneum.
  • Wolf, G. (2010, April 26). The data-driven life. New York Times. Retrieved from
  • Yao, M.Z., Rice, R.E., & Wallis, K. (2007). Predicting user concerns about online privacy. Journal of the American Society for Information Science and Technology, 58(5), 710–722.