Privacy and Facebook: The Visible and the Invisible
The privacy concerns delineated above are confirmed by several reports and studies on Facebook. In a report on 23 Internet service companies, the watchdog organization Privacy International charged Facebook with severe privacy flaws and placed it in the second-lowest category for “substantial and comprehensive privacy threats” (“A Race to the Bottom,” 2007). Only Google scored worse; Facebook tied with six other companies. This rating was based on concerns about data matching, data mining, transfers to other companies, and in particular Facebook's curious policy that it “may also collect information about [its users] from other sources, such as newspapers, blogs, instant messaging services, and other users of the Facebook service” (“Facebook Principles,” 2007, Information We Collect section, para. 8).
As early as 2005, Jones and Soltren identified serious flaws in Facebook's setup that would facilitate privacy breaches and data mining. At the time, nearly 2 years after Facebook's inception, users' passwords were still being sent without encryption and thus could easily be intercepted by a third party (Jones & Soltren, 2005). This has since been corrected. A simple algorithm could also be used to download all public profiles at a school, since Facebook used predictable URLs for profile pages (Jones & Soltren, 2005). The authors also noted that Facebook gathered information about its users from other sources unless the user specifically opted out. As of September 2007, the opt-out choice was no longer available but the data collection policy was still in force (“Facebook Principles,” 2007).
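The enumeration weakness described by Jones and Soltren can be illustrated with a short sketch. The URL template, hostname, and ID range below are invented for illustration (Facebook's actual address scheme is not reproduced here); the point is only that sequential, predictable identifiers let a trivial loop generate the address of every profile at a school, which a scraper could then fetch one by one.

```python
# Hypothetical illustration of the predictable-URL flaw reported by
# Jones and Soltren (2005). The URL pattern below is an assumption
# made for demonstration purposes, not Facebook's real scheme.

def enumerate_profile_urls(school, start_id, end_id):
    """Return candidate profile URLs for one school's numeric ID range."""
    template = "http://{school}.example-network.com/profile?id={pid}"
    return [template.format(school=school, pid=pid)
            for pid in range(start_id, end_id + 1)]

# A scraper would simply request each generated URL in turn.
for url in enumerate_profile_urls("stateu", 1, 5):
    print(url)
```

Because the identifiers are dense and sequential, no search feature or authorization step is needed to discover profiles; the address space itself leaks the complete membership list.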
Even the most lauded privacy feature of Facebook, the ability to restrict one's profile to be viewed by friends only, failed for the first 3 years of the site's existence: Information posted on restricted profiles showed up in searches unless a user chose to opt his or her profile out of searches (Jones & Soltren, 2005). This glitch was fixed in late June 2007, but only after a technology blogger made the loophole public and contacted Facebook (Singel, 2007). Recent attempts to make the profile restrictions more user-friendly and comprehensive seem mostly PR-driven and still include serious flaws (Soghoian, 2008a).
In September 2006, Facebook introduced the “News Feed,” which tracks and displays the online activities of a user's friends, such as uploading pictures, befriending new people, writing on someone's wall, etc. Although none of the individual actions were private, their aggregated public display on the start pages of all friends outraged Facebook users, who felt exposed and deprived of their sense of control over their information (boyd, 2008). Protest groups formed on Facebook, among them the 700,000-member group “Students Against Facebook News Feed” (Romano, 2006, para. 1). Subsequently, Facebook introduced privacy controls that allowed users to determine what was shown on the news feed and to whom.
Additional concerns have been raised about the use of Facebook by government agencies such as the police or the Central Intelligence Agency. In a rather benign example, a police officer resorted to searching Facebook after witnessing a case of public urination outside a fraternity house at the University of Illinois at Urbana-Champaign, where the only other witness on the scene claimed not to know the lawbreaker. Once on Facebook, the officer searched that witness's friend list and found the lawbreaker he was looking for. The lawbreaker received a $145 ticket for public urination; the witness received a $195 ticket for obstructing justice (Dawson, 2007). Additionally, the Patriot Act allows state agencies to bypass privacy settings on Facebook in order to look up potential employees (NACE Spotlight Online, 2006). An online presentation, “Does what happens in the Facebook stay in the Facebook?” (2007), points out a number of connections between various Facebook investors and In-Q-Tel, the not-for-profit venture capital firm funded by the CIA to invest in technology companies serving the CIA's information technology needs. Facebook's chief privacy officer, Chris Kelly, accused the video of “strange interpretations of our policy” and “illogical connections” but did not substantially rebut the allegations (Kelly, 2007).
Further criticism is based on the fact that third parties can use Facebook for data mining, phishing, and other malicious purposes. Creating digital dossiers of college students containing detailed personal information would be a relatively simple task; a clever data thief could even deduce social security numbers (which can often be estimated from a 5-digit ZIP code, gender, and date of birth) from the information posted on almost half the users' profiles (Gross & Acquisti, 2005). Social networks are also ideal for mining information about relationships or common interests in groups, which can be exploited for phishing. For example, Jagatic, Johnson, Jakobsson, and Menczer (2005) conducted a phishing experiment on selected college students at Indiana University, using social network sites to obtain information about the students' friends. The attack had an alarmingly high 72 percent success rate within the social network, as opposed to 16 percent within the control group. The authors add that phishing experiments by other researchers showed similar results: “We must conclude that the social context of the attack leads people to overlook important clues, lowering their guard and making themselves significantly more vulnerable” (Jagatic et al., 2005, p. 5). A high level of vulnerability is also engendered by the fact that many users post their address and class schedule, making it easy for potential stalkers to track them down (Acquisti & Gross, 2006; Jones & Soltren, 2005). Manipulating user pictures, setting up fake user profiles, and publicizing embarrassing private information to harass individuals are other frequently reported forms of malicious mischief on Facebook (Kessler, 2007; Maher, 2007; “Privacy Pilfered,” 2007; Stehr, 2006).
While Facebook's privacy flaws are well documented and have made it into the news media, relatively little research is available on how exactly these problems play out in the social world of Facebook users and how much users know and care about these issues. In their small-sample study on Facebook users' awareness of privacy, Govani and Pashley (2005) found that more than 80 percent of participants knew about the privacy settings, yet only 40 percent actually made use of them. More than 60 percent of the users' profiles contained specific personal information such as date of birth, hometown, interests, relationship status, and a picture.
Tufekci (2008) also asserted that students may try “to restrict the visibility of their profile to desired audiences but are less aware of, concerned about, or willing to act on possible ‘temporal’ boundary intrusions posed by future audiences because of persistence of data” (p. 33). The most obvious and readily available mechanism for controlling the visibility of profile information is restricting it to friends. However, Ellison, Steinfield, and Lampe (2007) discovered that only 13 percent of the Facebook profiles at Michigan State University were restricted to “friends only.” Moreover, the category “friend” is very broad and ambiguous in the online world; it may include anyone from an intimate friend to a casual acquaintance or a complete stranger known only by his or her online identity. Though Jones and Soltren (2005) found that two-thirds of the surveyed users never befriend strangers, their finding also implies that one-third are willing to accept unknown people as friends.
This is confirmed by the experiment of University of Missouri student Charlie Rosenbury, who wrote a computer program that enabled him to invite 250,000 people to be his friends; 30 percent added him as a friend (Jump, 2005). Similarly, the IT security firm Sophos set up a fake profile to determine how easy it would be to data-mine Facebook for the purpose of identity theft. It found that 41 percent of the 200 people contacted revealed personal information, either by responding to the contact (and thus making their profile temporarily accessible) or by immediately befriending the fake persona. The divulged information was enough “to create phishing e-mails or malware specifically targeted at individual users or businesses, to guess users' passwords, impersonate them, or even stalk them” (“Sophos Facebook ID,” 2007).
These findings show that Facebook and other social network sites pose severe risks to their users' privacy. At the same time, they are extremely popular and seem to provide a high level of gratification to their users. Indeed, several studies found that users continually negotiate and manage the tension between perceived privacy risks and expected benefits (Ibrahim, 2008; Tufekci, 2008; Tyma, 2007). The most important benefit of online networks is probably, as Ellison, Steinfield, and Lampe (2007) showed, the social capital that results from creating and maintaining interpersonal relationships and friendships. Since the creation and preservation of this social capital is systematically built upon the voluntary disclosure of private information to a virtually unlimited audience, Ibrahim (2008) characterized online networks as “complicit risk communities where personal information becomes social capital which is traded and exchanged” (p. 251). Consistent with this, social network site users have been found to display greater risk-taking attitudes than individuals who are not members of an online network (Fogel & Nehmad, 2008).
It can therefore be assumed that the expected gratification motivates users to provide and frequently update very specific personal data that most of them would immediately refuse to reveal in other contexts, such as a telephone survey. Thus, social network sites provide an ideal, data-rich environment for microtargeted marketing and advertising, particularly when user profiles are combined with functions that track user behavior, such as Beacon. This commercial potential may explain why Facebook's valuation has reached astronomical levels, albeit on the basis of speculation. Since Microsoft's fall 2007 purchase of a 1.6 percent stake for $240 million, estimates of the company's value have ranged as high as $15 billion (Arrington, 2008; Sridharan, 2008; Stone, 2007).
For the average user, however, Facebook-based invasion of privacy and aggregation of data, as well as their potential commercial exploitation by third parties, tend to remain invisible. In this respect, the Beacon scandal was an accident: it made users aware of Facebook's vast data-gathering and behavior surveillance system. Facebook's owners quickly learned their lesson: The visible part of Facebook, innocent-looking user profiles and social interactions, must be neatly separated from the invisible parts. As with an iceberg, the visible part makes up only a small fraction of the whole (see figure 1).
The invisible part, on the other hand, is constantly fed by the data that trickle down from the interactions and self-descriptions of the users in the visible part. To maintain the separation (and the user's motivation to provide and constantly update his or her personal data), any marketing and advertising based on these data must be unobtrusive and subcutaneous, not in the user's face like the original version of Beacon.
The conceptual framework of our research is a combination of three media theories: the “uses and gratifications” theory, the “third-person effect” approach, and the theory of “ritualized media use.”
While this study does not test these three media theories, they are relevant as an analytical background and a framework from which to explain and contextualize our findings. The uses and gratifications approach looks at how people use media to fulfill their various needs, among them the three dimensions of (1) the need for diversion and entertainment, (2) the need for (para-social) relationships, and (3) the need for identity construction (Blumler & Katz, 1974; LaRose, Mastro, & Eastin, 2001; Rosengren, Palmgreen, & Wenner, 1985). We assume that Facebook offers a strong promise of gratification in all three dimensions, strong enough to possibly override privacy concerns.
The third-person effect theory states that people expect mass media to have a greater effect on others than on themselves. This discrepancy between self-perception and assumptions about others is known as the perceptual hypothesis within the third-person effect approach (Brosius & Engel, 1996; Davison, 1983; Salwen & Dupagne, 2000). Though this approach has far-reaching implications with respect to people's support for censorship (known as the behavioral hypothesis), our interest is mostly focused on the perceptual side: How do Facebook users perceive effects on privacy caused by their use of Facebook, and what consequences do they draw from this? Together with the uses and gratifications theory, the third-person effect would explain a certain economy of effect perception (i.e., negative side effects are ascribed to others, while positive effects are ascribed to oneself).
The theory of ritualized media use states that media are not just consumed for informational or entertainment purposes; they are also habitually used as part of people's everyday life routines, as diversions and pastimes. Media rituals are often tied to temporal structures, such as favorite TV shows at a particular time, and to specific social rituals, such as ritualized meetings of friends to watch a favorite TV show (Couldry, 2002; Liebes & Curran, 1998; Pross, 1992; Rubin, 1984). It can be expected that the use of Facebook is at least to some degree ritualized and (subcutaneously) built into its users' daily life, a routinization (Veralltäglichung) in the sense of Max Weber (1921/1972). In conjunction with the two other approaches, this theory would further explain the enormous success of Facebook and users' lack of attention to privacy issues.
Based on the literature and theories examined above, the following four hypotheses for the survey and four open-ended research questions to guide the interviews were proposed:
H1: Many if not most Facebook users have a limited understanding of privacy issues in social network services and therefore will make little use of their privacy settings.
H2a: For most Facebook users, the perceived benefits of online social networking will outweigh the observed risks of disclosing personal data.
H2b: At the same time, users will tend to be unaware of the importance of Facebook in their life due to its ritualized use.
H3: Facebook users are more likely to perceive risks to others' privacy rather than to their own privacy.
H4: If Facebook users report an invasion of personal privacy, they are more likely to change their privacy settings than if they report an invasion of privacy happening to others.
RQ1: How important is Facebook to its users and what role does it play in their social life?
RQ2: To what extent is Facebook part of everyday rituals, and to what extent has it created rituals of its own?
RQ3: What role does Facebook play in creating and promoting gossip and rumors?
RQ4: What negative effects, particularly with respect to privacy intrusions, does Facebook have?