Risk Personalization: Governing Uncertain Collective Risk Through Individual Decisions

Individuals are regularly made responsible for risks they wish to take: one can consent to the processing of personal data, and decide what to buy based on risk information on product labels. However, both large-scale processing of personal data and aggregated product choices may carry collective risks for society. In such situations, governance arrangements implying individual responsibility are at odds with uncertain collective risks from new technologies. We therefore investigate the governance challenges of what we call risk personalization: a form of governance for dealing with uncertain collective risks that allocates responsibility for governing those risks to individuals. We situate risk personalization at the intersection of two trends: governance of uncertain risk, and emphasis on individual responsibility. We then analyze three cases selected based on diversity: social media, nanomaterials, and Uber. Cross-case comparison highlights issues of risk personalization pertaining to (i) the nature of the risk, (ii) the governance arrangements in place, and (iii) the mechanisms for allocating responsibility to individuals. We identify governance challenges in terms of (i) meaningful choice, (ii) effectiveness in mitigating risk, and (iii) collective decision-making capacity. We conclude that the risk personalization lens stimulates reflection on the effectiveness and legitimacy of risk governance in light of individual agency.


Introduction
As an individual, one can decide whether to give consent for the processing of personal data when downloading an app or visiting a website. This decision may reflect a tradeoff between the benefits of using the service and the risks associated with misuse of the personal data involved. Similarly, one may assess the information on product labels when deciding which product to buy, for example in the case of cosmetics containing nanomaterials. Again, in making a choice one may weigh the positive effects of the product against the health risks of nanomaterials. However, collective risks, such as large-scale processing of personal data by app providers and associated profiling and discrimination, or potentially harmful effects on the environment of nanomaterials from large-scale use of cosmetics, may not be taken into account in such individual decisions. Furthermore, in both cases, it may not be clear whether enough information is available to make a well-founded decision.

Problem Statement
In this paper, we investigate the phenomenon that we call "risk personalization": a form of governance for dealing with uncertain collective risks that allocates responsibility for governing those risks to individuals. Where the effects of risk-taking are felt by individuals themselves, such individualized risk governance arrangements may make good sense. But the existence of tensions between collective risk and individual decisions is well known, described for example as a governance challenge in the tragedy of the commons (Hardin, 1968) and, particularly in terms of technology development and modernization, by Beck and Giddens (cf. Ekberg, 2007; Giddens, 1999).
We see governance arrangements involving individualized responsibility for risk emerging in many more cases in the context of new technology development, such as social media, nanotechnology, and taxi apps, where collective risks are a major issue, without appropriate discussion of these matters. Unlike responsibilization, in which more individual responsibility is the purpose of, or at least an explicit choice in, governance, many developments have individual responsibility as a consequence, without explicit analysis or consideration of its pros and cons for the governance of collective risks. It is this class of issues that is the topic of this paper.
Our starting point is that applying, or allowing for the development of, personalized governance arrangements for uncertain collective risks may itself be risky in terms of recreating tensions between individual decisions and collective consequences. By analyzing these tensions in more detail for a diverse set of individual risk governance arrangements, the associated governance challenges can be identified more specifically. This paper asks the question: What are the properties of, and differences between, governance arrangements in which responsibility for collective risks is allocated to individuals?

Method
To better understand the phenomenon, we first describe in more detail two developments in the governance of risk in today's risk society (governance of uncertain risk, and emphasis on individual responsibility) and their interaction, as these constitute the building blocks of the risk personalization concept.
Next, three empirical examples of risk personalization are explored; that is, cases where individuals bear responsibility for governing risks of new and uncertain technologies. Cases were selected based on diversity, following the diverse cases approach described by Seawright and Gerring (2008). The variables for which diversity is sought are (i) the nature of the risk and (ii) the existing governance arrangements in place. For the former, risk domains were selected that are strongly affected by new technology development (physical safety, data security, and environmental safety), focusing on both individual and collective components. For the latter, cases were selected ranging from well-established and explicit arrangements for the individual governance of risk (strong governance) to more implicit forms of risk governance via risk personalization and an apparent lack of regulation (weak governance). Taking these considerations into account, we selected privacy waivers on social media (data security, strong governance), the labeling of nanomaterial-containing cosmetics (environmental safety, medium governance), and taxi apps (Uber) and the responsibility for road safety (physical safety, weak governance).
The cases are explored and subsequently compared in a cross-case analysis to determine common properties and differences. The cross-case analysis is grounded in the framework provided in the second section and identifies: (i) the risk domain and sources of uncertainty, (ii) the governance arrangement in place to deal with risks, and (iii) the way in which emphasis is put on individual responsibility. The relevant dimensions emerge from differences observed across the cases. The paper concludes by assessing and explaining how risk personalization helps as a lens for critically assessing current risk governance arrangements and the resulting challenges, extracting key challenges and policy implications of risk personalization as a governance scheme.

Theoretical Framework
In this section, the concept of risk personalization is developed, defined as a form of governance for dealing with uncertain collective risks that allocates responsibility for governing those risks to individuals. This is done by situating the concept against the background of the risk society, and by highlighting major developments related to the governance of (collective) risks of new technologies: governance of uncertain risk and the fragmentation of governance regimes, and emphasis on the responsibility of individuals.

Governance of Uncertain Risk
The understanding that new technologies can profoundly influence both the user of the technology and (unprepared) third parties is well known and addressed in fields such as public economics, sociology, and law. Apart from generating potential risks for individual users, new technologies can produce collective risks; that is, their (negative) consequences can affect larger groups of people and the environment they live in (cf. Dorbeck-Jung & Bowman, 2017; examples are provided in Harremoës et al., 2001). Giddens (1999, p. 4) considers these to be examples of manufactured risk, that is, "risk created by the very progression of human development"; key indicators of the post-modernist "risk society," which creates as many new uncertainties and risks as it solves through scientific progress. A key challenge of the governance of new technologies has been how to deal with negative "externalities," which traditionally resulted in the development of institutions to govern collective risk (van Waarden, 2001). Examples of technologies that are primarily publicly governed via this regime are large-scale, complex, and capital-intensive technologies such as nuclear power, aviation, and, more recently, carbon capture and storage projects.
However, as predicted by Beck in the late 1980s, today's governance of new and uncertain technologies has become more decentralized and consequently more fragmented, as a result of a number of generic centrifugal trends: the increased role of markets and networks in governance (cf. Bevir, 2012), as well as the rise of multi-level governance (i.e., the hollowing out of the state) and "the loss of functions upwards to the European Union, downwards to special-purpose bodies and outwards to agencies" (Rhodes, 1997, p. 17). Governments in western societies have retreated from the (risk) regulation of classes of (societal) risks, giving way to more complex and diverse multi-level forms of governance in which different actors and stakeholders play a role (cf. Renn, Klinke, & Van Asselt, 2011). Hood, Rothstein, and Baldwin (2001) identify "variations in the ways risks and hazards are handled across policy domains" (Hood et al., 2001, p. 6), resulting in an "archipelago" of risk domains. In some domains, modern states enforce risk regulation via heavy arrangements, whereas in others state governments have reduced their role and yielded to different risk regulatory regimes in which self-regulation or the behavior of individuals or consumers plays a more important role (Hutter, 2010). This introduces more distributed forms of responsibility for the governance of risk (Wiener, 2010, p. 140), making it both less obvious and more difficult to govern collective risks centrally, thereby providing fertile ground for risk personalization arrangements.
For example, with regard to natural hazards, citizens are "gradually transformed into risk managers and active participants of the multi-scale risk governance network as they are encouraged or even required to take more responsibility for their actions." This process of "responsibilization" (Garland, 1996) and "privatization of risk" (cf. Kuhlicke, 2011) pushes individuals to engage in risk governance.
In recent years, risk governance scholars and risk practitioners have reached similar conclusions, finding that classic risk management approaches do not suffice as a governance regime for new technologies; other governance regimes are required under conditions of uncertainty (WRR, 2009). New technologies (especially the more disruptive ones) create new uncertainties about their impact (Beck, 1992). These may be hazards that turn out to be hard to quantify (known unknowns), but also undesirable consequences that were inconceivable at the time the new technology entered the market (unknown unknowns) (Taleb, 2008). A well-known example of this last form of uncertainty is the use of CFCs as refrigerator coolants and their effects on the ozone layer, which remained largely unknown until the 1970s (Mullin, 2002). It was precisely effects like these that caused Beck to argue that "our collective safety, security, and survival are compromised because the anonymous and cumulative risks are characterized by organized irresponsibility, unaccountability and uninsurability" (Ekberg, 2007, p. 349).
To this end, state-of-the-art thinking in risk governance holds that the management of risks of new technologies needs to be thoroughly discussed among a wide range of stakeholder groups (Klinke & Renn, 2012; Renn & Schweizer, 2009). The basic tenet is that as the expertise, agency, and use of new technologies are distributed, so too should risk governance be. Instead of actively governing risk, governments have taken on a facilitating role, seeking to connect the various stakeholders to discuss risk.

Emphasis on Individual Responsibility
A key development resides in the changed role of individuals in decision making in general, and in responsibility for risk decisions in particular. In the general sense, several analyses point to the fact that more choices can and should be made by individuals in contemporary society, increasing both the freedom and the responsibility of those individuals. This empirical and/or philosophical observation (or diagnosis) of modern society is what Beck (1992) refers to as individualization.
At the same time, trends in governance and within the context of a neoliberal political paradigm make it attractive to shift responsibility towards individuals as a governance arrangement, which may happen precisely because existing governance structures have become hollowed out and fragmented. Governance arrangements are defined by Delemarle and Larédo (2014, p. 160) as "the constellation of arenas and their dynamics […] that are aligned in a robust manner." Giving individuals control over their own situation may be seen as an incentive for them to improve their position. This governance arrangement has been termed "responsibilization" (Eckhardt & Dobscha, 2018; Giesler & Veresiu, 2014; O'Malley, 1996; Shamir, 2008) and can be defined as "a process whereby subjects are rendered individually responsible for a task which previously would have been the duty of another – usually a state agency – or would not have been recognized as a responsibility at all" (Juhila, Raitakari, & Hansen Löfstrand, 2019).
Responsibilization may also work as an arrangement when dealing with risks of new technologies. In these more individualized governance regimes, responsibility for uncertainty is attributed to different actors. Key roles are reserved for two actor groups. On one side of the spectrum, there are the technology developers, who as part of corporate social responsibility endeavor to create new knowledge about risks as they develop the market and exploit new technologies (Dorbeck-Jung & Shelley-Egan, 2013). On the other side of the spectrum, an equally important role is left to individual consumers. Individual consumers, through their consuming behavior, are "expected to address a wide variety of social issues through their individual consumption choices" (Eckhardt & Dobscha, 2018, p. 1).

Risk Personalization
When the governance landscape is fragmented, and/or suitable governance arrangements are not yet in place, allocating responsibility for risk decisions to individuals may seem an acceptable solution. Individual users of technologies make choices about how, when, and where they use technology, and seem well positioned to assess and observe the impact of those choices on their individual risk over time. In other words, responsibilization could be considered a feasible risk governance arrangement under conditions of uncertainty and fragmentation, which explains the connection between the developments described above. With responsibilization, more individual responsibility is the purpose of risk governance.
In a general setting that emphasizes individual responsibility, however, leaving decisions to individuals may also happen without explicit responsibilization arrangements, and thereby without consideration of alternatives. In those cases, individual responsibility may, for example, be the consequence of legal frameworks, a lack of collective decisions, and so on. The externalities that collective forms of risk governance were meant to tackle can then (re)appear: the aggregated decisions of individuals may lead to (new) collective risks. It is these developments that the risk personalization lens aims to uncover.

Three Cases of Risk Personalization
In this section, three cases of risk personalization are presented and subsequently analyzed to illustrate and explore risk personalization in practice. A diverse set of cases encompassing different risk domains (physical and environmental safety, security) and governance arrangements (strong vs. weak) was selected. The case studies are small and descriptive and serve to illustrate "different characteristics of a phenomenon in its context" to allow for exploration of the risk personalization concept (Baškarada, 2014, p. 4). All three cases show how individuals, in their role as consumers, are expected to manage individual risks and also end up (intentionally or not) managing collective risks. The cases generate information about the governance of technology use and risk management under these conditions. In all three cases, there exists a collective risk resulting from individual technology use, where there is uncertainty about the governance of those risks and a lack of clarity about who governs them, while at the same time the responsibility for governance is placed at the level of the individual consumer/user. It should be emphasized that the case studies do not necessarily represent the full extent of risk personalization and its implications, but merely show the value of this concept under diverse governance settings and regarding different technologies.

Social Media and Privacy Waivers
Since 2005, several social media platforms have come into use, with Facebook, Instagram, and WeChat as the biggest, connecting billions of people worldwide. At first glance, platforms such as Facebook and Instagram seem nothing more than public places for social networking, exchange of information, and communication. And as in other public spaces, risks to privacy are prevalent in the ethical and public debates about these platforms (Weir, Toolan, & Smeed, 2011). Apart from serious crimes such as identity theft, issues covered include the risk of being refused a job based on social network pictures, or of location data being used for unwanted purposes. These risks emerge because of design weaknesses that give unauthorized parties (hackers) access to the data, but also because of legitimate use of the associated services by both other users (checking the social network sites of applicants) and the service providers themselves (using the acquired data for selective treatment of citizens).
Over time, ethical concerns over these platforms have moved beyond individual privacy (Anonimized). First, users often publish personal information not only about their own lives but also about the lives of their friends and family. Second, the business model of most of these platforms lies in being able to predict what kind of person someone is (Hildebrandt, 2008), for example in order to show targeted advertisements or to decide who gets a loan and who does not. For such business models, the data of just one individual is useless; profiling data only make sense in the context of a broader population. When one possesses profiles of a large group, one can make decisions based on those profiles, potentially leading to undesirable forms of bias and discrimination. There is thus an accumulation issue: new risks emerge as increasing numbers of users supply data. This has led some researchers to speak of a "tragedy of the privacy/data commons" (Fuchs, 2010; Yakowitz, 2011). Third, the events around the intervention of Cambridge Analytica in the 2016 US presidential campaign show that collective risk not only pertains to the commodification of personal information, but that the large-scale accumulation of data also enables large-scale manipulation of voters on social media (The Guardian, 2018).
Furthermore, the governance landscape is fragmented. Increasing societal indignation puts pressure on (state) regulators and technology providers to give consumers and societal actors such as governments and regulators more insight into the business practices of online platforms, and to counter online phenomena resulting from technology use, such as targeted fake news. Privacy legislation, such as the EU General Data Protection Regulation (GDPR), creates new rules that govern the behavior of online providers, limiting the purposes for which data can be used and demanding individual consent as a form of choice architecture. Following the consent principle, the responsibility for governing the risks of data release, sharing, and storage is placed with individual users, via ex ante consent mechanisms and privacy waivers. Despite collective governance arrangements, users are asked to check privacy agreements and privacy updates as they install and use social media platforms. These "solutions" have remained problematic. For example, individual users are unable to vary their use of media applications over time or based on the specific conditions they find themselves in; the options users are left with are either to opt in or to opt out. In addition, although the visibility of data to other users is often configurable, what cannot be selected is which data are shared with the social network provider itself.
Lastly, the choices imposed by the consent principle suggest that social media consumers are only dealing with individual risks, which could be managed through individual decisions. This goes back to privacy as the right to be left alone, with informational privacy being one aspect of this (van den Hoven, Blaauw, Pieters, & Warnier, 2018). The market logic of a privacy waiver assumes that if individuals give explicit permission for the use of personal data by specific parties for specific purposes, this shows that they must be content with the arrangement. However, this individual waiver ignores shifting power balances through the large-scale gathering of data by certain players. Although individuals might be able to make decisions based on the consequences for themselves, the accumulation of power through data on the side of the service provider may not be factored sufficiently into individual decisions. The uncertainty from a risk management perspective follows from the fact that it remains (relatively) obscure how platform companies use individual data, as well as data obtained from others, for other purposes.

Labeling of Nanomaterials in Cosmetics
Developments in fields like material science, chemistry, and physics have made it possible to produce nanoscale particles (1-100 nm). Nanomaterials are used in a variety of products and industries, such as cosmetics, foodstuffs, coatings, and construction materials (e.g., Cushen, Kerry, Morris, Cruz-Romero, & Cummins, 2012;Lee, Mahendra, & Alvarez, 2010). In cosmetics, nanomaterials are mostly used to improve product properties such as solubility and translucence. Examples of nanomaterials are nano titanium dioxide for producing transparent sunscreen, or nanosilver for its antibacterial effects.
However, experts are uncertain about the hazardous effects of such nanomaterials (Borm et al., 2006). There are indications that exposure to some nanomaterials may cause adverse health effects such as pulmonary diseases. It is known that nanomaterials may be easily taken up in the body after inhalation and ingestion. In addition to health risks, nanomaterials in consumer products may wash off and accumulate in the environment, though the release of nanomaterials from consumer products and solid composites has proven hard to model (Mackevica & Hansen, 2016).
The governance of nanomaterial use and production has been on the public and research agendas of governments in western countries from early on. A proactive approach to risk regulation has been fed by sentiments in the nano-sector to "avoid GMO like situation" (personal communication). In the European Union, the public debate on GMOs has led to a de facto moratorium on their use and consumption. Nonetheless, devising clear-cut risk regulation for nanomaterials is no easy task. Some argue that the Precautionary Principle should apply to nanomaterials (Van Broekhuizen & Reijnders, 2011), whereas others worry that too strict regulation may inhibit innovation. For years now, specialists have debated whether nanomaterials as a group of compounds actually warrant special treatment. "Nano" is not a label that indicates a level of hazard (such as irritants) or particular behavior (such as persistent organic pollutants); the definition of nanomaterials used by the European Union is foremost an indication of scale (European Commission, 2011), a scale that may come with new properties and behaviors but is not by definition hazardous. In the wake of these discussions, the European Commission issued regulation that provides the legal contours for market access of cosmetics (Bowman, May, & Maynard, 2018). This includes requirements for the sharing of chemical and safety data, foreseeable exposure conditions, and ingredient listings in the Cosmetic Product Notification Portal. One of the rationales for this regulation was to increase transparency and information exchange about nanomaterials, in response to the so-called "information void" around nanomaterials (Bowman & Ludlow, 2009). The regulation obliges producers and importers of nanomaterial-containing cosmetics to label nanomaterial content in their ingredient list by adding the word "nano" in brackets following the name of the substance.
This is seen as a way of promoting transparency regarding the use of nanomaterials in cosmetics (Bowman et al., 2018), as it makes information on nano-content available to consumers on the packaging of these products.
The regulation also works to responsibilize individual consumers for the governance of uncertain nanomaterial risk. It ensures that consumers have information about the nanomaterial content of their products, thereby introducing a consumer choice that enables consumers to assess and decide whether they want to use a product that contains nanomaterials. This regulation has been criticized as symbolic, since known safety risks will always fall under the responsibility of producers. However, through this regulation the European Commission has also distributed responsibility over the cosmetics value chain, thereby allocating responsibility for unknown and unquantifiable risks to consumers (Shelley-Egan & Bowman, 2015). Individual consumers may not easily fulfill these responsibilities, as most experts agree that additional testing and safety information are required to make fully informed choices.

Uber and Responsibility for Road Safety
Since its start in the early 2010s, the transportation network company Uber has been presented as the vanguard technological platform of the sharing economy, heralding the end of the "traditional" mobility industry. Uber provides a platform for individual car owners to offer taxi services to Uber clients without being part of the existing taxi business. From the onset, the legitimacy of Uber has been challenged, as it enables any individual owning a car to provide a service that is restricted in most countries to a specific group of providers (i.e., taxi companies) and/or professionals (taxi drivers). As such, Uber has been disruptive: it provides a technology that breaks open the taxi industry and creates new competition (Cramer & Krueger, 2016), competition that is not always deemed fair in a market with small margins and that has left many taxi business owners vulnerable (Rushe, 2019; Sainato, 2019).
In addition to these concerns, questions have recently arisen about the impact of Uber on road safety. For example, in the Netherlands, there was public outrage over an Uber driver who was involved in a fatal accident with a pedestrian (Van Bergeijk, 2018). Research has shown that "UberX drivers spend a significantly higher fraction of their time, and drive a substantially higher share of miles, with a passenger in their car than do taxi drivers" (Cramer & Krueger, 2016, p. 177). The Uber workforce is thus highly efficient in picking up new passengers. This has to do with the wide use and flexibility of the technology: the Uber app effectively matches and connects drivers and passengers that are close by, and both drivers and passengers are incentivized to respond and connect supply and demand via dynamic pricing. To facilitate responsiveness, the Uber app actively monitors and steers driver behavior. Furthermore, Uber actively incentivizes driver behavior by introducing additional reward and badge systems for extra earnings and free college tuition (Hawkins, 2018). These initiatives have been interpreted as ways to manipulate drivers into less attractive labor (e.g., at night or during holidays) using "video game techniques, graphics and noncash rewards of little value that can prod drivers into working longer and harder – and sometimes at hours and locations that are less lucrative for them" (Hawkins, 2018, p. 1). As Uber brings an inexperienced and unregulated workforce onto the streets, and actively nudges driver efficiency, the app may introduce road safety risks.
Road safety is not a new governance concern. What is new is that, with Uber, road behavior allegedly changes: a new group of drivers uses the roads more intensively, they have less experience, and their behavior behind the wheel is influenced and incentivized by technology to be responsive to instant pickup demands, which may affect their alertness in traffic. Compared to the heavily regulated taxi sector, there are fewer governance arrangements for Uber and its drivers, who officially work independently. Several European countries have accepted and allowed Uber to operate, whereas others have experimented with and/or forbidden (forms of) Uber services.
However, regulatory arrangements for governing the use of the app have limitations. In many countries where Uber employs drivers, Uber is not formally based, and thus it cannot legally be held accountable for its conduct. As a result, the responsibility for the governance of road safety risks has become more fragmented; it lies at least partly with Uber drivers and end users. Uber has vigorously claimed that its app has technical features that stimulate safe driving. For instance, it does not allow drivers to be logged in for more than a restricted number of consecutive hours. The company thus denies responsibility for stimulating "irresponsible" individual use, even though it is part of the Uber business model that drivers continuously and actively engage with the app while driving, thereby distracting them. In the meantime, Uber drivers and users have only limited influence and options, primarily over the quality of their own driving and the choice to use or not use Uber.

Cross-Case Comparison
This section discusses the similarities and differences in the three different cases of risk personalization. We look at (i) the risk domain and sources of uncertainty, (ii) the governance arrangements, and (iii) the way in which emphasis is put on individual responsibility.

On Risks and Uncertainty
The three cases are situated in different risk domains (physical and environmental safety, and security risks), each with different sources of uncertainty. In the nanomaterial case, the nature of the risk is relatively straightforward: a physical material may or may not react with other physical materials and therefore pose (eco-)toxicological risks. In the social media case, collective risks emerge as a result of consumer and company behavior. Social media companies may or may not decide to profile and/or commodify personal data. Uncertainty here has to do with a lack of knowledge about (future) company strategies and behavior, rather than with not knowing physical properties, as is the case with nanomaterials. The cause of uncertainty is indeterminacy rather than a lack of scientific information (Felt et al., 2007). In the Uber case, there is indeterminacy of the app and its effects, or rather unpredictability of user/driver behavior. Furthermore, the interaction with other drivers, pedestrians, and road users adds another level of complexity, which makes the influence of the technology on road safety (the actual risk the technology brings) hard to establish.
The discussed risks also scale differently. In the nanomaterial case, the accumulation of nanomaterials in the environment is of most concern: the risk typically grows incrementally until a threshold is reached and toxic effects occur. In the Uber case, the effect on road safety is immediate as soon as the behavior of at least some Uber drivers becomes risky. By contrast, in the social media case, the collective risk does not seem to exist until a critical mass of information is shared online, which paves the way for new kinds of analysis and business models for social media companies.
Furthermore, the cases vary in how easily the collective risks can be observed by individual consumers. In the Uber case, one may argue that the driving behavior of Uber users is directly visible to the Uber user and other road users, although obvious limitations in terms of alertness exist. However, connecting individual observations and obtaining insight into large-scale, more systemic patterns of driving behavior remains a problem. In the case of nanomaterials, the risks are even harder to observe. The individual cosmetics user may read a product label but is not necessarily aware of other nanomaterials present in his/her environment. The individual user therefore cannot accurately assess whether accumulation of nanomaterials actually occurs. In the social media case, the analyses of data collected from users and the inner workings of social media companies are notoriously opaque. It is unclear what kinds of analyses are made and what is done with the data, and thus whether and to what extent collective risks may occur. Negative collective effects often become visible only via in-depth analysis of multiple incidents, which has resulted in scandals (e.g., the Cambridge Analytica influence on US presidential elections). Meanwhile, the (negative) collective effects of nudging and more subtle behavioral influences resulting from uncontrolled and opaque company behavior remain undetected.

Governance Arrangement for Collective Risks
Governance responses to collective risk and uncertainty in these cases vary as well. For social media and Uber, the risks remained unknown until evidence of collective harm emerged gradually over time; once the technology became more widely diffused, the potential negative effects became more visible, and a number of incidents fed concerns that a pattern might exist. In-depth research, which identified and reported these effects, resulted in public outrage and demands for a governance response. In the nanomaterial case, uncertainties and collective risks were anticipated from the start by experts, as the physical risks of miniaturization and "tampering with nature" were foregrounded, and the need for some form of governance arrangement to address collective risks was identified. This may have to do with nanomaterials being a paradigmatic example of an uncertainty and resulting risk that fits, or at least seems to fit, within existing risk management practices. However, the adequacy of the current governance arrangement in the form of product labeling is debatable.
In contrast to the nanomaterials case, in the social media and Uber cases the need for governance arrangements was not immediately clear. Whereas nanomaterials carry associations with risk management of existing technologies such as nuclear power and GMOs, the emerging collective risks of digital platforms seem less predictable, less imaginable, or less threatening, and only seem to be recognized in hindsight. In the meantime, the default personalized risk governance arrangement, which allocates risk and risk management responsibility to individual app users and Uber drivers, remains in place amid resistance to changing the current governance arrangements.
The social media and Uber cases also demonstrate an asymmetry in the capacity and resources to assess the (collective) risks resulting from new technology use. Governments are notoriously understaffed and lack important knowledge with respect to digital technologies. This information lag is reinforced by the fact that digital markets and tech companies develop and evolve much faster than meaningful forms of regulation can respond. Furthermore, the cases point to an additional reason for the information asymmetry regarding public knowledge and data about collective risk(s): the strong link between the collective risk(s) and the business models resulting from new technology use. The collective risk of profiling is not the result of bad technology design; it is an essential feature of the current social media business model (and thus the result of deliberate technology design). Social media company revenues are primarily based on profiling, even though the companies portray themselves to consumers as platforms and enablers of communication. Uber does not profit from creating collective risks and safety hazards on roads per se. However, the Uber app does seem to directly affect driver behavior (looking at screens, etc.), and removing that feature would seem to preclude using the technology at all. Different app designs and incentives could perhaps be developed in the future to reduce collective risks.
The effectiveness of current individualized risk governance arrangements is thus questionable for different reasons. In the social media case, individual consent regarding privacy policies was put in place for digital technologies including social media, but many collective, societal risks resulting from social media use (e.g., fake news) have only recently been recognized. Profiling, commodification of data, and associated manipulation are collective risks that have been identified, but new governance arrangements have not (yet) been developed. These collective risks are seldom distinguished as typical problems resulting from large-scale use of social media and as an inherent feature of the current business models that are sustained by the current governance arrangements. Uber has undertaken steps to improve road safety as a result of media coverage of complaints against the company, for instance by developing features in its app to combat fatigue and make drivers more alert. However, it is debatable whether the current risk personalization, which creates even more dependency on an app that may itself pose collective risks, is the way forward in managing them. In the nano case, risk experts and product developers have failed to develop alternative governance arrangements despite decades of controversy and public debate. The current risk personalization regime assumes that individual consumers can solve the questions and problems that other actors in the value chain cannot.

Allocations of Responsibility to Individuals
The cases show different ways in which the responsibility to manage collective risk is allocated to individuals. In all three cases, one may argue, customers have a key choice: they can choose to use the product or technology, or not. In addition, in the social media case a choice is offered to mitigate individual privacy risks. However, the drawbacks of this risk governance regime are well known (e.g., Bechmann, 2014). The current regime does not cover the collective risk of profiling. If such choices were to be added, one could well argue that individual Internet users would be confronted with so many decisions that the rational approach is to "click them away." When users must decide whether to accept cookies for each website, a habit of ignoring such messages develops quickly. This may even have a negative effect on consent decisions on other topics.
In the nanomaterials case the mechanism is different: it is assumed that offering nano-labeling information to individual consumers results in an informed choice. However, this choice, if made actively, is rendered meaningless by the absence of clear scientific information about whether the nanomaterials in the product are actually dangerous. This may even promote collective scares, such as those seen in the European debate about GMO governance (Knowles, Moody, & McEachern, 2007). In the Uber case, there is no clear moment of choice for individuals to pose questions or deal with the (collective) risk (only the choice to use the service or not). Responsibility for road safety falls by default (or by omission) on individual Uber drivers and other road users, which suggests that these actors and the current organization of responsibilities are able and sufficient to cope with all the problems that emerge from new technology use.

Summary of Cross-Case Comparison
The similarities and differences between the three cases that were discussed in this section help to understand more precisely what is at stake with risk personalization. Table 1 summarizes the cross-case comparison. Again, it is stressed that the overview is not meant as an exhaustive list. In the next section, we revisit the concept of risk personalization based on the theory and the cases.

Discussion
This paper coined the concept risk personalization to start a conversation about a class of problems that emerges when the responsibility to govern uncertain collective risks is allocated to individuals. In present times, and against a background of fragmented governance, uncertain risks, and a general emphasis on individual responsibility, there are many instances of such allocation outside the explicit and purposeful arrangements of responsibilization.
Three diverse cases of risk personalization were analyzed to explore the phenomenon and its governance challenges. All cases represent individual responsibility in combination with aggregation of individual behavior into collective risk, while the cases differ in the risk domain and current governance arrangements. The conceptual lens of risk personalization helps to analyze how these cases vary in terms of the nature and scale of risks, some characterized by scientific uncertainty, and others by uncertainty about user and/or company behavior. Next, it identified variations in governance responses to these risks, some emerging along the way and others anticipating uncertainty and risk from the start. The cases showed big differences in terms of capacity and resources to govern these risks. In some cases institutions are formed; in the case of social media, institutional voids (rather than failing institutional responses) could be identified. Lastly, the risk personalization concept facilitated reflection on the allocation of at least part of the responsibility to govern these risks to individuals. In some cases, risk personalization was enacted more actively, e.g., by explicitly providing individuals with a choice to mitigate or "manage" risk, whereas in others the phenomenon was more implicitly present, for instance because of the absence of other governance arrangements.

Governance Challenges
This study also shows that there are several governance challenges associated with risk personalization. A first challenge is that the choices presented to individuals do not always amount to a conscious choice about the full extent of the collective risk. Even if it could be assumed that individuals could obtain all the appropriate information to make informed decisions, it is not self-evident that their decisions would "add up" to a fair or adequate balance between individual and collective risks. In fact, the risk personalization scheme offers individuals possibilities and incentives to free ride: to engage in individual use of new technology, even though it could produce significant collective risks (such as giving away data for better service), whereas on a collective level these individual choices aggregate into socially undesirable consequences, such as profiling-based discrimination.
A second governance challenge concerns whether individuals can meaningfully contribute by their actions to resolving or preventing these collective risks. Does it really matter what an individual posts in the face of millions of people sharing personal information online? Does it really matter that one individual uses a product that may pose a minute, uncertain risk to the environment? With Uber this is clearer; it does matter directly that individuals drive safely. Furthermore, at what cost would individuals mitigate collective risks, especially if the individuals themselves are not exposed to them? In the case of Uber, one may choose not to use the app as a driver. However, given that most Uber drivers do not use it as a side job, as Uber intended, but as a main source of income, this is a choice with big consequences. Alternatives for some nanomaterials in cosmetics are available but may reduce functionality. There are technological alternatives for products like Facebook, but they do not provide the same social setting and interactions, precisely because almost everyone uses Facebook.
From the discussion above, it appears that the governance challenges in relation to risk personalization are mainly related to the "quality" of the choices made by individuals. That is, risk personalization only makes sense as a governance regime if the nature of the risk, as well as the way in which the choices are presented to individuals, allow for sufficient quality of the aggregated individual decisions with respect to addressing the collective risk. This is clearly not always the case.
The third governance challenge has to do with the design of the governance arrangement itself. Risk personalization is a "choice architecture" imposed on individuals (Thaler & Sunstein, 2008). This study shows that more sensitivity to the underlying reasons for personalizing risk governance in the first place might be useful. A set of questions could aid such an analysis: Does somebody benefit from personalizing risk? If there are reasonable collective decisions to be made, why bother individuals? A lot of individual time and effort may be wasted if individuals are asked to make decisions that might as well have been made at a collective level. Moreover, collective decision-making capacity may have value in itself, and this may be weakened if too many risk decisions are personalized. (The current pandemic is a good example of the value of collective decision-making capacity.)

Policy Implications
Having identified all these challenges, does this mean that society should refrain from the personalization of risks of new technologies or that all risks should be governed via collective arrangements? Not necessarily. On the one hand, there are some roles that individuals can play in the governance of collective risk. For instance, when experts disagree about the existence and scope of risks as is the case with nanomaterials, it seems reasonable to ask individuals (as citizens or consumers) whether they would want to be exposed to these uncertain risks (as a form of informed consent). That would indeed imply that not only users, but also those facing externalities get a say in this. Furthermore, if the collective risk is at least partly manageable by individuals making changes to their behaviors, like in the Uber case, risk personalization could help to address them as responsible agents.
On the other hand, collective arrangements such as regulation and national policies have their own limitations. To name but a few: governments have limited policymaking capacity; centralized top-down policymaking is well known for its many flaws and increasing inefficiency in today's complex society; and similar criticism is voiced against public regulation, which always lags behind, while public agencies may lack the information or expertise to identify the collective risks. A collective response may help to deal with new and emerging risks, but it may also obscure some of the complexities and difficulties surrounding them.
The key message is that risk personalization is the result of a choice (or the absence thereof) to distribute responsibility for the governance of uncertain collective risks in a particular way. A discussion about this arrangement and the tensions it brings needs to take place among stakeholders. Although tensions between collective risk and individual decisions have always existed, these tensions seem to increase as a result of governance arrangements in several domains. The explicit identification of risk personalization as a phenomenon enables a discussion about possible alternative governance arrangements, with the possibility of allocating responsibility at the level where decisions are most effective and efficient. This study thus aspires to have provided a lens that enables stakeholders to discuss risk personalization.

Academic Implications
The risk personalization concept enables a sharper analysis and discussion of collective risks resulting from the use of new technology and their management. It facilitates a discussion about the distribution of responsibilities among actor groups and considers the limitations of decision making for all actor groups, including individuals. The collective consequences of individual technology use, and their connection to the way decisions about such risks are allocated to individuals as consumers, are poorly understood and seldom explicitly addressed, and consequently remain largely outside already politicized debates about the governance of these technologies.
Therefore, more research is needed on when and under what conditions risk personalization is an appropriate risk governance arrangement, and how it compares to other risk governance arrangements. For instance, does the way in which collective risks develop (e.g., via accumulation of individual behavior, the interaction between multiple processes, or via the behavior of a single actor or group) influence the effectiveness of individualized governance strategies? Does risk personalization lead to shirking, e.g., because corporations and governments shift their responsibilities to individuals? Risk personalization does not just happen, but follows from deliberate business case designs. What are the roles and responsibilities of intermediaries (e.g., platform and product developers) that create and frame risk management choices?
With this paper, and the introduction of the risk personalization concept, discussions about the relationship between different actors such as governments, individuals, and collectives, in the governance of uncertain and collective risk are placed back on the research agenda.