CAN A PREDICTED FUTURE STILL BE AN OPEN FUTURE? ALGORITHMIC FORECASTS AND ACTIONABILITY IN PRECISION MEDICINE *

The openness of the future is rightly considered one of the qualifying aspects of the temporality of modern society. The open future, which does not yet exist in the present, implies radical unpredictability. This article discusses how, in the last few centuries, the resulting uncertainty has been managed with probabilistic tools that compute present information about the future in a controlled way. The probabilistic approach has always been plagued by three fundamental problems: performativity, the need for individualization, and the opacity of predictions. We contrast this approach with recent forms of algorithmic forecasting, which seem to turn these problems into resources and produce an innovative form of prediction. But can a predicted future still be an open future? We explore this specific contemporary modality of historical futures by examining the recent debate about the notion of actionability in precision medicine, which focuses on a form of individualized prediction that enables direct intervention in the future it predicts.

only in the course of time and as a result of the actions that are performed and decisions that are made. The future becomes open in the sense that it is not determined: although it has its own structures that heavily constrain what can be realized, the future can still be shaped; it becomes a space of opportunities and possibilities.9 But precisely because of this, the open future is also unknowable in a much more radical way than in any previous society that relied on the temporal order of eternity, which was unknowable only to human beings because of their limited capabilities.10 How is it possible, under these conditions, to build expectations for the future? How is it possible to have an orientation? The reflexivity of temporal horizons produces the specific form of uncertainty that characterizes the orientation to the future of modern society, which is expressed, first and foremost, in the explosion of contingency: the "it could also be otherwise" that makes any order provisional and any expectation about the future indeterminate. This contingency, according to Luhmann,11 is the "eigenvalue" of modern society; it also affects the search for necessary references and unquestionable orientations, and even this search and its results are now observed as contingent. From a sociological perspective, the spread of contingency is itself contingent; it may be (and has been) otherwise, and research should explore how our society got to this condition and what alternatives can be envisaged. Here, historical analysis is clearly indispensable.
Extensive research has shown that contingency is not a specifically modern category, nor is the attitude that attempts to calculate and possibly exploit the contingency of the future.12 Contingency has been known since ancient times, and it has been known specifically in the study of what are now called alethic modalities: the necessary, the impossible, and different kinds of possibilities.13 From the very beginning, however, contingency was a strange and difficult modality. Whereas other modalities are defined univocally (the necessary is that which is always true, and the impossible is that which is never true), contingency is defined only negatively, not by what it is but by what it is not: the contingent is neither impossible nor necessary. Not only that, but its definition combines two negations that cannot be reduced to each other. Contingency negates impossibility and necessity. A contingent given exists but can also not exist or can exist otherwise. From the perspective of classical metaphysics, this situation is unmanageable: according to the principle of the excluded middle, something either is or is not, although this may be difficult to know, and one may be mistaken. An utterance is either true or false; if one negates its truth, one gets a falsity, and vice versa. If one negates contingency, however, it is not clear what one gets: impossibility or necessity. The result is inherently ambiguous and difficult to deal with.14 The choice, then, was to not deal with it at all. Systems of modal logic still do not deal with contingency, which cannot be formalized and on which no calculus can be developed.15 This is also true in a temporal perspective, wherein contingency concerns the future that cannot yet be observed in the present. Consider Aristotle's famous reflection on contingent futures: as we cannot observe the future naval battle today, we cannot yet say whether an utterance about the battle is or is not true. Yet this did not imply a contingent world or doubts about the necessary order of nature: "it is necessary for there to be or not to be a sea-battle tomorrow; but it is not necessary for a sea-battle to take place tomorrow, nor for one not to take place-though it is necessary for one to take place or not to take place."16 Although the occurrence of the future battle is contingent, the alternative between true and not true is necessary: one or the other value will be assigned. Human beings do not yet know which, so they must refrain from making judgments. They can only observe their inability to observe,17 as was the case in the following centuries in discussions about notions such as hazard, chance, luck, adventure, fate, or destiny18 and in the complex theological debate about the figure of God as the possible necessary cause of the contingency of the world.19

In this approach, contingency was recognized and could be observed, but one had no indications of how to deal with it, other than knowing that one could not know what would happen. On the other hand, such indications were not needed, because, from the premodern perspective, contingent futures were not a matter on which human beings could act. The necessary order was attributed to the world, not to observers. There was nothing they could purposely do that would make a difference in the contingency of the future. Humans only had to deal with the impossibility of knowing in the present what would happen later, although they could perhaps resort to divinatory procedures that could give limited and always obscure indications of the still invisible future.20 The observers could only prepare for a future that did not depend on their plans and goals.

9. Risk sociology analyzes the social consequences (possible harms and opportunities) of this awareness; see Ulrich Beck, Risk Society: Towards a New Modernity, transl. Mark Ritter (London: Sage, 1992) and Luhmann, Soziologie des Risikos.
14. In Luhmann's terms, referring to Edmund Husserl, the negation of contingency lacks in technicalization; see Niklas Luhmann, Theory of Society, vol. 1, transl. Rhodes Barrett (Stanford: Stanford University Press, 2012), 317. See also Elena Esposito, "Die Selbst-Falsifizierung der Technik und ihre Rätsel," in Technik in Dystopien: Jahrbuch Literatur und Politik, vol. 7, ed. Viviana Chilese and Heinz-Peter Preusser (Heidelberg: Winter, 2013), 63-74.
15. See, for example, the discussion about the problem of finding a "logic for contingent beings" in Arthur N. Prior's Time and Modality (Oxford: Oxford University Press, 1957), 155; Prior remarked that "modal logic is haunted by the myth that whatever exists exists necessarily" (ibid., 48). See also Harry Deutsch, "Contingency and Modal Logic," Philosophical Studies 60, no. 1-2 (1990), 89.
19. See Luhmann, "Kontingenz als Eigenwert der modernen Gesellschaft," 107, with reference to Thomas Aquinas, Summa Theologiae 1.14.13. The theological speculations about divine foreknowledge, human freedom, and the determinism of the future, which were built on the Aristotelian passages on future contingencies and had pervaded all of medieval theology, underwent a new twist in late sixteenth- and seventeenth-century Jesuit scholasticism. Via the two competing doctrines of necessitas moralis and scientia media, a heightened sensitivity to human agency was integrated into a system in which the notion of an omniscient observer was attenuated but could not yet be abandoned. Significantly, both of the aforementioned intellectual endeavors arrived at solutions that prefigured modern ways of dealing with the uncertain future: statistical thinking in the case of moral possibility and a shift of focus from what is actually future to what is only conditionally future in the case of middle knowledge. For more on this, see Klaus Reinhardt, Pedro Luis SJ (1538-1602)
Everything changed with the modern approach, whereby uncertainty began to refer to the decisions of observers, and decisions became a problem. Since then, the matter is no longer the insolvable question of whether the naval battle will happen but whether one should risk making it happen. Here is the disruptive significance of Koselleck's fundamental divergence of experience space and expectation horizon, that is, of the open future. Nothing we have learned from the past and can know in the present provides us with certain indications of what to expect for the future, because there is no necessary structure that gives order to social contingency: how the future will come about depends on what the agents decide and do today when facing this uncertainty.21 Underneath contingency there is only further contingency: the contingency of the future depends on the contingency of present behavior, and nothing of the future can be seen, not even in part, before the future has become present. Yet we have to decide, and the search for criteria shifts to the structures of contingency, focusing not on the world but on the observers and their decisions.
The distinction between observation of the world and observation of other observers who themselves observe the objects in their worlds was introduced by Heinz von Foerster22 as the distinction between first-order observation and second-order observation. The modern understanding of contingency is strictly connected with second-order observation: each observer faces a world in which there are other observers who refer to their own worlds, within which their perspective is only one among others; so, like the others, their own perspective could be otherwise. Then, indeed, contingency becomes rampant: "everything becomes contingent when what is observed depends on who is observed"23, a subtle but fundamental difference from the classic formula of subjectivism, according to which "everything said is said by an observer,"24 who can observe, in their own (contingent) way, the order of the world. Now, everything depends not on the observer who observes but, first and foremost, on the observer who is observed; that is, it depends on a contingent perspective on contingency.
While subjectivism often leads to some form of "anything goes," in the perspective of second-order observation, the uncertainty becomes not only much more radical but also operationalizable. In a modern society characterized by second-order observation, contingency becomes a problem to be managed. The connections between contingencies are the foundation of the structures of function systems in modern society. In the perspective of systems theory, the differentiation of modern society into function systems, which is the basic innovation of modernity,25 is connected with a shift to second-order observation. In each of the autonomous domains of modern society, communication is based on the observation of observers. In science, for example, publications make it possible to observe how others observe and to expose one's own observation to the observation of others; in the economy, one observes how other observers (competitors or buyers) are observing by observing prices and their variations; politics observes in the mirror of public opinion how it is observed; and the law is observed in the form of judicial decisions that might be different but not arbitrary. Contingency is the eigenvalue of modern society because it is the foundation of the structures that direct the operations of communication, which all rely on second-order observation.

III. PROBABILISTIC FUTURES
How do we manage the complexity of these relationships? What is the point of all this contingency, Luhmann has asked,26 if we are simply exposed to it and do not know how to organize and use it? Is it possible to utilize it in communications and decisions? The modern contingency associated with the open future produces great uncertainty, but it also generates new opportunities. The future is unknowable because it is as yet undetermined, but precisely because of this, it can be acted upon, and present decisions make a difference. The future is built in the present, even and precisely if we cannot know how it will come about. Whether there will be a naval battle depends on present decisions, though probably not along the lines that the decision makers expected. But how does one decide how to proceed? Obviously, the complexity of the open future is highly difficult to manage, and the orientation to the past does not provide criteria for deciding. For some centuries, then, the search for criteria has turned to the future;27 indeed, it has turned to a second-order observation of the various possible futures that can be imagined (projected) by observers in the present.28 The tool to manage and formalize these possibilities is the calculus of probabilities, which became the basis of the form of prediction developed by modern society: projecting possible future scenarios and calculating their probability/improbability in order to make decisions in a way that is not arbitrary and that all observers recognize as being rational.29

Although probability calculus cannot determine today whether a future event will come about, it provides an orientation by measuring (at 37 percent, 71 percent) the present observers' ignorance/knowledge about that event on the basis of the available information. It does not tell us what will happen, but it does convey how much the observers know (and do not know) about it. The uncertainty remains, but the probability-oriented decision makers can claim that they calculated correctly and behaved rationally, although things can always turn out differently, and one knows it. And one knows that one will be able to claim this also in retrospect, even if the prediction is disproven: it was the prediction that was wrong, not the decision, which can be assumed to have been as rational as possible.
In this perspective, the approach to decisions and the possibility of acting on the future in a controlled way do not contradict its openness. According to Koselleck, "since the future of modern history opens itself as the unknown, it becomes plannable."30 Probabilistic forecasting, the tool to keep planning in a world that has moved beyond the Enlightenment belief in progress, actually does not predict the future at all; it only deals with a present given: the information available to the observer at the moment and the current projections about the future. It deals with the "present future," while the real future given will be the "future present" that will come about at a later point in time.31 It will come about not in probabilistic percentages at 37 percent or 71 percent but as 0 or as 1, and it cannot be known and calculated in advance. Probabilistic procedures make it possible to calculate these present projections, thereby providing decision makers with reliable criteria for managing their present uncertainty with respect to the future, and they are able to do so precisely because they do not claim to predict it. Our society calculates probabilities and makes plans, but it does not have a genuine prediction of the future or a calculus of contingency.
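The distinction can be put in miniature code form. The sketch below is our illustration, not the article's: the function name `rational_decision` and its payoff numbers are invented. The point is only that the probability belongs to the present projection, while the future present arrives as a plain 0 or 1 that may contradict it without making the decision irrational ex ante.

```python
import random

def rational_decision(p_success: float, gain: float, loss: float) -> bool:
    """Decide to act iff the expected value, computed today, is positive."""
    return p_success * gain - (1 - p_success) * loss > 0

# The "present future": a measure of the observer's knowledge/ignorance now.
p = 0.71
act = rational_decision(p, gain=100.0, loss=150.0)

# The "future present": it realizes as True or False, never as 71 percent.
outcome = random.random() < p

# Even if `outcome` disappoints, the decision maker can claim to have
# calculated correctly: it was the prediction that failed, not the decision.
print(act, outcome)
```

The design choice mirrors the text: the calculation operates entirely on present information (`p`, `gain`, `loss`); nothing in it constrains what `outcome` will actually be.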

IV. THE REDISCOVERY OF THE FUTURE PRESENT
In the last few decades, a growing dissatisfaction with the established management of future uncertainty, and particularly with the probabilistic approach, has been spreading.32 Dealing with issues such as climate change and environmental risk, medical emergencies (for example, pandemics and antibiotic resistance), and the use of complex technical innovations that disclose their consequences only after the fact, the knowledge register of the present future no longer seems sufficient. Rather, there is a call for attention to the future present, a call that is expressed in the form of concerns about future generations and a generalization of the precautionary principle.33 This is particularly true of events labeled as catastrophes: injuries that partake "of the irreparable, the irremediable, the incompensable, the unpardonable, the nonprescriptive"34, future events that should be avoided in any case, no matter how low their probability of occurring is. In the face of catastrophes, the criteria and rationality model of probability calculus run up against their limitations: decision makers can no longer trust that their decisions are and will be considered correct if the future differs from the prediction. It is not enough to refer to the present future by accumulating all available information; the future present, with all its irreducible uncertainty, must be taken into account.

29. Lorraine Daston, Classical Probability in the Enlightenment (Princeton: Princeton University Press, 1988).
30. Koselleck, "Historia Magistra Vitae," 39.
31. Niklas Luhmann, "The Future Cannot Begin: Temporal Structures in Modern Society," Social Research 43, no. 1 (1976), 130-52. The duplication in the future present and the present future reproduces, in the temporal dimension, the duplication that characterizes contingency: a datum that cannot be determined in only one dimension and always points to an additional perspective, which is different but implied. And, like contingency, it cannot be calculated. For the link between the distinction and the "knowledge registers of historical futures," see Simon and Tamm, "Historical Futures," 11-12, 21.
In the probability calculus referring to the observation of observers, the discrepancy between the assessment before and after the event35 is neutralized. Even if things turn out differently from the prediction, decision makers can still argue that they made the most rational choice given the available information and not change their assessment. In the event of catastrophes, instead, the difference between before and after re-emerges as "anticipation of a retrospective re-valuation."36 That others agree on the rationality of decisions is not enough; the world breaks in with all its weight, and one is forced to consider the world and its impact. Second-order observation must also refer to the unobservable future present. Today, we already know that the decision that leads to catastrophic events will be unacceptable, even if it was made in an informed and controlled manner. An ancient category re-emerges, a category that is seemingly incompatible with the orientation to the open future: the idea of destiny. The recent concerns at the center of the Fridays for Future environmental movement37 and the call for responsibility for future generations are, in fact, based on what Luhmann called the "prospect of an unamenable destiny,"38 which, as in the case of ancient tragic heroes such as Oedipus, is realized precisely by the behavior of the actors. The difference, however, is that, today, we know already at the moment of decision that the open future depends on our behavior. Whereas ancient heroes discovered a posteriori that they had fulfilled their destiny while trying to avoid it, we are aware of it a priori.39 We still do not know the future, and nevertheless, we have to decide and produce it.
V. ALGORITHMIC FUTURE

How do we decide? Today, we can apparently use the predictions of algorithms that can supposedly eliminate the uncertainty of the future and provide accurate and reliable indications of what will happen. In his book The Algorithm and the Oracle, Alessandro Vespignani, a highly reputed expert in the field of computational epidemiology, wrote that "algorithms are close to fulfilling the desire for a secret-free, completely predictable tomorrow."40 Pedro Domingos, one of the leading researchers in machine learning and inference under uncertainty, has claimed that an "ultimate Master Algorithm," an algorithm that "can derive all knowledge in the world-past, present, and future-from data," will soon be produced.41 Similar claims are widespread in the debate about recent machine learning techniques, which have been developed in the last ten to fifteen years using big data.42 These claims, which seem completely implausible in our society oriented to an open future, are based on a shift in perspective that marks a radical difference from statistical procedures.43

Although algorithmic techniques are a development of traditional statistical techniques,44 the introduction of advanced forms of machine learning and the use of big data marked a caesura that is increasingly separating statistical programming and algorithmic programming. In fact, if one examines in detail the way algorithmic predictions operate, their claims appear different from traditional probabilistic ones. Whereas probabilistic techniques are aimed at the present future and at the information available to human observers today, algorithms focus on the future present. Whereas probabilities disclose the structure of the present observation of the future, algorithms do not refer to observations and claim to directly disclose the structures of the future. Since the future does not yet exist, the prediction can, of course, always be disconfirmed, but it still provides different kinds of insights, ones that go beyond what observers know and can observe now.

38. Luhmann, "Die Beschreibung der Zukunft," 147: "Aussicht auf ein indisponibles Schicksal."
39. In this regard, the concept of "unintended consequences" that Ankersmit proposed to integrate into historical thinking (see "The Thorn of History," which is his contribution to the "Historical Futures" project) is itself a genuinely modern representation of the same model. When the future opens up, we know that our actions will have consequences that we cannot yet foresee; at the same time, there is no prophecy to be fulfilled, just interests and degrees of rationality.
44. As Simon and Tamm have stated, "the new futures may not replace the old ones, but they may coexist with them in complex constellations" ("Historical Futures," 4).
There are some basic differences between statistical and algorithmic techniques. First of all, there is the kind of data that is used. Instead of working with limited amounts of controlled and accurately sampled data, algorithms work with huge amounts of unselected and uncontrolled (big) data and look for patterns:45 regularities and configurations that are not necessarily probable and do not refer to what observers know or can understand today, and that are therefore often obscure.46 If they are used for prediction, they refer directly to the future.
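The difference in register can be made concrete with a toy contrast. This is our illustration, not anything from the literature: a sampled statistic answers with one population-level figure, while a pattern search over a large, unselected dataset answers for the single case at hand (here via a naive nearest-neighbor lookup; the threshold rule generating the data is an arbitrary assumption).

```python
import random

random.seed(0)
# "Big data": unselected records of (feature, outcome), with no sampling design.
records = [(x, x > 5.0) for x in (random.uniform(0, 10) for _ in range(10_000))]

# Statistical register: one aggregate figure for the whole population.
base_rate = sum(outcome for _, outcome in records) / len(records)

# Algorithmic register: a pattern-based answer for this specific case only,
# read off from the most similar record rather than from a fitted model.
def predict_for_case(case: float) -> bool:
    _, outcome = min(records, key=lambda r: abs(r[0] - case))
    return outcome

print(round(base_rate, 2))    # a trend in the population, roughly 0.5
print(predict_for_case(8.3))  # an indication for the individual case
```

Nothing in `predict_for_case` explains why it answers as it does; it simply reproduces a regularity found in the data, which is the opacity the text goes on to discuss.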
The results, indeed, are forecasts that are fundamentally different from the forecasts with which we are familiar. Rather than modern probabilistic rationality, algorithmic predictions resemble the logic of the management of the future of premodern societies,47 a logic that has been discredited and marginalized since the eighteenth century48 and was expressed in divinatory procedures.49 The similarity between algorithmic forecasts and divination can be seen in at least three respects: they are performative, they are individualized, and they are opaque.
Divinatory procedures were explicitly performative. The self-fulfilling character of oracle predictions is well known; think about ordeals or about the story of Oedipus, who fulfilled the prognosis about his future precisely by doing everything possible to avoid it.50 They could also be self-defeating, because they could induce human beings to intervene and change the course of the future; apotropaic rituals tried to effect a revision of the individual fate.51 Also, as Dominique Cardon has observed,52 with their operations, algorithms "manufacture" the future they anticipate, performatively affecting the future they predict.53 The algorithms used in predictive shopping, for example, suggest to individual customers the specific products they expect the customers will be willing to buy; the algorithms do so even before the individuals themselves choose the products, and possibly before they become aware of their need. If the customer buys the product, the self-fulfilling prediction produces the future it predicted. If the customer does not buy the proposed product, the algorithm learns from this experience and improves its performance. But the forecast can also activate an action to hinder the undesired future and become self-defeating. In crime prevention, for example, algorithms claim to identify criminal activities before they are performed, and the prediction is expected to make it possible to act before an individual begins a criminal career.54

A second feature of divinatory responses is that they always referred to singular events or individuals: Is it advisable to start the battle tomorrow? Will the divinatory subject's wife give birth to a male child after having so many female children? "Will my child have a big nose?"55 Algorithmic predictions do the same: they do not indicate probable trends in the population but instead give precise indications for a specific case. According to Eric Siegel, "individualization trumps universals": "Whereas forecasting estimates the total number of ice cream cones to be purchased next month in Nebraska, predictive technology tells you which individual Nebraskans are most likely to be seen with cone in hand."56

Finally, oracular and divinatory responses were and had to be mysterious and enigmatic. Divination provided insights into the order of the cosmos and the superior perspective reserved for entities higher than humans; understanding the underlying logic not only was impossible but would have been unacceptable hubris.57 Similarly, the opacity of the latest and most refined deep learning algorithms is well known and much discussed.58 The procedures leading to algorithmic predictions are often incomprehensible to human observers, including the very programmers who designed them; this is what the recent research direction of "explainable AI" is concerned with.59

These features are not entirely new. They are also present in probabilistic predictions and have often been identified and discussed, but they have been treated as problems, not as opportunities. At least since the start of the debate about self-fulfilling prophecies,60 the circularity of predictions has been a thorny problem for sociologists,61 economists,62 philosophers of science,63 and pollsters.64 That statistics do not give indications for individuals (no one has 1.4 children), moreover, has often been criticized as one of its liabilities: probabilistic procedures only give indications about populations, while individual reference is often what we really care about. And statistical predictions are also frequently opaque to humans, who do not normally reason in probabilistic terms and do not interpret them correctly.65

In algorithmic prediction, on the contrary, these features become the starting points for an innovative management of future uncertainty.66 Performativity is not a hindrance but a feature to be actively exploited in the genesis of the forecast. The goal is no longer truth but performativity. According to Bernhard Rieder, "we no longer (only) decide based on what we know; we know based on the decision[s] we have to make."67 Expectations about our own behavior are thus simultaneously descriptive and prescriptive in nature. Individual reference becomes the starting point of algorithmic prediction: the forecast applies to a specific context and a specific dataset; if a prediction is needed for a different individual, one has to start over again.68 Learning algorithms are extremely effective and can achieve impressive results, but their results refer only to the specific case for which they have been trained. That the results are local, specific, and provisional is their strength. In the words of Andy Clark, "context, it seems, is everything."69 And from a computational point of view, the opacity of the procedures that generate the prediction, an aspect that is obviously very problematic for their acceptance by the public, is not a problem but, paradoxically, an advantage: by disengaging from the burden of comprehensibility, the prediction can be much more accurate and efficient.70
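The predictive-shopping loop described above can be sketched schematically. Everything here is an illustrative assumption, not a real recommender system: the class name `PredictiveShopper`, the per-(customer, product) scores, and the simple update rule merely stand in for whatever a production system would use. The point is the combination of individualized prediction (one score per customer-product pair) with performative feedback (the forecast is revised by the behavior it provokes).

```python
from collections import defaultdict

class PredictiveShopper:
    """Toy recommender: predicts per individual and learns from the outcome."""

    def __init__(self, learning_rate: float = 0.2):
        # Individual reference: one score per (customer, product) pair,
        # starting from an uncommitted 0.5.
        self.scores = defaultdict(lambda: 0.5)
        self.lr = learning_rate

    def suggest(self, customer: str, products: list[str]) -> str:
        # The forecast holds for this customer only, not for a population.
        return max(products, key=lambda p: self.scores[(customer, p)])

    def feedback(self, customer: str, product: str, bought: bool):
        # Performative loop: a purchase confirms (and strengthens) the
        # prediction; a refusal makes the algorithm revise its forecast.
        key = (customer, product)
        target = 1.0 if bought else 0.0
        self.scores[key] += self.lr * (target - self.scores[key])

model = PredictiveShopper()
model.feedback("ada", "tea", bought=True)
model.feedback("ada", "coffee", bought=False)
print(model.suggest("ada", ["tea", "coffee"]))  # "tea" now outranks "coffee"
```

Note that a prediction for a different customer starts again from the neutral score: the model's knowledge is local, specific, and provisional, exactly the property the text identifies as a strength.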

VI. ACTIONABILITY IN PRECISION MEDICINE
As Luhmann observed long before the recent appearance of advanced algorithms, the increasingly sharp break between the present future and the future presents does not necessarily exclude forecasts,71 but they have a different form and different problems than our familiar ones do. In the final part of this contribution, we provide some tentative insights into the use of algorithmic predictions in an area of our society where forecasting techniques are of central importance: the field of medicine and, in particular, the emerging field of precision medicine.72 Precision medicine, like algorithmic data management in general,73 uses big data and machine learning to measure, in a continuous way, many different kinds of variables (ranging from the molecular to the social) and generate a highly detailed description of an individual patient's genotype and phenotype. In this way, advocates of precision medicine promise that highly accurate diagnoses and prognoses can be produced.74 Our hypothesis is that these kinds of predictions combine the three specific features of algorithmic forecast (performativity, individual reference, and opacity) and give rise to an innovative way to manage the uncertainty of the future, developing, in an unprecedented way, the dependence of the open future on present decisions and actions.
The debate in precision medicine is focused on the notion of actionability. The term was introduced in 2005 and has spread enormously over the past ten years, becoming the central element of a new "biomedical regime"75 and "redefin[ing] the significance of the prediction and thus its clinical utility."76 The label of "actionability" is used to qualify a genetic finding that is both (likely) pathogenic and eligible for treatment. In this sense, medicine based on actionability can be conceived of as a particular practice, one that is "aim[ed] explicitly at anticipating the shape of things to come, effectively bringing about their desired futures or avoiding undesired ones."77 The diagnostic analysis is thus combined with the therapeutic recommendation to form a single binary indicator (actionable or not actionable). Algorithms do not "make" medical decisions, of course, but they mark the spot at which a decision has to be taken, integrating the alterability of the future into their predictive analysis.
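Read this way, the binary indicator can be caricatured as the conjunction of two tests. The sketch below is our schematic reading of the concept, not clinical software: the `TARGETED_THERAPIES` table is a hypothetical lookup (EGFR L858R and BRAF V600E are real targetable mutations, but the table and thresholds here are toys).

```python
# Hypothetical lookup of biomarkers for which an intervention exists.
TARGETED_THERAPIES = {"EGFR_L858R": "erlotinib", "BRAF_V600E": "vemurafenib"}

def is_actionable(variant: str, pathogenicity: str) -> bool:
    """Actionable = (likely) pathogenic AND eligible for treatment."""
    likely_pathogenic = pathogenicity in ("pathogenic", "likely_pathogenic")
    treatable = variant in TARGETED_THERAPIES
    return likely_pathogenic and treatable

# The flag does not "make" the medical decision; it marks where one is due.
print(is_actionable("EGFR_L858R", "pathogenic"))  # True: pathogenic and treatable
print(is_actionable("TP53_R175H", "pathogenic"))  # False: no matched therapy here
```

The design choice captures the text's point: pathogenicity alone (the diagnostic register) does not trigger the flag; only its conjunction with an available intervention (the therapeutic register) does.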
The result is a new form of prediction, giving rise to a cascade of consequences. First of all, prediction changes because the focus shifts from describing the future to intervening in the future: actionable prediction is "information that medical practitioners are able (or obliged) to act on and put into practice."78 It is an approach that, in our terms, takes advantage of the performative component of algorithmic prediction. Knowing precise aspects of the structure of the future is beneficial only when the predictions can be acted upon; that is, when they are actionable. As already noted in Barack Obama's inaugural speech for the US Precision Medicine Initiative in 2015,79 the very promise of precision medicine is to deliver the right treatment for the right patient, not merely to predict the future. Simply knowing when one is going to die resembles a dystopian scenario such as the one conceived by Elias Canetti in his play The Numbered,80 whose characters are named by the age at which they will die. Against such a dystopian setting, precision medicine strives to offer a new kind of management of the future that corresponds to the possibility of intervening in the future present.81

This breakthrough is related to (and simultaneous with) the spread of next generation sequencing (NGS) in oncology, which has made genomic sequencing cheaper, faster, more accurate, and accessible to a wider range of patients.82 As a consequence, a detailed exploration of the human genome became possible; in oncology, this has radically changed the previous approach, which was based on the genetic analysis of hereditary mutations in (still) asymptomatic healthy individuals and which used (and still uses) probabilistic techniques to guide the present observation of the future. The goal was to produce a prognosis and identify the risk of developing a disease for groups of individuals with the same mutation. Today, instead, NGS aims to identify nonheritable (sporadic) mutations that arise in specific individuals who are already ill. Whereas, in the twentieth-century imagination, the individuals' futures were "written" in their genes but could not be read, their futures are now readable, and algorithmic techniques are used not to predict the future but to intervene in the present and change its course. This is what actionability is about: moving from the diagnosis and prognosis of diseases to direct intervention in their future evolution in specific cases.
In this context, the concept of actionability responds practically to the long-held suspicion that the "geneticization" of disease would push the widespread pathologization of the population to the extreme. 83 Indeed, the recognition that we are all potentially ill is confirmed at the molecular level by the fact that polymorphisms are found in every human genome. In the context of predictive medicine, though, it is not these potential futures that play the decisive role but instead those that can be actively produced with a sufficient degree of certainty-that is, where this degree of certainty not only is an index of the prognostic level of confidence (as in probability calculus) but also integrates into the calculation the effectiveness of the remedies used. According to Tanya Stivers and Stefan Timmermans, "actionability rests both on the biological characteristics of genomic results and on the ways that clinicians . . . marshal these findings." 84 Evidence for possible interventions is based on bodily features, such as genetic mutations, that are called biomarkers. A working group of the US National Institutes of Health Director's Initiative on Biomarkers and Surrogate Endpoints defined a biomarker as "a characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention." 85 The employment of measurable signs in medical practice is a long-standing achievement in the field. However, in the second half of the twentieth century, the growth of biomarker research was fostered by the diffusion of clinical laboratories. 86 The word "biomarker" was first used in the literature in the late 1970s; in particular, it appeared in the context of an article whose authors were measuring the levels of serum ribonuclease in oncologic patients with multiple myeloma. 87
Subsequently, during the 1990s and early 2000s, genome mapping started to drive biomarker research due to the implementation of the Human Genome Project and the subsequent price reduction of genetic analysis. 88 Biomarkers, which have become essential to "feeding" the algorithms, are quantifiable variables that can be seen either as proxies of present and future health states or as indications of the effectiveness of treatment.
The most advanced genomic analysis has led to the discovery of different types of biomarkers, which have been classified into categories based on their functions-each of which in turn relates to the future differently. For instance, in addition to the classic diagnostic biomarkers (which identify, in the present, the characteristics of a disease) and prognostic biomarkers (which forecast the probable evolution of the disease in the future), sequencing enables the identification of predictive biomarkers, or mutations that indicate the likelihood that an individual will (or will not) respond to a particular therapy. 89 Actionability 90 is based mostly on these. 91 Predictive biomarkers actually provide a very strange prediction because they say nothing about the future as such; that is, they give no indication of what will happen. 92 They indicate what can be made to happen. The presence of a predictive biomarker per se does not allow for any prognosis, because it has nothing to do with the likelihood that an individual will develop a disease. Instead, it predicts something very different: the targeted efficacy of a possible therapy-that is, of an intervention in the future 93 -if we accomplish it. If an individual has a certain predictive biomarker, it is very likely that a present action (a therapeutic measure or the administration of a targeted drug) will make therapy more effective for a specific disease.
88. Carini, Seyhan, Fidock, and van Gool, "Definitions and Conceptual Framework."
89. Helena Verdaguer, Tamara Saurí, and Teresa Macarulla, "Predictive and Prognostic Biomarkers in Personalized Gastrointestinal Cancer Treatment," Journal of Gastrointestinal Oncology 8, no. 3 (2017), 405-17. In addition to these, there are other biomarker categories. The BEST (Biomarkers, EndpointS, and other Tools) Resource, a glossary developed jointly by the FDA and the NIH with the primary aim of clarifying and harmonizing the use of terms related to biomarkers and endpoints in medical product development, recognizes seven of them (susceptibility/risk, diagnostic, monitoring, prognostic, predictive, pharmacodynamic/response, and safety biomarkers), which are not always mutually exclusive. Biomarkers are not predictive, diagnostic, and so on, per se. The same biomarker, such as a gene mutation, can be classified under one or more categories according to its significance for a purpose (for example, diagnosis, treatment, et cetera). Following the BEST Resource, mutations to the BRCA1 and BRCA2 genes can be considered as risk, prognostic, or predictive biomarkers: BRCA1/2 mutations are known to predispose a healthy individual to the risk of breast cancer, can indicate the likelihood of breast cancer recurrence in oncology patients, and can be used to identify patients who are suitable for treatments based on poly(ADP-ribose) polymerase (PARP) inhibitors.
90. Actionability itself is divided into more specific categories, such as "druggability," which refers to a situation in which an approved drug is available to act on the specific future at stake.
91. Xavier Guchet, La médecine personnalisée: Un essai philosophique (Paris: Les Belles Lettres, 2016), 34-35, 376-87.
92. Whereas the transition to modernity replaced the older medical focus on prognostication of the Hippocratic traditions with the primacy of diagnosis, in precision medicine, the focus of interest is neither the prognostic nor the diagnostic but the predictive power of medical analysis. The biomarker gives indications about the future conditioned by our action-that is, about the actionability of the future related to a given disease for a given individual. It tells us not what the future will look like but what we can do today to act on it in a specific case. Since its beginnings, medicine has been susceptible to the suspicion that its drugs are the poison, and it has long been known that many cancer therapies are not only harmful but carcinogenic. Now, not only primary therapeutic effects but also acquired resistances (that is, resistances to a particular medication that develop in response to an individual taking that same medication) can be predicted with the help of biomarkers.
The relationship to the future, in this new approach, is driven not by a statistical prognosis referring abstractly to the future of a larger or smaller population of patients but by the actionability of a prediction concerning an individual patient in a precise present; and the uncertainty refers not to the observer's degree of ignorance with respect to the future (measured by a probability) but to different tiers, at decreasing levels of clinical certainty, that correspond to the actionability of the mutation involved. At Tier 1A, an approved drug is available to act on the mutation; at Tier 1B, the drug is experimental; at Tier 2, there are drugs targeting molecules in the same biological pathway; and Tier 3 encompasses mutations "of unknown significance." 94 Actionability, in essence, puts the performativity of prediction into practice to carry out controlled and targeted intervention in the future present, and it does so with an approach that explicitly departs from the (probabilistic) attempt to describe the future and its scenarios in general. As Nicole C. Nelson, Peter Keating, and Alberto Cambrosio have argued, "intervention takes precedence over representation." 95 It is not a matter of classifying diseases according to increasingly refined nosological categories and making related prognoses; rather, it is a matter of acting directly on the development of the disease in a specific case that does not perfectly match any classification (since each individual disease has specific traits).
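The tier scheme described above amounts, in effect, to a small decision table: it maps the evidence available about a mutation to a level of actionability rather than to a probability of an outcome. A minimal sketch, purely illustrative (the tier labels follow the text; the function and parameter names are hypothetical and not part of any clinical classification standard):

```python
# Illustrative sketch of the tier-based actionability classification described
# in the text. The function and parameter names are hypothetical; only the
# tier labels (1A, 1B, 2, 3) come from the source.

def actionability_tier(approved_drug: bool,
                       experimental_drug: bool,
                       same_pathway_drug: bool) -> str:
    """Return the actionability tier for a mutation, highest evidence first."""
    if approved_drug:
        return "Tier 1A"  # an approved drug is available to act on the mutation
    if experimental_drug:
        return "Tier 1B"  # a drug exists but is still experimental
    if same_pathway_drug:
        return "Tier 2"   # drugs target molecules in the same biological pathway
    return "Tier 3"       # mutation "of unknown significance"

# Example: a mutation targeted only by an experimental drug
print(actionability_tier(False, True, False))  # prints "Tier 1B"
```

The point of the sketch is the contrast drawn in the text: the output is not a forecast of the disease's course but an index of what can presently be done about it.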

VII. SINGULARIZATION
The use of digital techniques and the new kind of data provided by genomic sequencing allow for performative predictions that can affect the configuration of the future-a future, however, that involves specific individuals rather than extended groups of patients. Actionability in medicine explores new predictive practices that are related to singularization 96 -that is, practices that are related to the second feature of algorithmic prediction mentioned above, individual reference.
The notion of singularization is part of the debate about generalization and personalization that has always accompanied the projects of what is now called "precision medicine" but was announced as "personalized medicine": the promise of offering "the right treatment for the right person at the right time." 97 Personalization was expected to overcome the current generalized "one-size-fits-all" approach, which, like all statistically based procedures, is accurate only with respect to the mean values of a population but is often ineffective or even harmful for individuals. 98 However, it quickly became clear that algorithmic personalization is not inherently personal 99 in the sense that it does not take into consideration the uniqueness and unrepeatability of individuals. Algorithms do not know and recognize people; instead, they combine a quantity of different data until a single reference is identified. Algorithms work completely anonymously and impersonally, even if they focus on a single individual.
In the medical field, this has resulted in a shift from the idea of personalization to the idea of precision and then to practices of singularization that explicitly reject the opposition between the general and the particular and that "refer, rather than to unique individuals, to a peculiar combination of a set of (in our case: mostly molecular) traits that are neither unique nor exclusive." 100 The new form of prediction works with a multiplicity of traits (gene variants) that are not unique to a patient-and, in this sense, are not individual-but whose combination specifically characterizes that patient. It is at this singular level that prediction can be actionable. Whereas randomized controlled trials (RCTs) mostly target a single disease or health condition (for example, aiming to test the average efficacy of a specific treatment for a pathology), algorithms are concerned with the uniqueness of the patients. Whereas individual heterogeneity was a hurdle for evidence-based medicine, it has become an asset for precision medicine. 101 The consequences for medical practice are extensive and fit into a debate about the discipline's identity that has remained open since modernity. The status of medicine as a science has always been challenged by its peculiar and ambiguous position between the general reference that is typical of modern science and the individual reference that is unavoidable for the clinic. 102 It was this hybrid status that had confined statistical methods to the "peripheries" of medicine-namely, to epidemiology and drug research that could not deal with case-specific judgment and therapy-since mean values and variances were of little help to the individual patient. Medicine was understood not as a science but as an art in the Greek and Latin tradition of techné: the skillful application of rules gained from experience to the individual case. When, in the early nineteenth century, Pierre Louis and Jean-Baptiste Bouillaud proposed a "numerical method" for the art of medicine, they were told that their proposal would turn medicine into a lottery. 103 Today, through algorithmic singularization, actionability is impacting the clinical art and having far-reaching effects on clinical routines, clinical trials, the relationship between pathologists and medical oncologists, and the very notion of disease. People speak of a "molecularization of medicine" 104 and of a "clinic of variants." 105 The undeniable scientific relevance of these developments has led, since the publication of a 2018 article by Alberto Cambrosio et al., 106 to talk of a progressive blurring of the border between research and the clinic.

VIII. OPACITY
Precision medicine is fundamentally molecular medicine, for it relies on the general assumptions that genotypic changes lead to phenotypic pathogenesis and that the intervention acts on the process that occurs between the two. But molecular processes are indescribably complex; indeed, they involve interactions among so many genetic factors that they cannot be deterministically calculated. 107 Their management requires processing with algorithmic tools that are themselves so complex that they are nearly impossible to understand. Algorithmic opacity appears to be the appropriate means of dealing with genetic opacity. The prerequisite of the practices of precision medicine, therefore, is the acceptance of opacity, which we have identified as the third feature of algorithmic prediction and which also affects the relationship with scientific research. Oncologists working in the new biomedical regime have to make decisions under an unprecedented condition of uncertainty, because they rely not on hypotheses that have already been tested and established by research (a "diagnostic ontology" 108) but on the opaque procedures of algorithms whose steps are not fully understood.
The actionable cues provided by the processing of genetic data lead to the implementation of interventions that generate a posteriori hypotheses-that is, the explanation for the intervention. "Biomarker-driven clinical trials," 109 for example, are not driven by a theoretical hypothesis that has been confirmed by previous clinical trials, which oncologists assume and understand; rather, they start with a specific actionable mutation and are used to design a trial that tests an experimental drug targeting that mutation. The hypothesis, if it is produced, emerges as a result of the intervention. Rightly, Nelson, Keating, and Cambrosio have cited Michel Callon's claim that, in contemporary technoscience, "we intervene in order to know, more than trying to create knowledge in order to intervene." 110

IX. CONCLUSIONS
The consequence of the introduction of algorithmic practices in the medical field is a novel form of prediction that is compatible both with the claim that it provides precise and accurate directions to manage the uncertainty of the future and with the insurmountable unknowability of the open future of modern society. The future present is unknowable because it depends on our intervention-but our intervention can act on the future present. Algorithms provide the form of anticipatory practice that enables us to do this in a nonarbitrary and controlled way and to learn from the outcome when the future becomes present.
Referring once more to Luhmann, we contend that the value of the prognoses that can be made about the open future resides not in the fact that they disclose, in advance, what will concretely happen-which is and remains impossible-but "only in the rapidity with which they can be corrected and in knowing what is important in this context. They are therefore only 'provisional' forecasts, and their value lies not in the certainty it affords, but in the rapid and specific adjustment to a reality that turns out differently than one had expected." 111 Prediction in this sense-as a contemporary modality of "historical futures"-is useful not to see the future that is not yet there, or even to produce it according to our wishes, but to be ready to change it continuously, starting from our interventions in it. The present future remains open but can be used as an orientation. With the support of tools that do not perceive contingency but can organize it and give indications for action, contingency can be used without reducing it.

ACKNOWLEDGMENT

Open access funding enabled and organized by Projekt DEAL.
111. Luhmann, "Die Beschreibung der Zukunft," 140-41. As always when it comes to contingency, negation becomes complex: by negating uncertainty-as algorithms do when claiming to eliminate it-one does not actually get certainty; rather, one acquires a task to be performed in order to get useful indications.
Konstantinos Sechidis et al., "Distinguishing Prognostic and Predictive Biomarkers: An Information Theoretic Approach," Bioinformatics 34, no. 19 (2018), 3365-76.
93. Predictive biomarkers are employed to provide conditional predictions (or projections). They are intrinsically tied to their actionability, since they indicate the modal structure of the future present as it would become when acted upon. Other biomarkers (for example, prognostic and diagnostic ones) by definition do not depend on human interventions. See S. Andrew Schroeder, "How to Interpret Covid-19 Predictions: Reassessing the IHME's Model," Philosophy of Medicine 2, no. 1 (2021), 1-7.