Ontological unpredictability: what can realists say about unpredictability, contingency and catastrophe?

This paper introduces the original research articles that constitute the present Forum issue on unpredictability, contingency and catastrophe. In doing so, it also identifies and discusses the specificity of realist approaches to the above questions. It is argued that attentiveness to the ontological dimension of (un)predictability opens promising avenues for reflexive approaches to social science and collective action.

The idea of this Forum issue emerged in Autumn 2020 among members of the Centre for Social Ontology, a group of social theorists who share an interest in realist social theory. As the catastrophic COVID-19 pandemic was still raging around the planet, we felt torn between two duties. On one hand, we felt an obligation as intellectuals and university academics to make whatever contribution we could to fighting the pandemic, our most distinctive weapons being patient reading, sound reflection, truthful discussion and careful writing. On the other hand, there was already a flood of texts dedicated to the COVID-19 pandemic from a baffling variety of perspectives, and we had reasons to suspect that, while more were yet to sprout, most would be forgotten as quickly as they had been written and published. Although these writings differed widely in their perspectives and conclusions, taken together they produced a strong collective sense that an almost unavoidable catastrophe had just happened, and that the destiny of large sections of the world population depended quite abruptly on events as contingent as the contact between a bat and a pangolin in a Chinese market. Few seemed sensitive, however, to the dialectical tension of affirming at once that the COVID-19 pandemic was quite unavoidable on the whole while arguing elsewhere that it depended on contingent events that could, by definition, have happened otherwise.

| UNPREDICTABILITY, CONTINGENCY AND CATASTROPHE IN THE SOCIAL SCIENCES
Taken together, most writings about the COVID-19 pandemic seemed to draw the following links between unpredictability, contingency and catastrophe: (a) catastrophes do happen and that is a bad thing on the whole; (b) catastrophes result from an initial set of contingent events that triggered a chain reaction of further events leading to the overall catastrophic situation; and (c) unpredictability stands in the way of keeping contingent events in check through adequate policies. A frequent conclusion from (a), (b) and (c) taken together was that we needed more powerful mathematical models for prediction along with large datasets drawn from systematic objective observation of aggregate human behaviour.¹ It is of course hard to argue against point (a) whenever 'catastrophes' are defined as classes of events or configurations that are detrimental to human flourishing in general. In such cases, the connection between catastrophe and badness becomes unquestionable. Conversely, a different characterization of catastrophes as classes of events that are bad for certain specific purposes and sectional interests (rather than for generalized eudaimonia) would open the question of whether some catastrophes might be desirable, or at least acceptable. It would also raise questions about which social groups are most or least affected by the catastrophe, and which ones are most or least asked to contribute to the social cost of avoiding or dampening the catastrophe's effects. Let us remember indeed that the French Revolution, the abolition of slavery and Hitler's demise were all described as catastrophes by conservative commentators; conversely, Naomi Klein (2007) eloquently foregrounded how disaster capitalism recasts catastrophes as commercial opportunities for Big Business. Although the question of defining a "catastrophe" is primarily terminological, its ramifications are eminently political.
Beyond the terminological issues arising from the definition of catastrophes, question (b), concerning the role of contingency, deserves further discussion and raises specific ontological considerations. Indeed, two very different understandings of catastrophes are compatible with the idea that catastrophes result from an initial set of contingent events that triggered a chain reaction of further events leading to the overall catastrophic situation. From an actualist ontological perspective (which most professional forecasters seem to share), the catastrophe is an event that follows anterior events, and the link between events consists in statistical correlations. From this perspective, there is an equivalence between knowing, explaining and predicting (cf. Porpora, forthcoming), and the future is, in principle, as predictable as the past (see Morgan, forthcoming). But, as Maccarini (forthcoming) remarks, the combination of this actualist ontology with the recognition of 'the intractable contingency of a volatile social and cultural world' (Maccarini, ibid., p. 1) leads to an "odd conclusion": because social science has proven over time its incapacity to formulate non-trivial quantitative predictions that stand the test of time, such predictive tasks should be entrusted to forecasters drawn from the natural sciences, even when the natural phenomena of interest (say, the spread of a virus) are highly influenced by social, cultural and historical mechanisms.
Fortunately, there exist alternatives to the prevalent positivist ways of theorising social phenomena, including catastrophes. From a deep realist perspective, catastrophes entail not only successions of events but also the complex entities generating these events over time and, crucially, the context in which the catastrophe unfolded. To be ontologically precise, the "context" refers to relational configurations involving material, social, cultural and agentic entities. Attending to these configurations paves the way for accounts of social phenomena that interrogate the mechanisms behind the events, and that examine how material, social and cultural contexts influence (without fully determining) the possible forms and outcomes of human agency (Al-Amoudi & O'Mahoney, 2016). As Archer (forthcoming) healthily reminds us, it is not a single type, but several ontologically different types of mechanisms that account for macrosocial change, including catastrophic instances. The erosion of contexts supportive of established routines (Rescher, 1998) is arguably one source of explanation among others. But, against functionalist sociology, we should also consider the instability of those social and cultural contexts that are NOT characterized by a logic of necessary complementarity (Archer, forthcoming). For instance, post-WWII Western societies displayed in many spheres of activity a logic of contingent complementarity that acted, according to Archer, as an enabler and catalyst for unbound social morphogenesis (see Archer, 2017).
Moreover, in line with recent developments in realist relational sociology (Donati & Archer, 2015), we should also consider how contingency depends on the specific mode of relationality that connects agents together. As Donati suggests in the present volume (Donati, forthcoming), the tendency of a catastrophe to spread is causally determined by the qualitative features of the social relations linking human agents. While certain types of (saturated) relations are conducive to containing the catastrophe, others are conducive to its exponential spread throughout the community. Of interest for the present discussion, Donati seems at least as concerned with our communities' capacity to control locally the spread of the catastrophe as with our capacity to accurately predict its future evolution so as to control it through centralized policy. And this runs against much of the doxical talk that saturated public discussions during the COVID-19 pandemic.
Donati's discussion of the (causal) significance of relationality for catastrophic spread takes us to point (c), which links unpredictability with catastrophe and contingency. The idea that unpredictability stands in the way of keeping contingent events in check through adequate policies looks relatively consensual as long as we do not ask the difficult yet necessary follow-up questions: how is successful prediction possible at all in an open system where multiple unobservable mechanisms are constantly interacting? How is successful prediction compatible with the assumption that people are free and capable of changing the context of future activities? Is it really predictions that we need to inform policies that can effectively keep catastrophes in check?
These questions have been discussed, under different guises, by and between realist social theorists over the past three decades or so (see for instance Lawson, 1985). While realists can agree that predictions formulated according to the still prevalent covering-law model are ill-founded, there remains a healthy amount of disagreement between critical realist authors on how much we can reasonably predict, for what purposes and under which conditions. One convincing answer, suggested by Porpora (forthcoming), is that we can, and indeed must, make predictions tendentially, that is, fallible predictions about what might happen should nothing be done to alter the current course of action. But this apparently simple formula bears profound implications both for the epistemic status of predictions and for their political implications. Following Porpora, the refined epistemic status of predictions means that we should reconsider them as educated guesses that are more convincing than alternatives and that are principally informed by familiarity with known mechanisms and states of affairs. Note how this conception differs from the positivist view, which depends on calculations primarily informed by mathematical wizardry, ever-increasing computational power and massive datasets about past events. But Porpora's discussion also adds a few suggestions about the desirable political usage of predictions. If predictions hold only as long as nothing is done to alter the course of events, then their usefulness for engineering social systems is limited by the fact that the prediction will start to break down the moment a consequential command is issued. The good news, however, is that realist predictions of the kind defended by Porpora can be politically useful as long as we renounce technocratic fantasies of command and control and rely instead on (tendential) predictions to imagine concrete utopias and dystopias for the sake of orienting collective action.
But the common belief that "unpredictability stands in the way of keeping contingent events in check through adequate policies" also entails great political naivety. In particular, the question we raised about who suffers most from the catastrophe and who contributes most to the bill (see point (a) above) resurfaces when we take a close look at the socio-cultural interactions (Archer, 1995) that steer social and economic policies. As Lazega argues through an important historical and sociological study of the Commercial Court of Paris (Lazega, forthcoming), a powerful group of corporate actors can instrumentalise contingency and unpredictability to take responsibility for protecting the system while accumulating a disproportionate share of unchecked power at the expense of other actors in the field.

| ONTOLOGICAL VS EPISTEMOLOGICAL UNPREDICTABILITY
As readers will appreciate, rather than adding to the (now forgotten) discussions that were on all lips and electronic screens when we decided to produce the present Forum issue, we chose to step back and reflect further and deeper on what we could add to the more fundamental discussion about our collective (in)capacity to predict future states of the world. Doing so was faithful to our broadly shared conception of philosophy as an underlabourer for social science, itself an underlabourer for emancipatory public policies. Moreover, as realists, we had a few fundamental insights that could perhaps dispel confusion in future discussions about unpredictability, catastrophe and contingency. While each of the collected papers makes a number of original and specific contributions, I would like to discuss a fundamental ontological insight they all share, and that might actually improve future discussions of the topic. This key idea can be formulated as the difference between epistemological and ontological unpredictability.
The idea is that, just like knowledge (Bhaskar, 2013), prediction has both a transitive (or epistemic) and an intransitive (or ontological) dimension, and the two should not be conflated. The epistemic dimension of unpredictability depends entirely on the limits of our knowledge. It can be illustrated with the casting of a rigged die: unknown to the observer, the die is rigged to almost always produce a six. Had the observer known that the die was rigged, they would have been better able to issue a correct prediction about the cast than in their current state of ignorance. In principle, it is possible to reduce epistemic unpredictability by improving our knowledge of the die and of the casting mechanism. Note how, if predictability were restricted to its epistemological dimension, we could assume a (closed, deterministic) world that is in principle entirely predictable but that humans fail to predict adequately just because they lack the relevant information to do so. And conversely, the idea of an entirely deterministic world only leaves room for a purely epistemic conception of prediction. From a purely deterministic perspective, the only source of unpredictability lies in our incomplete knowledge of relevant facts, and there is only a single (epistemological) dimension to unpredictability.
To be clear, critical realists have no quarrel with the idea of an epistemic dimension of unpredictability, not least because of their commitment to epistemological relativism (Bhaskar, 2009). Indeed, the latter allows realists to make a few non-trivial observations about epistemic unpredictability. Not only is epistemic unpredictability real and worthy of our consideration ("by improving our knowledge of X we might improve our capacity to predict its future activities"), but the epistemic dimension is also fundamentally inescapable. Attempts at eliminating epistemic unpredictability entirely are doomed to fail, because our knowledge can never be known to be infallible, and because whatever knowledge we hold is always historically produced from pre-existing cultural materials and in the context of power relations inherited from the past. Just as there is no single privileged asocial and ahistorical vantage point for knowledge, we should not seek one for prediction.
But critical realist philosophy does more than qualify and nuance the epistemological dimension of unpredictability; it also points to an entirely different dimension of unpredictability that could be called ontological. Just as, from a critical realist standpoint, knowledge is structured by the combination of a transitive and an intransitive dimension, so is unpredictability. Since Bhaskar, critical realist philosophy has distinguished between the transitive or epistemic object of knowledge (references to X in the mind of an observer or in the discussions of an epistemic community) and the intransitive or ontological object of knowledge (X as a referent whose reality is distinct from the system of references through which it is known to an observer or community). Similarly, a realist approach acknowledges that unpredictability is not monodimensional and entails ontological sources of unpredictability in addition to epistemic ones. In other words, the world is not only unpredictable because of our ignorance of it; it is also unpredictable because its nature is such that it sets limits on prediction. To return to the real example of the predictive confusions that surrounded the COVID-19 vaccine, epistemic unpredictability was magnified by factors that increased our ignorance of the subject matter. These factors included a general disinterest in formally recognized expertise, suspicions of political constraints weighing on scientific discourse and malicious campaigns of fake news. They also included, more insidiously, the conflation of the ontological dimension of unpredictability with the epistemic one, which led to an obsession with the accumulation of metrics about people, viruses and societies, arguably at the cost of muffling qualitative studies of people's deep beliefs and modes of relationality. But unpredictability also had an ontological dimension, because predictions were being attempted about the activities of entities that operated to some extent as open systems. For instance, to the precise extent that creating a new vaccine was an innovative activity, it was not possible to predict which (and how many) resources would be needed, or even whether the mission was possible at all.² And to the extent that members of target populations were reflexive persons who were not entirely determined by external forces, it was not entirely possible to predict the rate at which they would opt for vaccination.
But processes of scientific discovery and of individual reflexivity were not the only sources of ontological unpredictability. Equally importantly, the emergent effects of discussions happening within communities were hard to predict accurately, and it was not possible to predict how population groups would react to this or that new piece of information, or even how dominant narratives would evolve over time. As a result, most governments around the planet adopted rhetorical stances of command and control while oscillating between the threat of police brutality, imploring people to act responsibly, nudging them to do so through lock-down regulations or simply providing up-to-date information while letting the pandemic spread.
But it would be a crude shortcut to infer that, because the world entails ontological unpredictability, prediction should be dispensed with entirely. A more demanding, but also more fruitful, approach consists in asking why exactly the nature of the world makes it hard to predict, precisely so as to appreciate by the same token what aspects of that same nature can make it reasonably predictable, that is, capable of generating predictions that, for given purposes, can be considered more reliable than alternative estimates. And this is exactly the task that each of the papers in the present issue undertakes in its own way.
Jamie Morgan, for instance, goes back to fundamental metaphysical categories and interrogates the meaning ascribed to time and temporality. He fleshes out two parallel conceptions of temporality that can help illuminate debates on predictability. The first conception, which he refers to as "static theory", considers that 'the universe is spread out in four similar dimensions, which together make up a unified, four planar dimensional manifold, appropriately called spacetime' (Morgan, forthcoming: pp. tbd). While this (meta)theory of time runs against many mundane observations, it is remarkably congruent with the implicit assumptions of the mainstream theory of relativity (Einstein, 1916) that continues to exert some prestige over those quantitative forecasters and applied statisticians who call for greater "scientificity" in social studies (Clarke & Primo, 2012). Under this static conception of time, there is perhaps room for epistemological unpredictability but not for ontological unpredictability, because 'the significant claim is that all time has been created and exists - what we think of as the past, present and future. As a four-dimensional entity, the real me is located across all points in spacetime where I exist, and I extend not just in space but through time' (Morgan, forthcoming). In a world where all time has already been created, "past" and "future" states of affairs are cast in ontological symmetry. The only differences between claims about the future and claims about the past can be epistemological but not ontological: it is the nature of our cognitive access to the world that is at stake, not the (evolving) nature of a world whose yet indeterminate potentialities get actualized through time in one way at the exclusion of others.
In contradistinction, Morgan sketches a "dynamic theory of time" where 'the universe is spread out in the three dimensions of physical space, and time, like modality is a completely different kind of dimension from the spatial dimensions' (Morgan, forthcoming: p. tbd). Morgan suggests, and I agree with him, that CR ontology presents more affinities with a dynamic theory of time. Rather than thinking of past, present and future as locations over a world-line, we should perhaps think of them as modalities of being. In Bhaskar's words (cited by Morgan), 'The reality of the past is that of the existentially caused and determinate (where caused means produced and it is the case that a thing can be determinate if not determinable); that of the present of the (indefinitely extendable) indeterminate moment of becoming; and that of the future, that of more or less shaped (conditioned, circumscribed, grounded) mode of possibility of becoming' (Bhaskar, 1993, p. 210, cited in Morgan, forthcoming: pp. tbc). To return to our discussion of unpredictability, we could say that while the past is uncontentiously subject to considerable epistemological unpredictability, a correct understanding of the future makes it, by definition, subject to both epistemological and ontological unpredictability.
As we hinted above, Porpora's contribution attempts to salvage a concept of prediction that is nonetheless 'strong enough to act on' (Porpora, forthcoming: 23). He does so in dialogue with extant critical realist critiques of predictive practices as they are unquestionably performed by strategic agents and scientists of various guises. Porpora's conception of prediction acknowledges both epistemic and ontological unpredictability. But it also seeks to move beyond the hasty conclusion that the combination of ontological and epistemological unpredictability makes the world fundamentally unpredictable. That conclusion is hasty because the world does not consist in a juxtaposition of entirely closed systems (e.g. scientific labs) along with entirely open systems (e.g. the city around the lab). A more realistic assumption is that of a continuum of more and less closed systems. A system can be said to be relatively more open when multiple mechanisms interact in complex ways, and it can be said to be relatively more closed the more 'a stable pattern derives from the dominant mechanisms we can identify' (Porpora, forthcoming: 22). Ultimately, perfect closure is an ideal type where both ontological and epistemological unpredictability can be kept in check. But while perfect closure is neither obtainable nor desirable in the context of human societies, some degree of closure can sometimes be reasonably assumed and can serve as a basis both for fallible tendential predictions and for fallible collective actions. Thus, while it may not be possible to predict precisely which variants of the virus will spread, it is nevertheless reasonable to predict that the development of a vaccine will act tendentially to slow down the pandemic. Notice how the value of prediction lies less in its oracle-like capacity to tell an already-written future (in static time) and more in its capacity to inform and organize human actions that will eventually make the prediction false. In this sense, the important and ambiguous ceteris paribus clause can usefully be replaced with a more politically laden unless we do something about it clause.
The question of what we do, if anything, about the future is at the heart of Maccarini's contribution to the present volume. Following a brief review of sociological stances towards the future, Maccarini identifies a promising trend of hybridization between sociology's representational and performative functions. In other words, while it is already commonly accepted that the work of sociologists consists in representing current tendencies, Maccarini adds that sociologists are particularly well placed, and have indeed started, to articulate imaginable futures (e.g. through scenario analysis) and, crucially, to influence (performatively) the production of future outcomes. To return to our discussion of ontological unpredictability, it is worth highlighting that a purely epistemological concept of unpredictability makes sociology's performative efforts futile because of the implicitly deterministic worldview it presupposes. Conversely, it is because of real contingency and the associated ontological unpredictability that sociologically informed policies stand a chance of making a positive difference.
The capacity to make a difference in spite of the social world's relatively high level of contingency is a notoriously difficult question for social theorists (see Rescher, 1998). And yet, in her contribution to the present volume, Margaret Archer tackles this difficult problem head-on. It should perhaps be mentioned at this point that, most unfortunately, Archer's life ended before she had a chance to revise the manuscript for publication. But although the published paper is perhaps less focused and streamlined than the final paper would have been in different circumstances, it contains so many important and original ideas that it is definitely worth reading in its current form. Reviewing Rescher's (1998) distinguished book on contingency, Archer remarks that contingency is all too commonly attributed to the inventiveness of human agency, which leaves aside contingencies generated by macro-institutional logics, such as the logic of contingent complementarity that has arguably fuelled both the explosion of social and cultural diversity characteristic of Late Modernity (Archer, 2017) and the accelerated morphonecrosis of many a social form (Al-Amoudi & Latsis, 2015). Moreover, between the macro-sociological level of institutional logics and the individual level of human reflexivity, Archer points to a potentially fruitful topos for future research on contingency: the site of the rich and complex relationships linking relational subjects engaged in the production of relational goods. While routine action remains a (treacherously?) obvious mechanism for explaining social stability, there is little doubt that in contemporary contexts of contingent complementarity, other mechanisms such as reflexive relational steering will account for social stabilization.
The type of relations linking agents occupies a central place in Donati's article. Like most powerful ideas, his basic idea is very simple: the capacity of a community to halt the (exponential) epidemic spread of a catastrophe depends directly on a characteristic of agents' relationality. When agents are involved in saturated relationships with which they are sufficiently satisfied, they do not feel a compulsion to create ever new relationships and are thus more amenable to the degree of social distancing needed to keep the rate of infection below the threshold of exponential diffusion. Conversely, the more agents are involved in unsaturated relationships that are not satisfying enough to make social distancing a liveable reality, the more difficult it is to obtain the appropriate level of social distancing. By asking how we can act on the inflection point of catastrophic spread, Donati defends two important theses. Firstly, while catastrophes can be acted on relationally thanks to the contingency of the social world, some social sites and moments are more propitious than others. In particular, inflection points correspond to the kairos, the moment when decisive action bears the greatest consequences. Secondly, Donati's paper affirms, against dominant technocratic views, that civil society has a key role to play in situations of catastrophic spread. Beyond technocratic authoritarian punishments and nudges relative to social distancing, agents of civil society can also contribute to more resilient societies by encouraging the production of saturated rather than unsaturated social relationships.
Finally, Lazega's account is exemplary of how multiplex network analysis can be combined with a realist M/M framework (Archer, 1995) to generate deeply insightful analyses of how contingency, unpredictability and catastrophe can be instrumentalised by definite groups to consolidate their powers at the expense of other social actors. The resulting analysis allows him to delineate the following process of what could be called accumulation through catastrophization. In time T1 (initial social and cultural context): some potentially occurring feature of social life is (hegemonically) recognized as contributing to a catastrophe for society. This was the case, for instance, with commercial bankruptcy, which had been viewed since the Middle Ages not only as a behaviour that weakens the whole system of commercial exchanges but also as a capital crime and sin warranting the heaviest economic, social and corporeal punishments. In time T2-T3 (socio-cultural elaboration): a group of business leaders (particularly industrial entrepreneurs and bankers) argued convincingly that the economic system had become so unpredictable, because of globalization and deregulation, that bankruptcy could no longer be considered a moral failure of the entrepreneur. What was needed, therefore, was on one hand a system of regulations that mitigated the effects of unavoidable and unpredictable bankruptcies, and on the other hand an organization (the Commercial Court of Paris) that would keep in check the potentially disastrous consequences of decriminalizing bankruptcy. In time T4 (new social and cultural context): the Commercial Court of Paris operates with almost uncontested legitimacy. The sub-group of bankers that controls it can also temper the effects of bankruptcy on the banking system by selecting upstream (as bankers) which investments shall be funded and by deciding downstream (as commercial judges) how assets shall be redistributed following a bankruptcy. As Lazega and others have noted, the financial system's rescue happened at the expense of other stakeholders, especially workers and taxpayers.
Because of its conceptual reflexivity and clear methodology, Lazega's study has the potential to inspire many other studies of the emergence of (borderline) organizations that justify excessive prerogatives in the name of catastrophe.

| IMPLICATIONS FOR FUTURE RESEARCH AND POLICY
The elision of the ontological dimension of unpredictability has distorted our understanding of the future. This distortion is reflected in the ontological oscillations, tensions and contradictions between technocratic claims that the future is engineerable (frequent among policy-makers) and scientistic assumptions (frequent among professional forecasters) that the future is already defined in space-time (Morgan, forthcoming). Within the sphere of policy-making, it could be further remarked that this ontological oscillation translates into a disconnection between, on one hand, a rhetoric of control and confidence and, on the other hand, haphazard political decisions that kept shifting from providing up-to-date information while letting the pandemic spread to imposing constraining lock-downs, and from imploring people to act responsibly to threatening them with police brutality. But it is not sufficient to claim that we need to replace discredited positivist approaches with realist approaches; we also need non-trivial suggestions of how this can be achieved in future research in the social sciences.
Looking at this Volume's articles together, let us summarize a few key principles.
It should be fairly obvious that catastrophes should not be examined independently of their social, cultural and material context (Archer, forthcoming). Moreover, we should abandon functionalist assumptions of internal coherence or equilibrium regarding context (Archer, forthcoming; Lazega, forthcoming). Doing so allows us to ask the vexing questions: which social groups stand to lose most from the catastrophe? Which ones pay the highest cost for incurring the catastrophe? And which ones pay (instead) the cost of dampening or avoiding it?
After reading a previous version of the present paper, a colleague asked me about 'the lumping … of catastrophes of different magnitudes without acknowledging that they are of different magnitudes' (personal communication, 09/09/2023). I think his question is as important as it is complex. If anything, it indicates that there remains considerable scope for further realist theorizing about catastrophes, beyond the present Forum issue. That being said, critical realist social theory provides us, if not with a fully scripted answer, at least with the elements of a satisfactory response. To begin with, it can be remarked that the concept of 'magnitude' bears different meanings in the positivist and realist approaches. From a positivist perspective, magnitude refers exclusively to the aggregation of relevant quantities: numbers of dead and injured victims, destruction of assets expressed in a monetary currency, tons of oil spilled into the sea, and so on. While aggregate quantities also matter from a realist perspective, there is more to 'magnitude'. In particular, it matters which emergent social totalities are destroyed by the catastrophe and which are less deeply damaged. Whereas the logic of positivism is aggregative and concerned with the addition of individual events, the logic of realism is totalizing and concerned with the (re)production of open totalities (Lawson, 2023). To take an example from the papers of the present JTSB Forum issue, Donati's arguments about relationality and inflection points (forthcoming) remain valid whether we are talking about 1,000 or 100,000 or 100 million COVID cases spreading around the globe. Relatively independently of the aggregate number of cases, Donati's realist relational analysis tells us that we stand a better chance of containing the pandemic as long as we can preserve the totalities constituted by local networks of saturated relations. To take another example, drawn from outside the Forum issue, Jared Diamond (2005) remarked in an influential book that societal collapses are never the result of a single factor: they always involve multiple factors, including environmental damage, climate change, hostile neighbours, relations of commercial dependence, and the way each society responds to these problems. Although Diamond's reflections are formulated in terms of factors rather than in terms of structural alterations of open social totalities, his findings can perhaps be (re)interpreted from a realist perspective. Thus we learn that, relatively independently of aggregate quantities, catastrophes fail to reach sufficient magnitude to produce full societal collapse whenever only a single social sphere is destroyed or highly damaged while the others can still be relied on for collective activity and remedial action.
Moreover, a realist approach that takes ontological unpredictability seriously is likely to pay attention to ontologically heterogeneous sources of contingency. While agential creativity and disruptions of routine contexts are commonly considered, a thoroughly realist discussion also highlights further sources of contingency, such as predominant institutional logics (Archer, forthcoming) and relational networks involving relational subjects (Archer, forthcoming; Donati, forthcoming; Lazega, forthcoming).
Finally, by recognizing the irreducibility of ontological unpredictability to epistemological unpredictability, we also recognize that the role of social scientists cannot be purely descriptive or predictive, or even explanatory. Indeed, while intellectual speculation can in principle contribute to reducing epistemological unpredictability, its capacity to influence ontological unpredictability cannot be purely epistemic: it has to involve performative processes (Macarini, forthcoming) through which social scientists collaborate with policy-makers.
In sum, we need, and can reasonably achieve, tendential predictions that are strong enough to act on (Porpora, forthcoming). Such predictions can be achieved by abandoning the fantasy of a flat ontology and of a universal methodology fit for every social context. Instead, we may formulate fallible predictions based on researchers' familiarity with people's deep beliefs, lived experiences and modes of relationality, in addition to extant knowledge about macro-social mechanisms and meso-level institutions.