Black Swans, New Nostradamuses, Voodoo decision theories, and the science of decision making in the face of severe uncertainty



The recent global financial crisis, natural disasters, and ongoing debate on global warming and climate change are a stark reminder of the huge challenges that severe uncertainty presents in decision and policy making. My objective in this paper is to look at some of the issues that need to be taken into account in the modeling and analysis of decision problems that are subject to severe uncertainty, paying special attention to some of the misconceptions that are being promulgated in this area. I also examine two diametrically opposed approaches to uncertainty. One emphasizes that the difficulties encountered in the modeling, analysis, and solution of decision problems in the face of severe uncertainty are in fact insurmountable; the other claims to provide, against all odds, a reliable strategy for the successful handling of situations subject to severe uncertainty.

1. Introduction

Decision making under uncertainty is a central topic of interest in Operations Research (OR). This is manifested in the extensive OR literature on this subject, and perhaps most notably in the fact that it is included even in introductory textbooks to OR. As an illustration of the prominence of this topic in the discipline, consider the following quote from The Science of Better website (an INFORMS initiative):

What Operations Research is

In a nutshell, operations research (O.R.) is the discipline of applying advanced analytical methods to help make better decisions.

By using techniques such as mathematical modeling to analyze complex situations, operations research gives executives the power to make more effective decisions and build more productive systems based on:

  • More complete data
  • Consideration of all available options
  • Careful predictions of outcomes and estimates of risk
  • The latest decision tools and techniques

But it is important for OR specialists to remember that decision making under uncertainty is a topic of fundamental importance in many other disciplines – engineering, economics, finance, ecology, conservation biology, etc. – where it is studied at various levels of depth and detail. Indeed, this article is a direct outcome of my encounter over the past 7 years with the perceptions and expositions of decision making under severe uncertainty by analysts and practitioners from the disciplines of statistics, applied mathematics, ecology, conservation biology, and environmental management.

An interesting by-product of this experience has been my exposure to an “external” view on OR, which afforded me a better understanding of the difficulties involved in the dialogue between OR and other disciplines.

Given the limits on space, I shall discuss here only what I consider to be the most fascinating part of my experience, namely my encounter with the use of non-probabilistic models for robust decision making in the face of severe uncertainty in these disciplines. But, to set the scene, consider this pre-OR commentary on the predictions of future events by Miguel de Cervantes Saavedra (1547–1616). The text in the square brackets is mine:

[Inn keeper:] “He also has with him a monkey with the rarest talent ever seen among monkeys or imagined among men, because if he's asked something, he pays attention to what he's asked, then jumps onto his master's shoulders and goes up to his ear and tells him the answer to the question, and then Master Pedro says what it is; he has more to say about past things than about future ones, and even though he isn't right all the time, he is not wrong most of the time, so he makes us think he has the devil in his body.”


[Don Quixote:] “Señor Soothsayer, can your grace tell me che pesce pigliamo? What will become of us? …” [Señor Soothsayer:] “Señor, this animal does not respond or give information about things to come; about past things he knows a little, and about present ones, a little more.” “By God”, said Sancho, “I wouldn't pay anything to have somebody tell me what's already happened to me! Who knows that better than me? And it would be foolish to pay anybody to tell me what I already know; but since he knows about present things, here's my two reales so His Monkeyness can tell me what my wife, Teresa Panza, is doing now, and how she's spending her time.”

(Cervantes, 2003, p. 624)

This may well be the origin of the Monkey Business enterprise!

The paper is organized as follows:

  • Section 'Severe uncertainty': brief discussions on “Black Swans” and “New Nostradamuses”, and a formulation of a simple model of severe uncertainty.
  • Section 'Decision model': formulation of a simple model for decision problems subject to severe uncertainty.
  • Section 'Voodoo decision theory': a discussion on Voodoo decision theory and its apparent “advantage” over conventional scientific theories.
  • Section 'Robustness against severe uncertainty': an overview of Wald's Maximin model, the conservatism of worst-case analysis, and a terse introduction to robust optimization.
  • Section 'Local robustness': formulation of a radius of stability model, discussions on its invariance property, and the reason why it is unsuitable for the treatment of severe uncertainty.
  • Section 'Case study: the campaign': some points about my experience over the past 7 years in attempting to contain the spread of Voodoo decision making in Australia.
  • Section 'Conclusions: an OR perspective'.

2. Severe uncertainty

In classical decision theory (Resnik, 1987; French, 1986), it is customary to distinguish among:

  • certainty,
  • risk,
  • uncertainty.

Knight (1921) is credited with the distinction between “risk” and “uncertainty”, his point being that under risk we know what is probable as well as the associated probabilities of the event(s) of interest, whereas under uncertainty we are ignorant of the probabilities associated with the event(s) of interest. What is more, we may not even know what is probable. For an enlightening depiction of uncertainty, consider this statement by Keynes (1937, pp. 213–214):

By “uncertain” knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.

The aspiration to faithfully capture the nature of uncertainty and to differentiate between degrees thereof is reflected in the long list of terms that are habitually used to describe it: strict, hard, severe, fundamental, true, deep, and so on.

I call attention to the fact that in this discussion I focus on situations where we know that an event or events is/are probable, but we have no clue as to the likelihood of the probable event(s) in question. I shall refer to this kind of uncertainty as severe. The question is then: how are we to model, analyze, and solve decision problems that are subject to severe uncertainty understood in these terms?

But, before I address this question from an OR perspective, let us quickly examine two – much discussed – diametrically opposed views on the treatment of severe uncertainty.

2.1. Black swans

In his best-selling book The Black Swan: The Impact of the Highly Improbable, Nassim Taleb (2007) contends that true uncertainty is manifested in what he calls Black Swans. He defines a Black Swan as a highly improbable event with three characteristics:

  • It is totally unpredictable.
  • Its impact is massive.
  • It is amenable to explanation, after the fact, so that in retrospect it appears predictable, not random.

Taleb maintains that Black Swans have major impacts on the lives of individuals and the evolution of organizations; indeed, they shape the very course of history. However, because of their distinctive characteristics (as “rare events”), they are outside the purview of formal mathematical treatment. His criticism of methods and models that are the staple fare of the OR curriculum (e.g. classic portfolio analysis) has no doubt infuriated many OR specialists.

Taleb's thesis is that the reliance on such formal models has, over the years, inculcated a false sense of security in decision makers (DMs) in the financial sector to the great detriment of the global financial system. Thus, progress in this area will be achieved only if it is acknowledged that … we do not know how to deal with Black Swans. The following is a typical – extremely mildly put – statement by him:

Experts and “Empty Suits”

The inability to predict outliers implies the inability to predict the course of history, given the share of these events in the dynamics of events. But we act as though we are able to predict historical events, or, even worse, as if we are able to change the course of history. We produce 30-year projections of social security deficits and oil prices without realizing that we cannot even predict these for next summer—our cumulative prediction errors for political and economic events are so monstrous that every time I look at the empirical record I have to pinch myself to verify that I am not dreaming. What is surprising is not the magnitude of our forecast errors, but absence of awareness of it.

Taleb (2007, p. xx)

Taleb's recipe for living with Black Swans is to plan our activities so as to minimize their impact. He enumerates 10 principles for building systems that are robust to Black Swan Events. The short version is as follows:

  1. What is fragile should break early while it is still small. Nothing should ever become too big to fail.
  2. No socialization of losses and privatization of gains.
  3. People who were driving a school bus blindfolded (and crashed it) should never be given a new bus.
  4. Do not let someone making an “incentive” bonus manage a nuclear plant – or your financial risks.
  5. Counterbalance complexity with simplicity.
  6. Do not give children sticks of dynamite, even if they come with a warning.
  7. Only Ponzi schemes should depend on confidence. Governments should never need to “restore confidence.”
  8. Do not give an addict more drugs if he has withdrawal pains.
  9. Citizens should not depend on financial assets or fallible “expert” advice for their retirement.
  10. Make an omelette with the broken eggs.

The full version can be found at the web site of Taleb's (2005) book Fooled by Randomness. So the question is: can OR offer tools capable of coping with Black Swans?

2.2. New Nostradamuses

At the other end of the spectrum, we find scholars who seem to be unruffled by the challenges presented by severe uncertainty. Indeed, the confidence with which they pronounce on the outcome of (specific) future events gives the impression that the severity of the uncertainty – surrounding the situations about which their predictions are made – does not present a difficult problem. I shall briefly mention two examples.

According to the Associated Press (March 4, 2009):

… President Barack Obama will order martial law this year, the U.S. will split into six rump-states before 2011, and Russia and China will become the backbones of a new world order …

This prediction was made by Igor Panarin, Dean of the Russian Foreign Ministry diplomatic academy, a regular on Russia's state-controlled TV channels, a former spokesman for Russia's Federal Space Agency and reportedly an ex-KGB analyst. Regarding the scientific basis of these predictions (ibid.):

… Panarin didn't give many specifics on what underlies his analysis, mostly citing newspapers, magazines and other open sources. He also noted he had been predicting the demise of the world's wealthiest country for more than a decade now …

The second example has a definite OR flavor in that the predictions are claimed to be based on a well-known, well-established OR tool: Game Theory. On the website we read:

… Bruce Bueno de Mesquita is a political scientist, professor at New York University, and senior fellow at the Hoover Institution. He specializes in international relations, foreign policy, and nation building. He is also one of the authors of the selectorate theory.

He has founded a company, Mesquita & Roundell, that specializes in making political and foreign-policy forecasts using a computer model based on game theory and rational choice theory. He is also the director of New York University's Alexander Hamilton Center for Political Economy.

He was featured as the primary subject in the documentary on the History Channel in December 2008. The show, titled Next Nostradamus, details how the scientist is using computer algorithms to predict future world events …

In a recent book, Bueno de Mesquita (2009) explains how we can see and shape the future using the logic of brazen self-interest within a game theoretic framework. The following quote is taken from the transcript of a TED.TV lecture (February 2009) entitled Bruce Bueno de Mesquita predicts Iran's future:

OK, so I'd like you to take a little away from this. Everything is not predictable, the stock market is, at least for me, not predictable, but most complicated negotiations are predictable. Again, whether we're talking health policy, education, environment, energy, litigation, mergers, all of these are complicated problems that are predictable, that this sort of technology can be applied to. And the reason that being able to predict those things is important, is not just because you might run a hedge fund and make money off of it, but because if you can predict what people will do, you can engineer what they will do. And if you engineer what they do you can change the world, you can get a better result. I would like to leave you with one thought, which is for me, the dominant theme of this gathering, and is the dominant theme of this way of thinking about the world. When people say to you, “That's impossible,” you say back to them, “When you say “That's impossible,” you're confused with, “I don't know how to do it.” Thank you.

In short, Bueno de Mesquita's thesis is that using game theory he can predict the results of complicated conflicts to thereby engineer the future. A critique of Bueno de Mesquita's theories can be found on my website.

For the purposes of this discussion, it suffices to point out that Bueno de Mesquita does not disclose the details of the model that he deploys to predict the future so that we have no idea as to the specifics of how he incorporates severe uncertainty into his analysis. All we know is that his analysis is based on rational choice theory (Arrow, 1987) and expected utility theory (Von Neumann and Morgenstern, 1944), which, of course, exposes his theory to severe criticism based on the extensive theoretical and empirical research on theories of bounded rationality (Simon, 1984; Rubinstein, 1998; Kahneman, 2003).

2.3. A simple model of severe uncertainty

To prepare the ground for a quantitative analysis of decision problems subject to severe uncertainty, consider the following abstract uncertainty model, consisting of three elements:

  • Uncertainty space, U.
    This is the set of possible/probable values of a parameter of interest, u. Given that the uncertainty is severe, this set can be vast.
  • u* ∈ U: the “true” value of u.
    As this value is subject to severe uncertainty, all we know about it is that it is an element of U.
  • ũ: a point estimate of u*.
    Given that the uncertainty is severe, we assume that ũ is a poor indication of u*, meaning that it is likely to be substantially wrong.

Note then that the severity of the uncertainty is manifested in three properties of this simple model:

  • The uncertainty space U can be vast (e.g. unbounded).
  • The point estimate ũ is of extremely poor quality (likely to be substantially wrong).
  • The model is devoid of any likelihood structure.

The third property implies that there is no reason to believe that u* is more/less likely to be in any one particular neighborhood of U than in another. Specifically, there is no reason to believe that u* is more/less likely to be in the neighborhood of the point estimate ũ than in the neighborhood of any other point in U.
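As a concrete, deliberately naive illustration, the three-element model can be sketched in code. The class name and the toy uncertainty space and estimate below are my own hypothetical choices, not part of the model's formal statement; the point of the sketch is what is absent, namely any likelihood structure:

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass(frozen=True)
class SevereUncertaintyModel:
    """Hypothetical sketch of the three-element uncertainty model.

    Deliberately carries NO likelihood structure: it records only the
    uncertainty space U and a point estimate of u*. The true value u*
    is unknown, so it is not (and cannot be) a field of the model.
    """
    space: Iterable      # U: set of possible/probable values of u (can be vast)
    estimate: float      # ũ: a point estimate of u*, assumed to be poor

# A toy instance: u is known only to lie somewhere in {0, 1, ..., 99};
# the estimate 10.0 enjoys no special likelihood status within U.
model = SevereUncertaintyModel(space=range(0, 100), estimate=10.0)
print(model.estimate in model.space)  # True: ũ is itself an element of U
```

Note that nothing in this structure lets us weight one neighborhood of U above another, which is precisely the third property above.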

Let us now incorporate this uncertainty model into a simple, abstract decision model that will provide the framework for the discussions in the remaining sections of this article.

3. Decision model

In conjunction with the uncertainty model discussed above, consider the following generic decision model: a DM is required to select a decision d ∈ D that is “best” with respect to a given performance function f. Formally, D is a given set and f is a function on D × U with values in some set V. We refer to D as the decision space.

So formally, the decision problem under consideration can be stated as follows:

Decision problem:

Select the best d ∈ D with respect to the performance function f, given the uncertainty model (U, u*, ũ).

Clearly, in its present formulation, the decision problem is not sufficiently defined. This is so because in the absence of preference criteria stipulating the ranking of decisions, it is impossible to determine which decision can be deemed “best.”

Suppose then that to get around this difficulty we consider tackling the problem through its parametric counterpart. Consider then these two (related) problems:

Problem P:              z* := max_{d∈D} g(d)

Problem P(u), u ∈ U:    z*(u) := max_{d∈D} h(d; u)

where g and h are real-valued functions on D and D × U, respectively, and the familiar “; u” notation is used to indicate that in the framework of Problem P(u), the construct u represents a given parameter, not a decision variable. Let D* denote the set of optimal decisions to Problem P and let D*(u) denote the set of optimal decisions for Problem P(u), u ∈ U.

Again, we have hit a snag because the (fundamental) difficulty associated with decision-making problems under uncertainty is that, except for some degenerate cases, there is no d ∈ D such that d ∈ D*(u) for all u ∈ U. So, formulating the original decision problem in terms of the parametric problem, namely Problem P(u), does not take us very far as it has made no inroads into the ambiguity in the term “best” in the original problem statement.
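The snag can be made concrete with a toy discrete instance (the sets D and U and the payoffs of h below are hypothetical): each u ∈ U induces its own optimal set D*(u), and the intersection of these sets is typically empty, so no single decision is optimal for every realization of u:

```python
# Toy instance of Problem P(u): maximize h(d; u) over d in D, for each u in U.
D = ["d1", "d2"]
U = ["u1", "u2"]
h = {("d1", "u1"): 5, ("d1", "u2"): 1,
     ("d2", "u1"): 1, ("d2", "u2"): 5}

def optimal_set(u):
    """D*(u): the decisions attaining max over d in D of h(d; u)."""
    best = max(h[d, u] for d in D)
    return {d for d in D if h[d, u] == best}

per_u = {u: optimal_set(u) for u in U}   # D*(u) for each u
common = set(D)
for u in U:
    common &= per_u[u]                   # decisions optimal for EVERY u

print(per_u)   # {'u1': {'d1'}, 'u2': {'d2'}}
print(common)  # set(): no decision is optimal for all u in U
```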

Suppose then that we consider a modification in the decision problem such that it will address more directly the severe uncertainty in the true value of the parameter u. Consider then the following:

Decision problem under severe uncertainty:

Select a decision d ∈ D such that f(d, u) performs well over U given the point estimate ũ of the true value of u.

Again, the ambiguity has not been overcome because now we lack a precise definition for the criterion “well.”

Still, this formulation lends expression to an aspiration that may well be described as basic in this framework, namely that under severe uncertainty, we would most logically be motivated to identify decisions that are robust against the uncertainty in the true value of u. That is, it stands to reason that under these conditions we would seek decisions that perform well (relative to other available decisions) over the entire uncertainty space U under consideration.

But this takes us right back to where we started, namely to the difficulties in clinching a definition of a criterion that would enable us to distinguish between the quality of decisions. Because much as the severity of the uncertainty spurs us to find decisions that perform “well” – relative to other decisions – over the entire uncertainty space U, the difficulty is that generally under these conditions, no decision dominates all other decisions over the entire uncertainty space.

The implication therefore seems to be that it is hard to envisage a way of formulating such a general criterion under these conditions. Devising a criterion that would determine how “well” a decision performs over the uncertainty space U would require a more carefully targeted analysis. Namely, it would have to be worked out on a case-by-case basis to reflect the specific decision-making situation in question. We discuss this issue in Section 'Robustness against severe uncertainty'.

4. Voodoo decision theory

In stark contrast to the above deliberations – which attest to the (well-nigh insurmountable) difficulties that one has to wrestle with in the modeling and analysis of decision problems that are subject to severe uncertainty – there are methods that approach the treatment of severe uncertainty as though it were “a walk in the park.” These methods are exponents of what I call Voodoo decision theory. Their basic approach can be summed up as follows: … ignore the severity of the uncertainty.

To be clear on what I mean by the phrase Voodoo decision theory, note that as in the case of Voodoo economics, Voodoo science, Voodoo Statistics, Voodoo Mathematics, and so on, the term “Voodoo” in Voodoo decision theory is intended to designate a theory that lacks sufficient evidence or proof, is based on utterly unrealistic and/or contradictory assumptions, spurious correlations, and so on. Skyrms (1996, p. 51) is credited with coining the phrase Voodoo decision theory:

The behavior of Kropotkin's cooperators is something like that of decision makers using the Jeffrey expected utility model in the Max and Moritz situation. Are ground squirrels and vampires using voodoo decision theory?

The point about a Voodoo theory is then that it is unconstrained by universally accepted scientific conventions such as: claims must be supported by facts, evidence, proofs, demonstrations, and so on. This means that in stark contrast to conventional scientific theories, Voodoo theories basically have a free hand to claim just about anything. In other words, in the framework of Voodoo theories, “anything goes.”

And to illustrate, consider how a conventional scientific theory evaluates how the quality of the input affects the quality of a model's output, as opposed to the evaluation given by a Voodoo theory, shown in Fig. 1.

Figure 1.

Conventional vs Voodoo models.

Translating this picture to the context of decision making under severe uncertainty, note that as I point out in Sniedovich (2010), the principles that are typically contravened by a Voodoo decision theory are summed up by the following two universally accepted maxims:

  • Garbage In – Garbage Out.
  • Results are only as good as the estimate on which they are based.

Thus, a decision theory based on the precepts of conventional science would take it for granted that the results generated by a model are only as good as the estimate(s) on which the model is based. Hence, if the estimate is a “wild guess” of the true value of the parameter of interest, the results generated by the model would, by necessity, be no more than “wild guesses.”

In contrast, a method propounding a Voodoo decision theory would have no qualms to declare to the world that it is “capable” of generating reliable results even in situations where the estimates on which the model is based are “wild guesses.”

The failure to recognize the obvious connection between the poor quality of a model's input and the poor quality of the output it generates is due to another factor as well. As indicated in Sniedovich (2007, 2010), and discussed in Section 'Local robustness', this is also due to the more basic failure to recognize the vast differences between the results yielded by a local analysis as opposed to those obtained from a global analysis.

This is illustrated in Fig. 2 where a NASA image shows “… how the land surface temperature from January 25 to February 1 compared with the average mid-summer temperatures the continent experienced between 2000 and 2008. Places where temperatures were warmer than average are red, places experiencing near-normal temperatures are white, and places where temperatures were cooler than average are blue ….”

Figure 2.

Exceptional Australian heat wave, January 2009 (NASA image).

To appreciate the significance of this event, it is necessary to adopt a global view of the variations in temperature over the entire island (continent). The conclusions resulting from an examination of the area in the neighborhood of point A will be significantly different from those reached on grounds of a global analysis. And the same would be true if the analysis is confined to the neighborhood of point B or point C.

The situation is similar in the context of our Decision Problem Under Severe Uncertainty. Confining the analysis to the neighborhood of a given point in U, rather than evaluating the performance of decisions over the entire uncertainty space U, would yield unreliable results: a decision that performs well in the neighborhood of a given point in U may not perform well over U, and vice versa. But this established fundamental of sound decision making under severe uncertainty is openly violated by methods based on Voodoo decision theories.
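The local/global contrast can be illustrated with a deliberately contrived sketch (the two decisions, their payoffs, and the discretized space below are all hypothetical): a decision that looks unbeatable in the neighborhood of the estimate turns out to be the worst performer over U as a whole:

```python
# Two hypothetical decisions evaluated over a discretized U = [0, 10]:
# f(d, u) is the payoff of decision d under parameter value u.
def f(d, u):
    if d == "local_star":            # excellent near u = 2, useless elsewhere
        return 10.0 if abs(u - 2.0) <= 0.5 else 0.0
    return 3.0                       # "steady": mediocre but uniform over U

U = [i / 10 for i in range(0, 101)]  # discretized uncertainty space [0, 10]
estimate = 2.0                       # the point estimate (a "wild guess")
neighborhood = [u for u in U if abs(u - estimate) <= 0.5]

def worst_case(d, region):
    """Worst payoff of decision d over the given region of U."""
    return min(f(d, u) for u in region)

# Local analysis (around the estimate) and global analysis disagree:
print(worst_case("local_star", neighborhood))  # 10.0: looks robust locally
print(worst_case("steady", neighborhood))      # 3.0
print(worst_case("local_star", U))             # 0.0: fails globally
print(worst_case("steady", U))                 # 3.0
```

The local analysis crowns "local_star"; the global analysis reverses the ranking, which is exactly the unreliability described above.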

Another point of distinction – as indicated in the above informal “definition” of Voodoo theories – between scientific and voodoo methods is that the latter take no notice of the Principium Contradictionis, which dictates that

  • A theory must steer clear of self-contradiction!

To illustrate, methods based on the precepts of conventional science would not even contemplate attributing conjointly these two properties to the same model:

  1. The uncertainty model is devoid of any likelihood structure. In particular, there is no reason to believe that the true value of u is more/less likely to be in the neighborhood of a point u′ ∈ U than in the neighborhood of another point u″ ∈ U.
  2. The values of u become increasingly unlikely as they diverge from the point estimate ũ.

But proposing such a glaringly self-contradictory characterization of one and the same model seems to present no problem for a method based on a Voodoo decision theory.

The upshot of all this is that the recipe used by methods, advancing Voodoo decision theory, for the solution of our decision problem under severe uncertainty boils down to the following:

  1. Pick a wild guess of the true value of the parameter of interest.
  2. Ignore the severity of the uncertainty, the vastness of the uncertainty space, and the poor quality of the wild guess.
  3. Conduct an analysis in the immediate neighborhood of this wild guess to seek a decision that is robust in this neighborhood.

And most interesting of all is that this recipe is being hailed as possessing considerable merit because – so the argument goes – among other things, it is singularly well suited to handle unbounded uncertainty spaces!

I hasten to add that the methods advancing Voodoo decision theory obviously do not claim, in so many words, that this is the type of recipe that they actually put forward for the treatment of severe uncertainty. Indeed, discovering that this is in fact the recipe that these methods prescribe is no easy task because of the huge incongruity between the rhetoric and the practice in these methods.

Thus, while the rhetoric describes the models that these methods propose as particularly tailored for the treatment of severe uncertainty, the truth is that in practice these models are in principle unable to accomplish this task. Consequently, unsuspecting analysts, especially those who are not conversant with decision theory, robust optimization, and so on, are led to believe that these methods indeed provide the right means for seeking decisions that are robust against severe uncertainty. I discuss this issue in the presentation How to recognize a voodoo decision theory (Sniedovich, 2009b).

In Section 'Local robustness', I discuss some of the serious issues associated with this recipe. But before I can do this, I examine how OR-oriented theories address the question of robustness against severe uncertainty, paying special attention to the difficulties encountered in this endeavor.

5. Robustness against severe uncertainty

The most common approach to modeling robustness against severe uncertainty is worst-case analysis: hope for the best, plan for the worst! Or more poetically:

The gods to-day stand friendly, that we may,

Lovers of peace, lead on our days to age!

But, since the affairs of men rest still incertain,

Let's reason with the worst that may befall.

William Shakespeare (1564–1616), Julius Caesar, Act 5, Scene 1

Thus, decision-making models exemplifying this approach pit the DM against uncertainty – represented by Nature – where the latter assumes the role of a hostile adversary. The point of this approach is that it effectively eliminates the uncertainty altogether because Nature's antagonistic attitude makes its response completely predictable. The end result is that the decision-making environment is transformed from being subject to “severe uncertainty” to being subject to “certainty.” The price tag attached to this convenience is, however, significant: policies based on “pure” worst-case analysis tend to be conservative – hence costly – due to the built-in over-protection.

It is not surprising, therefore, that over the years, a great deal of effort has gone into the development of models and methods that are worst-case oriented but at the same time seek to avoid the excessive conservatism of a “pure” worst-case analysis.

5.1. Maximin models

The classic worst-case approach to severe uncertainty is known in decision theory (Resnik, 1987; French, 1986) as Wald's (1939, 1945, 1950) Maximin paradigm. It can be stated as follows:

Rank decisions by their worst possible outcomes: select the decision the worst outcome of which is at least as good as the worst outcome of the other available decisions.

Thus, in the framework of the decision model considered in Section 'Decision model', the decision rule associated with the Maximin paradigm can be stated as follows:

Select a decision d ∈ D such that the worst value of f(d, u) over U is at least as good as the worst value of f(d′, u) over U for all other decisions d′ ∈ D.

The underlying conceptual model is that of a game between two players: the decision maker and Nature. The DM controls the decision d ∈ D and Nature controls the state u ∈ U. But in contrast to the classic two-person games in Game Theory, here the DM plays first, so that Nature knows what decision was selected by the DM before she determines the state of the system.

The familiar textbook case is that where f is a real-valued function and the DM seeks to maximize the value of f(d, u) – whereupon Nature aims to minimize this value. The mathematical transliteration of the Maximin decision rule is then as follows:

z* := max_{d∈D} min_{u∈U} f(d, u)    (MP)  (1)

where MP designates the mathematical programming format of the model. Note that if U consists of infinitely many elements, then the problem is a semi-infinite programming problem.
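For finite D and U, the Maximin rule in (1) amounts to a direct max–min computation: first the worst-case (security) level of each decision, then the decision with the best such level. The sets and payoffs in this sketch are hypothetical:

```python
# z* := max over d in D of min over u in U of f(d, u)  -- Wald's Maximin
D = ["a", "b", "c"]
U = ["u1", "u2", "u3"]
f = {("a", "u1"): 4, ("a", "u2"): 0, ("a", "u3"): 9,
     ("b", "u1"): 3, ("b", "u2"): 3, ("b", "u3"): 3,
     ("c", "u1"): 8, ("c", "u2"): 1, ("c", "u3"): 2}

# Security level of each decision: its worst possible outcome over U.
security_level = {d: min(f[d, u] for u in U) for d in D}
z_star = max(security_level.values())
d_star = max(D, key=lambda d: security_level[d])

print(security_level)   # {'a': 0, 'b': 3, 'c': 1}
print(d_star, z_star)   # b 3 : the decision with the best worst case
```

Note how the unspectacular but uniform decision "b" wins: its worst outcome (3) beats the worst outcomes of "a" and "c", even though both can do far better under favorable u.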

In cases where the performance function f represents constraints affected by u, the mathematical formulation of the worst-case paradigm would be

display math(2)

where inline image is a given subset of inline image. Alternatively, the equivalent classic Maximin format of the problem is as follows:

$$z^{*} := \max_{d \in D}\ \min_{u \in U}\ \mathbb{1}\bigl(f(d,u) \in C\bigr) \tag{3}$$

In many cases the set of feasible values of u associated with a decision d is only a subset of U, call it U(d). Therefore, it is convenient to adopt here the following formulation of the Maximin model:

$$z^{*} := \max_{d \in D}\ \min_{u \in U(d)}\ \phi(d,u) \tag{4}$$

observing that in this setup, ϕ can represent both the objective function and the constraints of an optimization problem. For example, for

$$\phi(d,u) := \begin{cases} g(d,u), & c(d,u) \in C \\ -\infty, & \text{otherwise} \end{cases} \tag{5}$$

we have

$$z^{*} := \max_{d \in D}\ \min_{u \in U(d)}\ \phi(d,u) \tag{6}$$
$$\phantom{z^{*} :} = \max_{d \in D}\ \Bigl\{ \min_{u \in U(d)} g(d,u) : c(d,u) \in C,\ \forall u \in U(d) \Bigr\} \tag{7}$$

which is the robust-counterpart of the parametric problem

$$\max_{d \in D}\ \bigl\{ g(d,u) : c(d,u) \in C \bigr\}, \qquad u \in U(d) \tag{8}$$

observing that both the objective function g and the constraint depend parametrically on u. Note that an optimal solution to the Maximin problem (6)–(7), say d*, has the property that

$$c(d^{*},u) \in C, \quad \forall u \in U(d^{*}) \tag{9}$$

$$z^{*} = \min_{u \in U(d^{*})}\ g(d^{*},u) \tag{10}$$

The clause "∀u ∈ U(d)" in Equations (7) and (9) is a reminder of the worst-case orientation of the Maximin paradigm.

More details on the modeling aspects of Wald's Maximin model that are relevant to this discussion can be found in Sniedovich (2008a).

5.2. Conservatism of worst-case analysis

As indicated above, one of the defining features of models giving expression to situations that are classified as "subject to severe uncertainty" is that the uncertainty space U can be vast, even unbounded. This means that tracking down a policy that is driven by the worst-case concept, where the worst-case analysis is conducted over a vast uncertainty space, can yield highly conservative – hence costly – solutions. This characteristic has, over the years, gained the Maximin paradigm the reputation (or notoriety) of being excessively conservative (Resnik, 1987; French, 1986; Bertsimas and Sim, 2004).

But the point to note here is that a vast U need not automatically spell excessively conservative solutions. The fact is that there are cases where the Maximin model is devoid of conservatism, the reason being that in such cases the term "worst case" does not signify an extreme event or a catastrophe. What is more, the Maximin model is sufficiently flexible to accommodate situations where the conservatism of the worst-case approach can be mitigated by means of a (controlled) relaxation of the optimality and/or feasibility conditions. The next two examples illustrate these points.

5.2.1. Example

Consider the case where a robust solution (decision) is sought for the parametric constraint

$$c(d,u) \in C, \quad u \in U \tag{11}$$

where C is a given set, c is a function on D × U, and U is a subset of ℝⁿ for some n ≥ 1.

Ideally, we would seek a decision d ∈ D such that c(d,u) ∈ C for all u ∈ U. However, in the absence of such a super-robust decision, it only makes sense that we should search for a decision that satisfies the constraint on a large subset of U. Consider then the following: define

$$U(d) := \{u \in U : c(d,u) \in C\}, \quad d \in D \tag{12}$$
$$r(d) := \max\,\bigl\{\rho(V) : V \subseteq U,\ c(d,u) \in C,\ \forall u \in V\bigr\}, \quad d \in D \tag{13}$$

where ρ is a real-valued function on the power set of U such that ρ(∅) = 0 and

$$\rho(V) \le \rho(W), \quad \forall V \subseteq W \subseteq U \tag{14}$$

and view ρ(V) as the "size" of set V. Thus, by definition, r(d) is the size of the largest subset of U on which decision d satisfies the constraint c(d,u) ∈ C at each point in this subset.

Clearly, it stands to reason that we would prefer a decision whose r(d) value is large. The implication is then that the robustness issue boils down here to this:

$$r^{*} := \max_{d \in D}\ r(d) \tag{15}$$

More details on robustness models of this type can be found in Starr (1963, 1966), Schneller and Sphicas (1983), Eiselt and Langley (1990), Eiselt et al. (1998), Moffitt et al. (2008), and Sniedovich (2009a).
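To make the construction concrete, the sketch below takes ρ to be the counting measure over a finite sample of the uncertainty space, with a constraint of the form c(d,u) ≤ 0 (that is, C = (−∞, 0]); the decisions and the constraint function are invented for illustration only.

```python
# Robustness of a decision as the "size" of the largest subset of the
# uncertainty space on which it satisfies the constraint c(d, u) <= 0.
# With rho the counting measure over a finite sample of U, r(d) is simply
# the number of sampled points at which the constraint holds.

def r(d, U_sample, c):
    return sum(1 for u in U_sample if c(d, u) <= 0)

U_sample = [u / 10 for u in range(0, 101)]   # 101 points spanning [0, 10]

# Hypothetical constraint function: decision d satisfies it iff u <= d.
c = lambda d, u: u - d

decisions = [2.0, 5.0, 8.0]
best = max(decisions, key=lambda d: r(d, U_sample, c))
print(best, r(best, U_sample, c))  # the decision satisfying the constraint at the most points
```

Note that no point of U is singled out as a "worst case" in this computation, which is precisely why the latent worst-case orientation of the measure is easy to miss.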

The point to note here is that there is no trace of conservatism in this definition of robustness against the severe uncertainty in the true value of u with respect to the constraint c(d,u) ∈ C. Indeed, one would be hard-pressed to detect the latent worst-case approach here as, on the face of it, no explicit reference is made to a "worst case."

To bring out the underlying worst-case approach in this case, observe that the definition of U(d) in Equation (12) implies that U(d) is the largest subset of U whose elements satisfy the constraint c(d,u) ∈ C. Hence,

$$r(d) = \rho(U(d)), \quad d \in D \tag{16}$$

$$\chi(d,u) := \begin{cases} 1, & c(d,u) \in C \\ 0, & \text{otherwise} \end{cases} \tag{17}$$

and therefore

$$r(d) = \max\,\Bigl\{\rho(V) : V \subseteq U,\ \min_{u \in V}\ \chi(d,u) = 1\Bigr\} \tag{18}$$

The point here is, as implied by the indicator function (17), that in the case of a constraint, the worst case amounts to a violation of the constraint. So, only two cases are possible:

  • Best case: the constraint is satisfied.
  • Worst case: the constraint is violated.

And the conclusion to be drawn from this argument is that the vastness of the uncertainty space U has no bearing whatsoever on the "degree" of the worst case. That is, the worst case does not become worse if we increase the size of U. Consequently, robustness to uncertainty with respect to a constraint avoids the conservatism that can characterize solutions to other types of problems. The only sense in which this worst-case approach can be seen as conservative is in its prescribing that a single violation of the constraint – at one point in U – renders the decision unacceptable. In the example of Section 6.2, I examine another (commonly used) robustness measure that has this property.

The next example illustrates the notion of globalized robustness proposed by Ben-Tal et al. (2006b, 2009a) that moderates the conservatism of the worst-case approach to violation of constraints.

5.2.2. Example

Consider the parametric optimization problem

$$\max_{d \in D}\ \bigl\{ g(d,u) : c(d,u) \le 0 \bigr\}, \qquad u \in U \tag{19}$$

where c is a real-valued function on D × U. If the constraint c(d,u) ≤ 0 is "hard," we may have to consider the following robust-counterpart version of the problem:

$$z^{*} := \max_{d \in D}\ \Bigl\{ \min_{u \in U} g(d,u) : c(d,u) \le 0,\ \forall u \in U \Bigr\} \tag{20}$$

Now, suppose that this problem has no feasible solution, or that the value of z* is unacceptably small, or that the constraint c(d,u) ≤ 0 is "soft" with respect to u.

Ben-Tal et al. (2006b, 2009a) suggest that in such a case we consider replacing the constraint c(d,u) ≤ 0 by

$$c(d,u) \le \gamma \cdot \mathrm{dist}(u,N), \quad \forall u \in U \tag{21}$$

where γ ≥ 0 is a control parameter, N is a given subset of U, and dist(u, N) stipulates the distance – according to some metric – from u to N, with the property that dist(u, N) = 0 for all u ∈ N. Here, N represents the "normal range" of values of u. Note that this means that any d ∈ D that satisfies this constraint also satisfies the constraint c(d,u) ≤ 0 for all u ∈ N. Thus, these constraints insist that any admissible decision d must satisfy the constraint c(d,u) ≤ 0 over the "normal range" N.

Also note that (21) allows d to violate the constraint c(d,u) ≤ 0 for values of u in U∖N, provided that the violation at u does not exceed the critical value γ·dist(u, N). Finally, if γ = 0, then (21) is equivalent to c(d,u) ≤ 0, ∀u ∈ U.

The globalized robust counterpart of (19) is then as follows:

$$z^{*} := \max_{d \in D}\ \Bigl\{ \min_{u \in U} g(d,u) : c(d,u) \le \gamma \cdot \mathrm{dist}(u,N),\ \forall u \in U \Bigr\} \tag{22}$$

which is the MP representation of the Maximin model whose classic format is as follows:

$$z^{*} = \max_{d \in D}\ \min_{u \in U}\ \phi(d,u), \qquad \phi(d,u) := \begin{cases} g(d,u), & c(d,u) \le \gamma \cdot \mathrm{dist}(u,N) \\ -\infty, & \text{otherwise} \end{cases} \tag{23}$$

More details on globalized robustness models can be found in Ben-Tal et al. (2006b, 2009a, 2009b).
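The following sketch illustrates the globalized robustness constraint in one dimension, using an invented constraint function, normal range, and grid; it is a feasibility check in the spirit of (21), not an implementation of Ben-Tal et al.'s method.

```python
# Globalized robustness check: the constraint c(d, u) <= 0 must hold on the
# "normal range" N, and outside N the violation may not exceed
# gamma * dist(u, N). Everything below is a hypothetical 1-D sketch.

def dist(u, N):
    """Distance from u to the interval N = (lo, hi); zero inside N."""
    lo, hi = N
    return max(lo - u, 0.0, u - hi)

def globally_robust(d, U_sample, c, N, gamma):
    return all(c(d, u) <= gamma * dist(u, N) for u in U_sample)

U_sample = [u / 10 for u in range(-100, 101)]   # grid over U = [-10, 10]
N = (-1.0, 1.0)                                  # hypothetical normal range of u
c = lambda d, u: abs(u) - d                      # hypothetical constraint function

# With d = 1: c(d, u) <= 0 holds on N, and for |u| > 1 the violation |u| - 1
# equals dist(u, N), so the relaxed constraint holds with gamma = 1 but the
# strict worst-case constraint (gamma = 0) fails.
print(globally_robust(1.0, U_sample, c, N, 1.0))   # True
print(globally_robust(1.0, U_sample, c, N, 0.0))   # False
```

The parameter γ thus gives explicit, quantitative control over how much of the "pure" worst-case conservatism is relaxed outside the normal range.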

5.3. Robust optimization

The first stirrings of the field of robust optimization can be dated to the late 1960s (Gupta and Rosenhead, 1968; Rosenhead et al., 1972), but its roots go further back to the founding of classical decision theory (Wald, 1939, 1945, 1950; Savage, 1951, 1954; Hurwicz, 1951) in the 1950s. Indeed, Wald's Maximin model is one of the primary tools used in robust optimization for the treatment of severe uncertainty.

Since its introduction by Wald (1939), the Maximin model has been adopted to serve this purpose by several disciplines such as OR, control theory, statistics, and economics. However, until recently, its application has been seriously constrained by the difficulties involved in the solution of Maximin problems, especially those that are infinite-programming problems.

As indicated by Mulvey et al. (1995), Du and Pardalos (1995), Kouvelis and Yu (1997), Rustem and Howe (2002), and Ben-Tal et al. (2006a, 2009a), the field of robust optimization has taken off in the past 15 years, with progress in two interrelated areas:

  • Formulation of new robustness models.
  • Development of powerful algorithms for the solution of robust optimization problems.

The links between these two areas are important for the simple reason that, more often than not, robust optimization problems prove far more difficult to solve than their parametric counterparts. So, the "art of modeling" in robust optimization is manifested in the ability to formulate a parametric optimization model such that its associated robust optimization problem will be amenable to an efficient solution by available algorithms. A good example of this relationship is the development of the globalized robustness approach (Ben-Tal et al., 2006a, 2009a). For it is one thing to formulate a robustness model such as (22), but it is quite another to solve large-scale problems of this type.

6. Local robustness

There are many situations where robustness is sought against small perturbations in a nominal value of a parameter (data). For instance, as indicated by Ben-Tal and Nemirovski (2000, p. 416):

In real-world applications of Linear Programming one cannot ignore the possibility that a small uncertainty in the data (intrinsic for most real-world LP programs) can make the usual optimal solution of the problem completely meaningless from a practical viewpoint.


In applications of LP, there exists a real need of a technique capable of detecting cases when data uncertainty can heavily affect the quality of the nominal solution, and in these cases to generate a “reliable” solution, one which is immuned against uncertainty.

Indeed, one of the most popular robustness models in control theory, numerical analysis, parametric programming, and sensitivity analysis is a local robustness model known universally as the radius of stability model (Wilf, 1960; Milne and Reynolds, 1962; Hinrichsen and Pritchard, 1986; Paice and Wirth, 1998; Zlobec, 2009; Sniedovich, 2010).

It concerns a system s and a parameter θ ∈ Θ whose value determines whether the system is stable or unstable. The radius of stability of such a system addresses the following practical question:

What is the radius of the largest ball centered at a given nominal point θ̃ such that the system is stable for all values of θ in this ball?

Let B(α, θ̃) denote a ball of radius α centered at θ̃ and let S(s) denote the subset of Θ consisting of all the points at which system s is stable. For instance, assume that

$$S(s) = \{\theta \in \Theta : g(s,\theta) \in C\} \tag{24}$$

for some set C ⊆ ℝ, where g(s,θ) represents a stability constraint: system s is stable at θ iff g(s,θ) ∈ C.

The radius of stability of system s at the nominal point θ̃ would be defined as follows:

Radius of stability model:

$$\hat{\rho}(s,\tilde{\theta}) := \max\,\bigl\{\alpha \ge 0 : \theta \in S(s),\ \forall \theta \in B(\alpha,\tilde{\theta})\bigr\} \tag{25}$$
$$\phantom{\hat{\rho}(s,\tilde{\theta}) :} = \max\,\bigl\{\alpha \ge 0 : B(\alpha,\tilde{\theta}) \subseteq S(s)\bigr\} \tag{26}$$
$$\phantom{\hat{\rho}(s,\tilde{\theta}) :} = \max\,\bigl\{\alpha \ge 0 : g(s,\theta) \in C,\ \forall \theta \in B(\alpha,\tilde{\theta})\bigr\} \tag{27}$$

This is illustrated in Fig. 3.

Figure 3.

Radius of stability.

However surprising this may appear, there seems to be no reference in the literature to the fact that this is a typical Maximin model, expressed here in the MP format. To wit, its classic format is as follows:

$$\hat{\rho}(s,\tilde{\theta}) = \max_{\alpha \ge 0}\ \min_{\theta \in B(\alpha,\tilde{\theta})}\ \alpha \cdot \chi(s,\theta) \tag{28}$$

$$\chi(s,\theta) := \begin{cases} 1, & g(s,\theta) \in C \\ 0, & \text{otherwise} \end{cases} \tag{29}$$

observing that in this context α plays the role of a decision variable and the ball B(α, θ̃) plays the role of the uncertainty space of decision α.

The inference to be drawn then is that Wald's Maximin model dominates the scene not only in the area of global robustness but also in the area of local robustness (Sniedovich, 2010).
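Numerically, the radius of stability is easy to approximate on a grid: it is the distance from the nominal point to the nearest sampled point at which the stability constraint fails. The stability constraint below is hypothetical.

```python
# Radius of stability on a one-dimensional grid: the distance from the
# nominal point to the nearest point at which the system is unstable.
# The stability constraint below (stable iff theta**2 <= 4) is hypothetical.

def radius_of_stability(stable, grid, nominal):
    unstable = [t for t in grid if not stable(t)]
    return min(abs(t - nominal) for t in unstable) if unstable else float("inf")

grid = [t / 100 for t in range(-1000, 1001)]   # Theta = [-10, 10], step 0.01
nominal = 0.0
stable = lambda t: t * t <= 4.0                 # stable iff -2 <= theta <= 2

print(radius_of_stability(stable, grid, nominal))  # ~2: nearest unstable point
```

Note that the computation never inspects points farther than the nearest failure from θ̃ – the rest of Θ could change arbitrarily without affecting the result, which is exactly the Invariance Theorem discussed next.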

Let us go back to the issue of robustness against severe uncertainty.

OR scholars and analysts would, in all likelihood, find it utterly incomprehensible that a local robustness model such as the radius of stability model – designed expressly to model robustness against small perturbations in a nominal value of the parameter of interest – should be used to model decision problems that are subject to severe uncertainty, where the uncertainty space is vast and the nominal value is such a poor indication of the true value of the parameter that it is likely to be substantially wrong. And yet, this is the reality in a number of disciplines, for instance: conservation biology, applied ecology, and environmental management (see discussion in Sniedovich, 2008b and the references in Moilanen et al., 2006; Rout et al., 2009; Yemshanov et al., 2010).

To make vivid to proponents of this approach the profound error in using a local robustness model, such as the radius of stability model, to determine the robustness of decisions against severe uncertainty, I devised a theorem that I call the Invariance Theorem (Sniedovich, 2007, 2010). In the case of the radius of stability model, it can be stated as follows:

6.1. Invariance theorem

Let α° be any real number such that α° > ρ̂(s, θ̃). Then, for each system s, the value of ρ̂(s, θ̃) is invariant with the value of g(s, θ) for all θ ∈ Θ such that θ ∉ B(α°, θ̃).

Figure 4 illustrates this theorem. The term No Man's Land is intended to make vivid the fact that results generated by the radius of stability model take no account whatsoever of the performance of decisions outside the ball B(α°, θ̃).

Figure 4.

No Man's Land Syndrome of the radius of stability model.

The point, of course, is that under conditions of severe uncertainty, B(α°, θ̃) can be a small subset of the uncertainty space Θ. However, since this aspect is not made sufficiently clear in Fig. 4, I had to supplement it with another picture that comes closer to illustrating the relation between the size of B(α°, θ̃) and that of Θ. This is shown in Fig. 5.

Figure 5.

The No Man's Land Syndrome of local robustness analyses.

So, how valid can results yielded by an analysis in the immediate neighborhood of a poor estimate be if the uncertainty space is vast?

6.2. Example

The objective of this example is to illustrate the obvious consequences of using a local robustness model, such as the radius of stability model, to determine the global robustness of decisions against severe uncertainty over the uncertainty space under consideration.

Consider then Fig. 6. It shows the performance of two systems s′ and s″ over the uncertainty space Θ. The performance requirement is c(s,θ) ≤ c*, so that formally we can set S(s) = {θ ∈ Θ : c(s,θ) ≤ c*}. The estimate of the true value of θ is θ̃.

Figure 6.

Radius of stability model.

The radius of stability of s′ is equal to α′=6 and that of s″ is equal to α″=5. Hence, according to the local radius of stability model, in the neighborhood of θ̃, system s′ is more robust than system s″.

Whatever one's reservations about this judgment, the fact remains that the local robustness that it yields does not apply to the performance of the systems over the entire uncertainty space Θ. Clearly, system s″ performs considerably better than system s′ over the entire vast uncertainty space, except for the small interval [3.9306, 6.4027]. Consequently, system s″ is far more robust than s′ with respect to the performance requirement c(s,θ) ≤ c* over Θ.
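This kind of ranking reversal is easy to reproduce with invented systems: the sketch below constructs two hypothetical systems (not those of Fig. 6) for which the local radius-of-stability ranking is the opposite of the global one.

```python
# Two hypothetical systems over a vast uncertainty space Theta = [0, 100]:
# s1 has the larger radius of stability at the nominal point, yet s2
# satisfies the performance requirement over far more of Theta.

grid = [t / 100 for t in range(0, 10001)]       # Theta = [0, 100], step 0.01
nominal = 10.0

stable_s1 = lambda t: abs(t - nominal) <= 6      # s1: stable only on [4, 16]
stable_s2 = lambda t: not (15 < t < 16)          # s2: unstable only on (15, 16)

def radius_of_stability(stable):
    """Local robustness: distance from the nominal point to the nearest failure."""
    unstable = [t for t in grid if not stable(t)]
    return min(abs(t - nominal) for t in unstable) if unstable else float("inf")

def global_robustness(stable):
    """Fraction of the sampled uncertainty space on which the requirement holds."""
    return sum(1 for t in grid if stable(t)) / len(grid)

# Locally s1 looks better (radius ~6 vs ~5), but globally s2 dominates
# (stable on roughly 99% of Theta versus roughly 12%).
print(radius_of_stability(stable_s1), radius_of_stability(stable_s2))
print(global_robustness(stable_s1), global_robustness(stable_s2))
```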

6.3. Info-gap robustness

A simple version of the radius of stability model that is particularly relevant to our discussion is characterized by a stability requirement of the form c(s,θ) ≤ c*, where c is a real-valued function on Θ and c* is a given numeric constant. The problem featured in the example of Section 6.2 illustrates such a model.

This model, called the info-gap robustness model, was developed by Ben-Haim (2006) as a framework for the modeling, analysis, and solution of decision problems subject to severe uncertainty, evidently without the realization that it is a radius of stability model and that its local orientation renders it unsuitable for the treatment of severe uncertainty (Sniedovich, 2010). Perhaps even more significant is the fact that the theory proposing this model claims to be radically different from all current theories for decision under uncertainty (Sniedovich, 2007, 2010), and, furthermore, that this info-gap robustness model is not a Maximin model (Sniedovich, 2008a, 2010).
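In the notation of this section, the info-gap robustness model can be sketched as follows (a reconstruction in the spirit of the radius of stability model above, not Ben-Haim's own notation):

```latex
% Info-gap robustness of decision d at the estimate \tilde{u}:
% the largest "horizon of uncertainty" \alpha such that the performance
% requirement holds everywhere in the ball of radius \alpha around \tilde{u}.
\hat{\alpha}(d,\tilde{u}) :=
  \max\,\bigl\{\alpha \ge 0 : c(d,u) \le c^{*},\ \forall u \in B(\alpha,\tilde{u})\bigr\}
```

which is the radius of stability model centered at the estimate ũ, with stability set S(d) = {u ∈ U : c(d,u) ≤ c*}.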

Models of this type are being used widely and uncritically as general frameworks for the modeling, analysis, and solution of decision problems that are subject to severe uncertainty where the uncertainty space is vast and the nominal value is a poor estimate of the true value of the parameter of interest (see Moilanen et al., 2006; Burgman et al., 2008; Rout et al., 2009; Yemshanov et al., 2010, and the references therein). A case study illustrating the use of such models is discussed in the next section.

7. Case study: the campaign

As the ongoing debates on climate change and the global financial crisis demonstrate, and the recent natural disasters amplify, incorporating severe uncertainty into the analysis of large-scale systems is a formidable task. This fact is duly reflected in the following statements, but the methods offered to tackle this task are another matter altogether!

The first statement is taken from a paper posted on the website of the Flood Risk Network project (UK):

Making Responsible Decisions (When it Seems that You Can't)

Engineering Design and Strategic Planning Under Severe Uncertainty

What happens when the uncertainties facing a decision maker are so severe that the assumptions in conventional methods based on probabilistic decision analysis are untenable? Jim Hall and Yakov Ben-Haim describe how the challenges of really severe uncertainties in domains as diverse as climate change, protection against terrorism and financial markets are stimulating the development of quantified theories of robust decision making.

Hall and Ben-Haim (2007, p. 1)

The second is taken from a paper published in the journal Ecological Modelling:

In summary, we recommend info-gap uncertainty analysis as a standard practice in computational reserve planning. The need for robust reserve plans may change the way biological data are interpreted. It also may change the way reserve selection results are evaluated, interpreted and communicated. Information-gap decision theory provides a standardized methodological framework in which implementing reserve selection uncertainty analyses is relatively straightforward. We believe that alternative planning methods that consider robustness to model and data error should be preferred whenever models are claimed to be based on uncertain data, which is probably the case with nearly all data sets used in reserve planning.

Moilanen et al. (2006, p. 123)

The third is taken from a paper published in the journal Journal of Applied Ecology:

Info-gap also allows decision makers to view the trade-off between minimum acceptable performance and the robustness of decisions. Uncertainty is pervasive in decision-making in ecology and conservation biology, so addressing it explicitly helps find robust decisions that avoid catastrophic outcomes.

Rout et al. (2009, p. 786)

The fourth is taken from a paper published in the journal Risk Analysis:

In pest risk assessment it is frequently necessary to make management decisions regarding emerging threats under severe uncertainty. Although risk maps provide useful decision support for invasive alien species, they rarely address knowledge gaps associated with the underlying risk model or how they may change the risk estimates. Failure to recognize uncertainty leads to risk-ignorant decisions and miscalculation of expected impacts as well as the costs required to minimize these impacts. Here we use the information gap concept to evaluate the robustness of risk maps to uncertainties in key assumptions about an invading organism.

Yemshanov et al. (2010, p. 261)

The common denominator of these articles is that they propose an info-gap robustness model as the panacea against severe uncertainty (see additional references in Moilanen et al., 2006; Rout et al., 2009; Yemshanov et al., 2010).

This info-gap robustness model is hailed as a model that puts at our disposal a singularly suitable means for obtaining robustness against severe uncertainty, especially in cases where the uncertainty space is unbounded. Similar radius of stability models have been proposed for the treatment of various problems in applied ecology, conservation biology, economics and finance, homeland security, healthcare and medicine, and engineering (see Sniedovich, 2008b, 2010).

This – as can be gathered from the discussion thus far – has ultimately led to my labeling these models as exponents of voodoo decision making (Sniedovich, 2010).

In a similar vein, Ben-Tal et al. (2009b, p. 926) use the phrase "irresponsible" DM to designate the use of a local model of robustness – one that conducts the robustness analysis only on a subset of the uncertainty space that represents the "normal range" of values of the parameter of interest – which ignores the performance of decisions outside this "normal range." So, the question naturally arising is: on what grounds can one possibly justify the use of a local robustness model – such as the radius of stability model – to seek robustness against uncertainty in situations where the uncertainty is severe and the nominal value represents a poor estimate of the true value of the parameter of interest?

On numerous occasions over the past 7 years, I pointed out this and similar issues to users of such models. In particular, I pointed out that – contrary to their repeated claims to having devised a new method for robust decision making under severe uncertainty – the model that they have been using is in fact a simple instance of Wald's famous Maximin model. What is more, as implemented, the model has an inherent local orientation that renders it utterly unsuitable for the treatment of severe uncertainty.

7.1. False attractions

One of the reasons why local models of robustness, such as the radius of stability model, have struck a chord with many scholars/analysts seeking tools for robustness against severe uncertainty is that these scholars/analysts seem to be thoroughly oblivious to the untenable inner contradiction in using such models for the treatment of severe uncertainty. Indeed, my experience has shown that many scholars/analysts accept the face-value characterization of such models as non-probabilistic and likelihood-free as a prompt to jump to the conclusion that these models provide the right framework for robust decision making under severe uncertainty.

The following quotes may explain how such scholars/analysts read into radius of stability models, such as info-gap's robustness model, the wrong capabilities and thus the ability to seek robustness against severe uncertainty (emphasis is mine):

Information-gap (henceforth termed “info-gap”) theory was invented to assist decision-making when there are substantial knowledge gaps and when probabilistic models of uncertainty are unreliable (Ben-Haim, 2006). In general terms, info-gap theory seeks decisions that are most likely to achieve a minimally acceptable (satisfactory) outcome in the face of uncertainty, termed robust satisficing.

Burgman et al. (2008, p. 8)

However, if they are uncertain about this model and wish to minimize the chance of unacceptably large costs, they can calculate the robust-optimal number of surveys with equation (5).

Rout et al. (2009, p. 785)

What should be noted here is that although the uncertainty models under consideration are clearly held to be non-probabilistic and likelihood-free, the robustness that they yield is misrepresented by means of terms such as “chance” and “most likely.” In this frame of mind, the most robust decision is (mis)interpreted as the decision that maximizes the likelihood that the performance requirement is satisficed.

Similarly, my experience has been that in spite of the fact that radius of stability models are clearly non-probabilistic and likelihood-free, the use of terms such as “estimate” and “best guess” to describe the nominal value of the parameter often leads to a misinterpretation of the likelihood-free structure of the model:

Model of severe uncertainty:

  Formal assumption                                 (Mis)interpretation of the model
  • The uncertainty model is non-probabilistic      • The estimate ũ is the most likely value of u
    and likelihood-free
  • The uncertainty space is vast                   • Values of u become increasingly unlikely as
                                                      they diverge from ũ
  • The estimate is poor and likely to be           • The true value of u is most likely to be in
    substantially wrong                               the neighborhood of ũ

This explains, of course, why the confusion between local robustness and global robustness is so endemic in this literature (see Section 6).

The long and the short of it is that, since I was dissatisfied with the response to my critique, at the end of 2006, I launched a campaign to contain the spread of such models in Australia. I should point out, though, that although my campaign has primarily focused on the use of such models in Australia, I have also been in touch with analysts, mostly academics, in other countries, including Canada, Germany, Finland, France, Israel, Japan, the Netherlands, New Zealand, Norway, Sweden, United Kingdom, and United States. I also contacted editors of journals that published refereed papers advocating the use of local radius of stability type models for the treatment of severe uncertainty. And, of course, I gave numerous seminars/lectures/presentations, and I published a number of articles, on this topic. Information about this campaign can be found at the website of the project.

7.2. The power of the (peer-reviewed) written word

One of the issues that I faced early on in the campaign was conveying to info-gap users the full extent and seriousness of the flaws and errors in peer-reviewed articles advocating the use of local radius of stability type robustness models for the treatment of severe uncertainty.

Rather than address my concrete criticism regarding specific technical (mathematical) details, proponents of such models simply argued as follows: “how can a theory based on such models be so flawed – as you claim – when numerous articles advocating its use have been accepted by and published in peer-reviewed journals?”

Such is the power of the peer-reviewed word!

Of course, although this argument would not count as “proof” that the criticism is invalid, in practice it can have significant persuasive power, especially for those who are not conversant with the subject concerned. Regrettably, this argument can be used very effectively by professionals who should know better. It is indeed unfortunate that despite Galileo Galilei's (1564–1642) famous dictum

in matters of science, the authority of thousands is not worth the humble reasoning of one single person

the practice of “validating” specific technical assertions by a majority vote rather than a formal, rigorous, technical analysis of the issues in question persists in academia.

To deal with this diversionary argument, I compiled a collection of reviews of publications, including peer-reviewed articles, where I explain in detail the flaws (technical and conceptual) afflicting the methods proposed in these publications. At present, the collection consists of 15 reviews and can be found at the website of the project.

7.3. Progress

As might have been expected, progress on this front has been slow. However, there are signs that my criticism is beginning to register. Two examples should suffice.

7.3.1. Example

In Hall and Harvey (2009, p. 2), we find the following cryptic statement (emphasis is mine):

An assumption remains that values of u become increasingly unlikely as they diverge from ũ.

where ũ denotes the center of the balls of the radius of stability (info-gap) robustness model.

This apparently is a response to my criticism of the local orientation of the info-gap robustness model and its consequent in-principle inability to deal with severe uncertainty. The objective of this assumption, which was appended in an ad hoc manner to the assumed likelihood-free local robustness model, was both to meet the demands of severe uncertainty and to justify the use of the local robustness analysis (conducted in the neighborhood of ũ) by the info-gap model. But, alas, info-gap decision theory is a non-probabilistic and likelihood-free methodology (emphasis is mine):

In info-gap set models of uncertainty we concentrate on cluster-thinking rather than on recurrence or likelihood. Given a particular quantum of information, we ask: what is the cloud of possibilities consistent with this information? How does this cloud shrink, expand and shift as our information changes? What is the gap between what is known and what could be known. We have no recurrence information, and we can make no heuristic or lexical judgments of likelihood.

Ben-Haim (2006, p. 18)

This glaring contradiction – between Hall and Harvey's (2009) "corrective" assumption, the inherent likelihood-free nature of the robustness model, and the assumed severity of the uncertainty under consideration – was picked up in the 2009 report commissioned by the UK Department for Environment Food and Rural Affairs (DEFRA):

More recently, Info-Gap approaches that purport to be non-probabilistic in nature developed by Ben-Haim 2006 have been applied to flood risk management by Hall and Harvey 2009. Sniedovich 2007 is critical of such approaches as they adopt a single description of the future and assume alternative futures become increasingly unlikely as they diverge from this initial description. The method therefore assumes that the most likely future system state is known a priori. Given that the system state is subject to severe uncertainty, an approach that relies on this assumption as its basis appears paradoxical, and this is strongly questioned by Sniedovich 2007.

Bramley et al. (2009, p. 75)

7.3.2. Example

After several years of – sometimes heated – discussions on the incongruity between local robustness and severe uncertainty, the message seems finally to be getting across:

Although info-gap theory is relevant for many management problems, two components must be carefully selected: the nominal estimate of the uncertain parameter, and the model of uncertainty in that parameter. If the nominal estimate is radically different from the unknown true parameter value, then the horizon of uncertainty around the nominal estimate may not encompass the true value, even at low performance requirements. Thus, the method challenges us to question our belief in the nominal estimate, so that we evaluate whether differences within the horizon of uncertainty are “plausible.” Our uncertainty should not be so severe that a reasonable nominal estimate cannot be selected.

Rout et al. (2009, p. 785)

This is a small, but important step forward!

The reader is advised that the most fascinating aspects of the campaign cannot be discussed in this article. Some are discussed on the website of the project.

8. Conclusions: an OR perspective

Having come full circle, I end this discussion with a few concluding remarks about the main topics that I covered in this article.

  • Black Swans
    Taleb's solution to the Black Swan phenomenon is to build the world so that it is resistant to Black Swans – hence to forecast errors – in which case forecast errors, and with them uncertainty, would become inconsequential. This, no doubt, is an extremely laudable aspiration! But it is certainly easier said than done, and it is also extremely conservative, hence extremely costly. We shall have to wait for Taleb's new book on this topic to find out to what extent this approach is practical, if at all.
    It seems that, in the world of OR, we shall have to continue, for a long time to come, to operate mostly – in fact exclusively – on the working assumption that the universe of probable events is known. Furthermore, I suspect that the worst-case approach captured by Wald's Maximin model and its many variants will continue to dominate the scene both in local and global robustness.
  • New Nostradamuses
    I imagine that most OR specialists would be more than gratified to learn that some OR methods and techniques turned out, in practice, to be effective tools for predicting the behavior of large-scale systems that are subject to severe uncertainty. But the essential requirement for showing this to be the case would have to be, as is the practice in other scientific disciplines, verification. Such methods would have to be put to public tests so as to enable reproduction of the results.
    So, when it comes to expressing an opinion on the success of Game Theory in the hands of Bueno de Mesquita, all I can say is that there is really nothing to go by because the details are unavailable. Other than that … I am more than a bit skeptical about the quoted “90%” success rate of the predictions.
    The reader may wish to consult Green (2005) for a discussion of the performance of game theory-based forecasting models.
  • Voodoo decision making
    Based on my extensive experience over the past 7 years, I am confident that OR, as a discipline, is perfectly placed to expose Voodoo decision theories for what they are. Because such theories usually originate outside OR, it is extremely important that OR specialists keep abreast of the research and practical work carried out in other disciplines, so as to monitor how OR tools are translated into and applied in other disciplines.
  • Opportunities and challenges
    In a word then, the science of decision making in the face of severe uncertainty poses tremendous challenges but at the same time offers great opportunities to OR specialists and to OR as a discipline. But this is not unique to OR!

