A paradox for tiny probabilities and enormous values

We begin by showing that every theory of the value of uncertain prospects must have one of three unpalatable properties. Reckless theories recommend giving up a sure thing, no matter how good, for an arbitrarily tiny chance of enormous gain; timid theories permit passing up an arbitrarily large potential gain to prevent a tiny increase in risk; non-transitive theories deny the principle that, if 𝐴 is better than 𝐵 and 𝐵 is better than 𝐶, then 𝐴 must be better than 𝐶. Having set up this trilemma, we study its horns. Non-transitivity has been much discussed; we focus on drawing out the costs and benefits of recklessness and timidity when it comes to axiology, decision theory, and normative uncertainty.

On your deathbed, God brings good news. Although, as you already knew, there's no afterlife in store, he'll give you a ticket that can be handed to the reaper, good for an additional year of happy life on Earth. As you celebrate, the devil appears and asks, 'Won't you accept a small risk to get something vastly better? Trade that ticket for this one: it's good for 10 years of happy life, with probability 0.999.' You accept, and the devil hands you a new ticket. But then the devil asks again, 'Won't you accept a small risk to get something vastly better? Trade that ticket for this one: it is good for 100 years of happy life, 10 times as long, with probability 0.999², just 0.1% lower.' An hour later, you've made 50,000 trades. (The devil is a fast talker.) You find yourself with a ticket for 10^50,000 years of happy life that only works with probability 0.999^50,000, less than one chance in 10^21. Predictably, you die that very night.
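The arithmetic of the devil's sequence can be checked directly. Here is a minimal sketch in Python; the explicit deal numbering is our own, matching the story:

```python
import math

# Deal n offers 10**n years of happy life with probability 0.999**n;
# deal 0 is the original ticket (1 year for sure).
def deal(n):
    return 10**n, 0.999**n

years, prob = deal(50_000)
assert prob < 1e-21            # less than one chance in 10**21

# Each trade multiplies the payoff by 10 while cutting the probability
# by only 0.1%, so the expected number of years, computed here on a
# log10 scale to avoid float overflow, grows with every trade:
def log10_expected_years(n):
    return n + n * math.log10(0.999)

assert log10_expected_years(50_000) > log10_expected_years(49_999)
```

So each trade raises the expected payoff even as the final ticket becomes all but worthless in probability.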
Here are the deals you could have had along the way: deal 𝑛, for 𝑛 from 0 up to 50,000, is good for 10^𝑛 years of happy life with probability 0.999^𝑛. On the one hand, each deal seems better than the one before. Accepting each deal immensely increases the payoff that's on the table (increasing the number of happy years by a factor of 10) while decreasing its probability by a mere 0.1%. It seems unreasonably timid to reject such a deal. On the other hand, it seems unreasonably reckless to take all of the deals: that would mean trading the certainty of a really valuable payoff for all but certainly no payoff at all. So even though it seems each deal is better than the one before, it does not seem that the last deal is better than the first.¹ In this paper, we develop a general version of this paradox and then explore different ways of resolving it. In short, every theory of the value of uncertain prospects must be timid, reckless, or non-transitive. Timid theories permit passing up an arbitrarily large increase in the size of a payoff to prevent a tiny decrease in its probability. Reckless theories recommend sacrificing a sure thing, no matter how good, for an arbitrarily tiny chance of enormous gain.² And non-transitive theories deny the principle that, if 𝐴 is better than 𝐵 and 𝐵 is better than 𝐶, then 𝐴 must be better than 𝐶.
We'll set out this general trilemma more formally in section 1. We think that timidity and recklessness are both intuitively implausible for many types of goods. Rather than simply resting on intuition, the remainder of the paper will draw out the troubling consequences of each. In contrast, we will not consider in depth the possibility of rejecting transitivity; for a comprehensive discussion of it, we defer to Temkin (2012) and the large associated literature. The arguments in this paper could, then, be seen as supporting non-transitivity, although we are not inclined toward that conclusion.
In section 2, we explain how the choice between timidity and recklessness fits in with some well-known approaches to the evaluation of prospects. The discussion includes expected value theory, risk-weighted expected utility theory, and a cleaned-up version of the 'Nicolausian discounting' view advocated by Smith (2014) and Monton (2019). For example, we note that the best-supported versions of utilitarianism, which specifically involve expected value theory, will lead to recklessness with respect to the creation of good lives. In general, our analysis of timidity and recklessness will be relevant to identifying the costs and benefits of many axiological theories.
In section 3, we argue that timid approaches have implications that will be implausible for many kinds of goods. For example, timidity means that events that happened in remote regions of space and time, beyond our influence, can have a dramatic effect on which actions are best in prospect. Moreover, timidity tends to require implausibly extreme risk aversion, and even timid views will tend to recommend certain kinds of long-shot bets, with counterintuitive results.
1 Arguments with a similar structure are often called spectrum or continuum arguments; see Temkin (2012) for a survey. The best-known arguments involve tradeoffs between the quantity and the quality of a payoff (e.g. the duration of a life and its typical wellbeing at a time) rather than, as here, the quantity and the probability. Our argument has the useful feature that probability has an extremely well-established quantitative structure, with no natural threshold between 'high' and 'low' probability, whereas the structure of wellbeing (say) is more mysterious. Temkin discusses a probability-based spectrum argument in his chapter 8, but it works quite differently from ours.
In sections 4 and 5, we describe some problems that arise from recklessness when one allows for unbounded or infinite payoffs. Versions of these problems will be familiar to many readers in the context of expected utility theory, where they are associated with the St Petersburg gamble and Pascal's wager, respectively. What's new here is that similar problems arise very generally from recklessness, regardless of whether one accepts expected utility theory. So these problems are more general and, given that the alternative is timidity, more pressing than usually thought. We also want to emphasize the importance of these problems in moral contexts, whereas discussions of St Petersburg and Pascal almost always focus on prudential value or mere preference.
Section 4 shows, more specifically, that recklessness conflicts with some very plausible dominance principles, while in section 5 we argue that reckless theories are infinity-obsessed: they evaluate an arbitrarily tiny probability of an infinite payoff above any guaranteed finite payoff. Although, for reasons we will explain, it is not entirely clear how much difference this makes in ordinary situations, infinity obsession would lead to very strange decisions in some possible circumstances, and would greatly alter the grounds that we invoke to justify claims about whether one prospect is better than another.
In section 6, we show that the mere possibility of recklessness poses a problem for creating an acceptable theory of normative or evaluative uncertainty. How should your preferences reflect your uncertainty about which theory of value is correct? Many philosophers have argued in favor of an approach to this and similar questions that relies on expected utility theory. We argue that, under such an approach, agents who have any credence at all in recklessness must themselves be reckless. (This is related to, but in some ways goes beyond, standard worries about extreme theories 'swamping' more moderate ones.) Although we will mention some approaches to normative uncertainty that avoid this problem, there is a conundrum here for philosophers broadly sympathetic to the use of expected utility theory in that context.
Section 7 sums up.

THE GENERAL TRILEMMA
We'll now state more formally the trilemma between timidity, recklessness, and non-transitivity.
In doing so, we'll generalize the initial example in two main ways. First, instead of considering extra years of happy life, we can consider other types of goods. Second, the specific numbers used in the example don't matter. The devil ramps up the size of the payoff on the table, while slowly decreasing its probability from certainty down to almost zero. As long as there's some way for him to do this that makes each trade look attractive, the paradox remains.
To do this properly, let's fix some terminology. By a prospect we mean a situation in which different outcomes can arise with different probabilities.³ We will only consider prospects in which the possible outcomes are adequately described in terms of a payoff: the quantifiable, and in principle arbitrarily large, gain or loss of some type of good relative to a fixed baseline. We will, in fact, mainly consider prospects that can be described as a matter of 'getting payoff 𝑋 with probability 𝑝', the implication being that one gets nothing (i.e. the baseline outcome) with the remaining probability.⁴ Different theories of value will, of course, care about different types of goods. In a prudential context, a payoff might be some number of years of happy life, but we will be especially interested in a moral context, in which a typical payoff might be some number of people benefited in a certain way, or else some number of good lives brought into existence, or perhaps the flourishing of human civilization through some span of time. At any rate, the first two horns of the trilemma are formulated with respect to a given type of payoff and a given baseline.
The first horn, timidity, gives surprising importance to arbitrarily small changes in probability. It says that sometimes a slight decrease in the probability of a payoff cannot be outweighed by any increase in the payoff's size. Moreover, it says, this is true no matter how strictly we interpret 'a slight decrease'. To make this precise, let's say that a standard of closeness specifies when two positive numbers count as close together, in which case a decrease from the larger to the smaller would count as slight.⁵

Timidity: By any possible standard of closeness, there's a finite payoff 𝑋, and close-together, positive probabilities 𝑝 > 𝑞, such that for every finite payoff 𝑌, no matter how high, getting 𝑋 with the slightly higher probability 𝑝 is no worse than getting 𝑌 with the slightly lower probability 𝑞.
For example, by one standard, a 0.1% decrease always counts as slight. So a theory that is timid with respect to future years of happy life must say that, for some 𝑋, 𝑝, and 𝑞, living 𝑋 years with probability 𝑝 is no worse than living an arbitrarily large number of years with probability 𝑞, even though 𝑞 is only 0.1% smaller than 𝑝.
The second horn of the trilemma, recklessness, gives surprising importance to long-shot bets:

Recklessness: For any finite payoff 𝑋, no matter how good, and any positive probability 𝑝, no matter how tiny, there's a finite payoff 𝑌, such that getting 𝑌 with probability 𝑝 is better than getting 𝑋 for sure.
So, a theory that is reckless with respect to future years of happy life will say that the prospect of living 100 more years for sure is worse than a mere one-in-a-trillion chance of living for some finite (but perhaps truly vast) amount of time, and thus, all but certainly, dying right away.⁶ The third horn, which we will not much discuss, is

Non-Transitivity: There are prospects 𝐴, 𝐵, and 𝐶, such that 𝐴 is better than 𝐵, 𝐵 is better than 𝐶, but 𝐴 is not better than 𝐶.
3 We consider throughout a state space Ω with a non-atomic probability measure (e.g. the interval [0,1] with the uniform measure). For any event 𝐸 ⊂ Ω and finite payoff 𝑋, let 𝑋_𝐸 be the prospect of getting 𝑋 on 𝐸 and nothing otherwise. Timidity can be regimented as the claim that, by any standard of closeness, there's a finite payoff 𝑋 and two events 𝐹 ⊂ 𝐸, with close-together probabilities, such that 𝑋_𝐸 is not worse than any 𝑌_𝐹. Recklessness can be regimented as the claim that, for any 𝑋 and 𝐸, some 𝑌_𝐸 is better than 𝑋_Ω. Everything else in this paper can be interpreted along similar lines.
While our focus is officially on theories of the value of prospects, we'll sometimes talk about reckless or timid agents as well as theories. These agents form their preferences and choose their options in line with the recommendations of a reckless or timid theory of value. We do this to make recklessness and timidity more vivid; we're not committed to the view that, in general, one ought to choose the best prospect, although we're inclined to think that this is often true.
Let's make sure it's clear why we must choose between timidity, recklessness, and non-transitivity. Start with the prospect of some finite payoff 𝑋 for sure. Unless timidity is true, we can find a sequence of prospects, each better than the one before, and each with a slightly smaller probability of getting a payoff. Eventually, we can reduce the probability of the payoff from certainty down to any positive probability 𝑝, no matter how small.⁷ By transitivity, it would be better to trade the original payoff 𝑋 for the prospect of some (potentially vast) payoff 𝑌 with probability 𝑝. That's recklessness.
While we'll focus on the conditions just stated, it's worth noting that one can construct an analogous trilemma using negative values, that is, with outcomes that are worse than the baseline. In this version, the payoffs might instead be years of suffering, rather than years of happy life. One gets negative variants of timidity and recklessness by exchanging 'good' with 'bad' and 'better' with 'worse'. Informally, a person who is negatively timid passes up a deal that would give them much less suffering but with slightly higher probability; a person who is negatively reckless prefers many years of suffering for sure to an arbitrarily small probability of sufficiently many years more.
We think that, for many sources of value, negative recklessness and negative timidity are roughly as counterintuitive as the original positive versions, and they will lead to analogous problems. But, if anything, our arguments will weigh especially heavily against negative timidity.

EXAMPLES
We now explain how the choice between timidity and recklessness looks from the point of view of some popular approaches to evaluating prospects. We organize our discussion around three ideas that one might invoke to explain timidity, or to avoid recklessness: the boundedness of value, risk aversion, and discounting small probabilities. To be clear, this discussion isn't meant to exhaust the logical space. But it will include the best-credentialed normative theories of evaluation under uncertainty, and it will aid our discussion of the costs of timidity and recklessness. Much of the material in sections 2.1 and 2.2 (but less so 2.3) may be fairly obvious to experts, whom we encourage to skim; we hope that laying out the issues carefully may be helpful to some others. Throughout this section, we'll focus on prospects that are simple in that each one has only a finite number of possible outcomes, all corresponding to finite payoffs. We thereby rule out, to begin with, any difficulties that might arise from infinite payoffs or St Petersburg gambles of the sort we'll describe in sections 4 and 5.
7 One might worry that there is a threshold value 𝑝₀ such that a sequence of slight decreases in the probability can bring it arbitrarily close to 𝑝₀, but never below it (cf. Binmore and Voorhoeve, 2003). Our formulation of timidity avoids this worry: once the probability is close to 𝑝₀ (by the relevant standard), one can slightly reduce it to 𝑝₀ in one step, and then slightly reduce it below 𝑝₀ in another. In mathematical terminology, the fact that one can bring the probability from 1 down to 𝑝 with a series of finitely many slight decreases amounts to the fact that the interval [𝑝, 1] is compact: we can cover this interval with finitely many open sets such that any two points within each of these sets are close together by the relevant standard.
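The claim that finitely many slight decreases suffice can also be illustrated numerically; a sketch, taking the 0.1% multiplicative standard of closeness from the opening example:

```python
# If a 0.1% multiplicative decrease always counts as "slight", then
# finitely many slight decreases take the probability from certainty
# down past any target p > 0.
def steps_to_reach(p, factor=0.999):
    prob, steps = 1.0, 0
    while prob > p:
        prob *= factor    # one slight decrease
        steps += 1
    return steps

# Reaching a one-in-a-trillion probability takes under 28,000 steps.
assert 27_000 < steps_to_reach(1e-12) < 28_000
```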

The boundedness of value
Perhaps the most natural explanation for timidity is that the value of finite payoffs is bounded above. In our initial example, deal 𝑛 would be worse than deal 𝑛 − 1 if the payoff in deal 𝑛 − 1 was already so good that it was near the upper limit of how valuable an additional number of happy years could be. Although the number of years available in deal 𝑛 might be much greater, their value would not be, and therefore wouldn't justify taking even a tiny additional risk. It would be strange to say, on the other hand, that the possibility of a much better outcome could not justify a slight increase in risk.
As we'll now explain, the connection between timidity and the boundedness of value is especially tight if we use expected value theory (EVT), the most common normative model for evaluating simple prospects.⁸ According to EVT, one prospect is better than another just in case it has higher expected value. Here, the expected value of a prospect is computed by (i) identifying all of the possible outcomes of the prospect, (ii) multiplying the probability of each outcome by its value, and (iii) adding all those terms together. So the expected value (relative to the relevant baseline) of getting a payoff 𝑋 with probability 𝑝 just equals 𝑝 times the value of getting 𝑋 for sure.
It is easy to see that, given EVT, recklessness corresponds exactly to the value of payoffs being unbounded above, in the following precise sense: For any finite payoff 𝑋, and any number 𝑘 > 0, there exists a finite payoff 𝑌 whose value is more than 𝑘 times greater than that of 𝑋.⁹ Similarly, timidity means that value has an upper bound, in the precise sense that there's some 𝑋 and some 𝑘 such that no finite payoff is more than 𝑘 times better than 𝑋. Perfectly analogous arguments show that negative timidity requires a lower bound on the value of finite payoffs, and negative recklessness requires that there is no such lower bound. Even if EVT is not exactly right, the boundedness of value would be a natural way to avoid recklessness. It provides a clear and, at least, prima facie plausible explanation of why some of the devil's deals might not be worthwhile.
We'll discuss some of the general problems with this idea in section 3, so for now we'll stick to two preliminary points. First, while perhaps additional years of happy life have bounded value (we are skeptical, but see Williams (1973, ch. 6) for a classic statement of this view), this is less plausible for other goods. We are especially skeptical of claims that value is bounded below, as required for negative timidity: additional years of bad life, for example, do not seem to diminish in their badness. Second, many popular theories of value do not in fact put bounds on the value of various interesting goods. For example, the most natural understanding of utilitarianism as an evaluative theory says that improving 𝑁 lives by a given amount improves the world by 𝑁 times as much as improving one life; so the value of improving lives is unbounded. Total utilitarianism and its variants likewise put unbounded value on creating good lives.¹⁰ And utilitarianism is not special here: giving priority to the badly off, say, or introducing further dimensions, like equality or perfection, along which an outcome can be good or bad, will not automatically place bounds on value. So there is ample motivation to consider other possible justifications for timidity.

8 On one understanding, there are independently given cardinal facts about value, i.e. not only facts about which outcomes are better than which others, but also facts about how much better they are; EVT then makes sense relative to a suitable quantitative representation of those facts. But some authors (e.g. Broome, 2004, p. 90) think that the question 'how much better' is at best ambiguous, and regard EVT as providing an implicit definition, or disambiguation, of the cardinal facts. We note that the intuitive explanation of timidity in terms of the boundedness of value itself requires some cardinal facts: we want to be able to say that, at some point, increasing the payoff makes things better but not by much.

9 Proof: suppose given a payoff 𝑋 and a probability 𝑝 > 0. Let 𝑘 = 1∕𝑝. If value is unbounded, then there is a finite payoff 𝑌 whose value is more than 𝑘 times the value of 𝑋. The expected value of getting 𝑌 with probability 𝑝 is 𝑝 times the value of 𝑌, which is therefore greater than the value of 𝑋. So, given EVT, getting 𝑌 with probability 𝑝 is better than getting 𝑋 for sure. Thus unboundedness entails recklessness; the converse is similar.
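The proof in footnote 9 amounts to simple arithmetic; a sketch, where the particular numbers are our own illustration:

```python
# Footnote 9, with numbers: given a sure payoff of value v and any
# probability p > 0, any payoff worth more than v / p has higher
# expected value than the sure thing, so EVT prefers the long shot.
def expected_value(value, prob):
    return prob * value

v = 100.0             # value of the sure payoff X
p = 1e-12             # a tiny probability
y_value = 2 * v / p   # strictly more than v / p

assert expected_value(y_value, p) > v
```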

Risk aversion
The very names 'timidity' and 'recklessness' suggest different attitudes towards risk. If value is not bounded, one might guess that risk aversion is the proper way to account for timidity. As we'll now argue, timidity can indeed be explained by extreme levels of risk aversion on some, but not all, ways of theorizing about risk attitudes. (Readers familiar with the idea of risk aversion and happy with this conclusion may wish to skip down to section 2.3.) In the end, the risk aversion seems too extreme to be plausible; that's an issue we'll return to in section 3.

It is not straightforward to give an account of 'risk aversion' in an ordinary intuitive sense. However, according to a standard story, the characteristic feature of risk aversion is that one systematically judges that a sure payoff is better than an uncertain prospect with the same expected size: for example, it's better to save 10 lives for sure than to have a 1∕2 chance of saving 9 lives and a 1∕2 chance of saving 11. (This is, specifically, risk aversion with respect to the size of payoffs; we'll also discuss risk aversion with respect to their value below.) Risk seeking requires the opposite judgment; risk neutrality means that these prospects are equally good.
We'll illustrate the connection between timidity and extreme risk aversion using what we take to be the two most popular normative theories of risk attitudes.
First, expected utility theory (EUT) assigns a numerical 'utility' to each outcome, and prospects are compared on the basis of their expected utility. The utility of an outcome should depend only on the outcome's value, and better outcomes should have higher utility.¹¹ Given the similarity between EUT and EVT (we'll explain the contrast shortly), it should come as no surprise that timidity is equivalent to the claim that the utility of finite payoffs is bounded above, and negative timidity to the claim that it is bounded below. The argument is the same as before, with 'utility' instead of 'value'. What does this have to do with risk aversion?
Risk aversion, according to this first theory, means that payoffs have decreasing marginal utility: the utility function increases less rapidly with each additional unit of payoff. It is, in other words, concave. Now, concavity does not entail that the utility function is bounded above, so risk aversion does not entail timidity. But the utility function will be bounded above if it is very concave when it comes to good outcomes, that is, if the contribution to utility of each additional unit of payoff falls off sufficiently quickly. So timidity can be explained by relatively extreme levels of risk aversion when it comes to large payoffs. Risk seeking, on the other hand, means that the utility function is convex: it increases more rapidly with each additional unit of payoff. Negative timidity can be explained by relatively extreme levels of risk seeking when it comes to very negative payoffs.
How is this story different from the one about expected value theory in §2.1? First of all, EUT doesn't presuppose that there are any cardinal facts about value (cf. footnote 8). But even if there are such facts, the utility function need not straightforwardly reflect them. In particular, even if the value of finite payoffs has no upper bound, their utility may. This can happen if the utility of positive payoffs is very concave as a function of their value, not just as a function of their size. In that case, getting a positive payoff for sure will be better than any uncertain prospect with the same expected value. This is 'pure' risk aversion: risk aversion with respect to value itself. And similarly, even if the value of finite payoffs has no lower bound, negative timidity can arise through pure risk seeking when it comes to negative payoffs.
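The contrast between mere concavity and boundedness can be made concrete; in this sketch the two utility functions are our own illustrative choices, not ones endorsed in the text:

```python
import math

# A bounded, very concave utility: u(x) = x / (1 + x), upper bound 1.
def u_bounded(x):
    return x / (1 + x)

# An unbounded but still concave utility: u(x) = sqrt(x).
def u_unbounded(x):
    return math.sqrt(x)

def expected_utility(u, payoff, prob):
    return prob * u(payoff)    # two-outcome prospect, with u(0) = 0

# Bounded utility yields timidity: 100 years for sure beats ANY payoff
# offered at probability 0.9, since the expected utility never exceeds
# 0.9, while u_bounded(100) is about 0.99.
assert u_bounded(100) > expected_utility(u_bounded, 10**100, 0.9)

# Unbounded concave utility does not: a large enough payoff at the
# same probability wins, so risk aversion alone is not timidity.
assert expected_utility(u_unbounded, 10**6, 0.9) > u_unbounded(100)
```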
Though still very popular, the EUT account of risk aversion has often been criticized; see Buchak (2013) for an overview. We should therefore also consider the best-defended alternative, risk-weighted expected utility theory (REUT), developed by Buchak as a 'subjective' version of the anticipated utility theory of Quiggin (1982). According to this view, each payoff again has a numerical utility, but, properly speaking, risk attitudes are captured by a separate risk function 𝑟, which is used to transform the probabilities. As Buchak puts it, '[T]he concavity of the utility function, which describes how an agent evaluates outcomes, pulls apart from her attitude towards risk properly called. What we appropriately describe as an agent's attitude towards risk is captured by the shape of her risk function' (66). In the case of risk aversion, the risk function is designed to put high weight on the relatively bad outcomes of each prospect; for risk seeking, it puts high weight on the relatively good outcomes.¹² Even if this is an attractive theory of risk attitudes, risk-weighted expected utility theory will not avoid recklessness unless utility is bounded above: it is no different from expected utility theory in that respect. The reason is that the risk-weighted expected utility of getting 𝑋 with probability 𝑝 comes out to 𝑟(𝑝)𝑢(𝑋) (see footnote 12). This is different from the ordinary expected utility 𝑝𝑢(𝑋), but it can nonetheless get arbitrarily large unless the utility function 𝑢 is bounded above.¹³ Moreover, this time, the boundedness of the utility function cannot be explained by appeal to risk aversion: risk aversion, properly speaking, is handled by the risk function 𝑟. A bound on the utility function is more naturally interpreted as a bound on the value of outcomes, and not even extreme levels of risk aversion suffice for timidity.
It may count against timidity that it cannot be understood as risk aversion, according to this prominent theory. There is, however, a relatively natural way to make REUT more extreme, so that it does lead to timidity, even with an unbounded utility function. We'll discuss that tweak under the next heading.

12 Suppose the risk function is 𝑟 and the utility function is 𝑢. If, according to some prospect 𝑓, 𝑝_{>𝑥} is the probability of getting an outcome better than 𝑥, and 𝑝_{≥𝑥} is the probability of getting an outcome at least as good as 𝑥, then the risk-weighted expected utility of 𝑓 is defined as the sum over payoffs 𝑥 of 𝑢(𝑥)[𝑟(𝑝_{≥𝑥}) − 𝑟(𝑝_{>𝑥})]. In the special case where 𝑟(𝑝) = 𝑝, the term in square brackets is just the probability of getting an outcome as good as 𝑥, and the whole sum is nothing but the expected utility of 𝑓. But, in general, 𝑟 is only required to be an increasing, real-valued function with 𝑟(0) = 0 and 𝑟(1) = 1.

13 What if 𝑟(𝑝) = 0? Since 𝑟 is increasing, this only happens if 𝑝 = 0. In some discussions, though not in her formal theory, Buchak weakens 'increasing' to 'non-decreasing', which would allow 𝑟(𝑝) = 0 for small values of 𝑝 (cf. Buchak, 2013, ch. 2, note 36). We revisit this possibility in footnote 16.
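The definition in footnote 12 can be implemented directly for finite prospects; a sketch, where the encoding of prospects as payoff-probability pairs (with distinct payoffs) is our own:

```python
def reu(outcomes, u, r):
    """Risk-weighted expected utility, following footnote 12.

    outcomes: (payoff, probability) pairs with distinct payoffs.
    REU = sum over payoffs x of
          u(x) * [r(P(at least as good as x)) - r(P(better than x))].
    """
    total, p_better = 0.0, 0.0
    # Walk from the best payoff down, tracking tail probabilities.
    for payoff, prob in sorted(outcomes, reverse=True):
        p_at_least = p_better + prob
        total += u(payoff) * (r(p_at_least) - r(p_better))
        p_better = p_at_least
    return total

u = lambda x: x                      # identity utility, with u(0) = 0
prospect = [(0, 0.9), (1000, 0.1)]   # X = 1000 with probability 0.1

# With r(q) = q, REU reduces to ordinary expected utility:
assert abs(reu(prospect, u, lambda q: q) - 100.0) < 1e-9
# With any risk function, REU of this prospect is r(p) * u(X):
assert abs(reu(prospect, u, lambda q: q**2) - 0.1**2 * 1000) < 1e-9
```

This also makes vivid the point in the main text: however the risk function reweights probabilities, the term 𝑟(𝑝)𝑢(𝑋) grows without bound unless 𝑢 is bounded.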

Discounting small probabilities
A third strategy for avoiding recklessness is Nicolausian discounting. On common formulations of this view, one simply ignores outcomes whose probabilities are smaller than some threshold.¹⁴ So, when it comes to recklessness, one will simply end up ignoring the tiny probability of an enormous payoff. Indeed, a key motivation for Nicolausian discounting is precisely to avoid recklessness and similar phenomena. Unfortunately, the common formulation of Nicolausian discounting is fatally flawed. But since something in this neighbourhood has often been considered prima facie plausible, we'll take the time to formulate a principle that avoids the most obvious pitfalls.
First, the problems. What happens if every outcome has probability below the threshold? We can't ignore all of them; it's not even clear what that would mean. This problem becomes acute when we consider prospects that involve a continuum of outcomes. For instance, consider the prospect of getting 𝑥 years of happy life, where the real number 𝑥 will be chosen at random somewhere between 0 and 1. Then each specific payoff has probability zero. Ordinary and risk-weighted expected utility theory have ways of dealing with this phenomenon (roughly, one replaces sums with integrals), but applying Nicolausian discounting is completely hopeless. Relatedly, Nicolausian discounting gives great importance to arbitrarily small differences between outcomes. Suppose the devil offers you ten years of happy life, with (say) probability 1∕2. Does 'ten years of happy life' count as a single payoff that you'll get with probability above the threshold, or does it cover many slightly different ways the happy years could play out, each one with probability below the threshold? On the first interpretation, Nicolausian discounting recommends accepting the devil's offer; on the second, it recommends indifference. This mismatch is implausible. Moreover, the second recommendation is patently incorrect.¹⁵

Here's a solution. Instead of giving zero weight to small-probability outcomes, give infinitesimal (i.e. merely tie-breaking) weight to outcomes that are unusually extreme in value.¹⁶ We will call this type of view tail discounting. Here is a way to cash it out. There is some probability threshold 𝑡. A payoff 𝑋 is in the left tail of a given prospect if the probability of getting 𝑋-or-worse is less than 𝑡. Similarly, 𝑋 is in the right tail if the probability of getting 𝑋-or-better is less than 𝑡. Say that a payoff is extreme with respect to the given prospect if it is in the left tail or the right tail. Otherwise, say that 𝑋 is normal. The simplest version of tail discounting is that one prospect is better than another if its expected value, conditional on getting a normal payoff, is higher; in case of ties, use as a tie breaker the expected value conditional on getting an extreme payoff.¹⁷ Roughly, then, the view is that one should ignore the tails of a distribution, except for breaking ties.

14 See Smith (2014) and Monton (2019) for recent proponents of this very old view. Monton coins the name 'Nicolausian discounting' after Nicolaus Bernoulli, who, in a 1714 letter, suggested that 'cases that have a very small probability must be neglected and assumed to be zero' (in Monton's translation). The view is similar to the theory of 'de minimis risks' (see Peterson (2002)), and to 'Cournot's principle' (see fn. 32 in Monton).
Tail discounting is somewhat more defensible than Nicolausian discounting as commonly formulated. It does not suffer from the problems described above. And it can be seen as an extreme form of risk-weighted expected value theory, meaning that defences of the latter can, to some extent, be adapted to defend tail discounting as well.¹⁸ Unfortunately, tail discounting leads to an especially extreme and implausible form of timidity (§3.3), along with some more general problems, to which we'll now turn.
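The simplest version of tail discounting can be sketched as follows; the encoding and the particular threshold are our own, and the tie-breaking clause is omitted for brevity:

```python
def normal_expectation(outcomes, t):
    """Expected payoff conditional on getting a 'normal' payoff.

    outcomes: (payoff, probability) pairs summing to 1. A payoff X is
    in the left tail if P(X-or-worse) < t, in the right tail if
    P(X-or-better) < t, and normal otherwise.
    """
    normal, cum_below = [], 0.0
    for payoff, prob in sorted(outcomes):
        p_or_worse = cum_below + prob
        p_or_better = 1.0 - cum_below
        if p_or_worse >= t and p_or_better >= t:
            normal.append((payoff, prob))
        cum_below = p_or_worse
    mass = sum(p for _, p in normal)
    return sum(x * p for x, p in normal) / mass

t = 0.01
# A one-in-a-million shot at a vast payoff lies entirely in the right
# tail, so to first approximation tail discounting ignores it:
assert normal_expectation([(0, 1 - 1e-6), (10**100, 1e-6)], t) == 0.0
# A sure thing is never extreme:
assert normal_expectation([(100, 1.0)], t) == 100.0
```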

THE PRICE OF TIMIDITY
We think that timidity is troubling on its face. For example, each deal offered by the devil really does seem better than the one before. But the problems don't stop there. In this section we explain four additional problems faced by timid theories. The first two of these are completely general. The second two arise for all the particular ways of implementing timidity that we considered in section 2. Even if these latter problems may not be completely general, they illustrate the difficulty of constructing a plausible timid theory.
We'll give a brief summary at the end of the section.

Even small decreases in unlikely payoffs cannot be outweighed
In any instance of timidity, a slight decrease in the probability of a payoff cannot be outweighed by any finite increase in the size of the payoff.One might think, 'This is not so implausible, because, after all, the payoff may have been enormous to begin with; a slight decrease in the probability of an enormous payoff is still a weighty matter.'However, it turns out that timid theories are committed to the view that even a small decrease in an unlikely payoff sometimes cannot be outweighed by any finite increase in the size of a payoff that is many times more likely (though perhaps still unlikely in absolute terms).This commitment seems harder to accept.So, by considering small decreases in unlikely payoffs, we can argue against timidity, in the following way.Consider, for the sake of concreteness, the prospect  of saving 1,000 lives with probability 0.1.A theory that is timid with respect to lives saved might say that  is no worse than the prospect of saving even a vast number of lives with probability 0.099, only one percent smaller.We can imagine that the outcome of  depends on the outcome of a raffle: if the golden ticket is drawn, then a mechanism will be activated that will save 1,000 lives.Now consider a prospect  1 that differs from  in two ways: first, the mechanism is designed to save many more lives-let's say 10,000; but second, there is a one-percent failure rate, and in the fail-state,  1 saves only 999 lives.So  1 almost certainly leads to ten times as many lives saved as in , and at worst, in the highly unlikely fail-state, it will save only one life fewer.Intuitively,  1 is much better than .But now consider a prospect  2 .In  2 , the mechanism usually saves 100,000, but it only saves 998 in the failstate.So, compared to  1 , there are almost certainly 10 times as many lives saved, and at worst one fewer. 
B_2 is clearly better than B_1, so also better than A. But if we continue this sequence along, we get to B_1000, in which no lives are saved in the fail-state. Thus B_1000 leads to a vast number of lives saved (namely, 10^1003) with probability 0.099. By transitivity, B_1000 is better than A. And so, the argument concludes, our theory of value must not be timid in this particular way. Nor is there anything special about this case; mutatis mutandis, we have an argument against timidity in general.
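The arithmetic behind the sequence can be checked with a short sketch. The representation below (a raffle probability of 0.1 and an independent one-percent failure rate, giving a success probability of 0.099) follows the description above; the function names are ours:

```python
def prospect(k):
    """Outcome distribution [(probability, lives saved), ...] for B_k,
    with k = 0 standing for the original prospect A."""
    if k == 0:
        return [(0.1, 1000), (0.9, 0)]
    return [
        (0.1 * 0.99, 1000 * 10 ** k),  # golden ticket drawn, mechanism works
        (0.1 * 0.01, 1000 - k),        # golden ticket drawn, fail-state
        (0.9, 0),                      # golden ticket not drawn
    ]

def expected_lives(k):
    return sum(p * v for p, v in prospect(k))

print(expected_lives(0))   # ≈ 100: prospect A
print(expected_lives(1))   # ≈ 991: each step multiplies the likely payoff by 10
print(prospect(1000)[1])   # fail-state of B_1000: probability ≈ 0.001, 0 lives saved
```

Each step shaves one life off the already-unlikely fail-state while multiplying the likely payoff tenfold, which is why each B_(k+1) looks so much better than B_k in expectation.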

Strange dependence on distant space and time
Timidity requires our evaluation of prospects to depend in strange ways on features of the world that we cannot influence, including what happens in distant places and times, causally isolated from the here-and-now. At least, this is true insofar as the types of goods with respect to which we are timid can be realized far away without diminishing their value.[19] The rough idea is easy to see if we suppose that timidity arises from the boundedness of value. How close we are to the upper bound can depend on facts about how things are in remote regions of spacetime. The value of the things we do here and now can therefore depend on those facts in dramatic ways: if we are close to the upper bound, then we can achieve only trivial improvements. But, for example, if we had some way of greatly improving the hundreds of millions of lives currently in extreme global poverty, we could not plausibly claim that this would have only trivial value just because a lot of great things had happened in remote galaxies a long time ago.
In general, timidity leads to the same kind of problem in more complicated guise. For illustration, consider the view that the existence of additional lives (of some particular high quality) makes things better. So a payoff here is some number of lives beyond the baseline. Suppose we can create additional lives in our galaxy, but we have no contact with other galaxies, and no influence on how many lives they might contain. It seems obvious that, regardless of the situation in those other galaxies, instead of creating some lives with a tiny probability, it would be better in prospect to create a trillion times more lives with a trillion times greater probability. According to timidity, though, this is not true.
To see this, consider the following prospects X, Y, and Z. Relative to the baseline, X leads to the existence of n lives with probability p, Y leads to the same number of lives with greater probability p + q, and Z leads to m additional lives (so n + m in all) with the original probability p.

[Footnote 19: The argument to follow is closely related to well-known arguments from 'separability' for totalist views in population ethics (see Broome, 2004; Thomas, 2021b). However, the issue for us is not separability in general (perhaps modest violations of separability would be acceptable) but the particular dramatic violations to which timidity leads.]

According to timidity, there will be some such case in which Z is not better than Y, even though q is less than one trillionth the size of p and m is a trillion times the size of n: the small decrease in probability from p + q to p cannot be outweighed by the enormous increase in the payoff from n to n + m. But now suppose that the n lives that would exist with probability p would exist in another galaxy. The choice between X, Y, and Z has no influence on whether they exist, or on the probability that they exist, or on anything else to do with them. So the practically relevant status quo, what we get if we create no lives, is prospect X. Relative to that status quo, prospect Y is the prospect of creating only n lives with tiny probability q, while Z is the prospect of creating m lives with a much larger probability p. In other words, compared to Y, Z is the prospect of creating a trillion times more lives with a trillion times greater probability. And yet timidity says that Z is not better than Y.
It seems strange to us that the importance of a given probability of creating a given number of lives depends in any significant way on what might happen, entirely beyond our influence, in far-away times and places. But the specific implications of timidity in cases like the one we just described are especially extreme.
Perhaps there are types of payoffs that cannot, by their nature, be realized in remote times and places, or, alternatively, whose value should be discounted when they are realized far away.[20] This would not necessarily diminish the force of the argument: our focus on distant payoffs makes the argument more vivid, but the underlying point is that it can seem strange for our evaluation of prospects to depend so radically on unaffected payoffs, even if these are realized nearby. At any rate, it seems clear to us that there are some sources of value, such as the value inherent in the best kinds of lives, that can in fact be realized far away, that should not be discounted merely because of distance, and with respect to which the argument makes timidity deeply troubling.

Extreme risk aversion in very positive outcomes
The next problem is that timidity leads to implausibly extreme forms of risk aversion, at least according to all the timid theories we surveyed in section 2. Consider first a view according to which there is an upper limit on how good finite payoffs of the relevant sort could be. For example, suppose that 10^10 extra years of happy life would take you extremely close to the upper limit to how good extra years of life could be. Now consider two prospects, A and B, in which the outcome depends on the toss of a fair coin:

A: if the coin lands heads, an additional 10^10 years of happy life; if tails, the status quo, a middling-value life.

B: if the coin lands heads, an additional 10^80 years of happy life; if tails, the status quo plus an extra second of misery.

[Footnote 20: A common version of this idea is that, at least for practical purposes, one should only consider value realised in one's causal future. The problems with this suggestion have been studied elsewhere; see Broad (1914, 314-5), Hurka (1982), Rabinowicz (1989), Carlson (1995). A related idea is that one should evaluate the difference one makes to the value of the world; see Greaves et al. (2022) for objections to this idea, developed in the context of risk aversion, but generalizing to all timid views.]

Since 10^10 years of happy life would take you extremely close to the upper limit, the additional 70 orders of magnitude of years of happy life would represent a trivial improvement, far more trivial, we can stipulate, than the difference an extra second of misery makes to the baseline of a middling-value life. Therefore, on this approach, we should conclude that A is better than B. This implausible conclusion can be interpreted as an extreme degree of risk aversion with respect to years of happy life. B has a much greater expected number of years of happy life than A, but it is still judged worse; this is the characteristic pattern of risk aversion.
This problem seems bound to recur even if value is not bounded. To avoid recklessness, the level of risk aversion must be very high; but then it will seem unacceptably high in other cases. In the case of EUT, we can run the same example as before, supposing that 10^10 extra years of happy life would bring you very close to the upper bound for utility. We will again conclude that A is better than B. In the case of REUT, there is a complication: one could, in principle, avoid this conclusion by counteracting the bounded utility function with a risk function that is extremely risk-seeking, so as to achieve something closer to risk neutrality in this particular case. But then the theory will be implausibly risk-seeking in more ordinary cases.
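To see how the arithmetic plays out, here is a minimal sketch under one assumed bounded utility function. The functional form u(y) = y/(y + C), the constant C, and the misery penalty are illustrative choices of ours, not the paper's:

```python
C = 10 ** 6     # with u(y) = y / (y + C), 10**10 years is already within
                # about one part in ten thousand of the upper bound of 1

def u(years):
    return years / (years + C)

MISERY = 1e-3   # stipulated utility cost of one extra second of misery

# A: heads -> an extra 10**10 happy years; tails -> the status quo.
# B: heads -> an extra 10**80 happy years; tails -> the status quo plus
#    a second of misery.
eu_A = 0.5 * u(10 ** 10) + 0.5 * 0
eu_B = 0.5 * u(10 ** 80) + 0.5 * (0 - MISERY)

print(eu_A > eu_B)                          # True: A is ranked above B
print((0.5 * 10 ** 80) / (0.5 * 10 ** 10))  # yet B's expected years are 10**70 times greater
```

The extra 70 orders of magnitude are worth almost nothing once 10^10 years sits near the bound, so the tiny misery penalty decides the comparison: exactly the extreme risk aversion described above.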
Finally, what about tail discounting? Tail discounting has an advantage over the views we have just considered. Our example of extreme risk aversion seems particularly strange because the probabilities involved, those of a fair coin landing heads or tails, are middling in value, quite different from the exotically small probabilities that seem most relevant to the dilemma between timidity and recklessness. In such middling-probability cases, tail discounting will tend to agree with expected value theory, leading to generally plausible results. But it will still lead to strange results when we consider probabilities close to the threshold. Specifically, tail discounting implies the following condition, which is both stronger and stranger than ordinary timidity, and which isn't implied by the other timid theories we considered.
Threshold timidity: There is some positive probability threshold such that, for any finite, positive payoffs x and y, getting y with probability below the threshold is never better than getting x with probability above the threshold, no matter how much better y is than x and no matter how close together the two probabilities may be.
So, for example, the prospect of creating 10^80 lives with some particular positive probability, one trillionth of a percent below the threshold, is no better than the prospect of creating only one life with a probability two trillionths of a percent greater. Roughly speaking, threshold timidity tells us that, close to the threshold, decreasing risk is infinitely more important than increasing expected value.
As one might guess, negative timidity is associated with extreme risk seeking for prospects with very bad outcomes. This makes negative timidity especially implausible. For example, the negative variant of threshold timidity says that, close to the threshold, increasing risk is infinitely more important than increasing expected value. This does not strike us as a tenable view.

Long-shots
Recklessness suggests that we should value some extreme long-shot bets in a way that often seems counterintuitive. However, all the timid views we considered in section 2 also recommend certain long-shot bets, so the purported advantages of timidity are undermined. This is most obvious for tail discounting. Suppose for concreteness that the probability threshold is 0.00001. Then we avoid recklessness, but, if the value of finite payoffs is unbounded, it will still be true that any finite payoff for sure, no matter how good, is worse than a 0.000011 probability of some other finite payoff. But a long-shot bet with a 0.000011 probability of a payoff is not really less objectionable than one with a 0.00001 probability. There seems to be no way to set the threshold so that tail discounting rules out all and only the objectionable long-shots.[21] Here's an example of the type of long-shots that are encouraged when value or utility is bounded above.
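One rough way to formalize the point about the threshold. The paper's own definition of tail discounting is given in its section 2; the version below, which simply deletes probability mass from the top of the distribution before taking an expectation, is only our reconstruction:

```python
T = 0.00001   # the illustrative threshold from the text

def tail_discounted_value(outcomes, t=T):
    """outcomes: list of (probability, payoff) pairs summing to 1.
    Deletes probability mass t from the best outcomes, then takes the
    expected value of what remains (our assumed formalization)."""
    remaining = t
    total = 0.0
    for p, v in sorted(outcomes, key=lambda pv: pv[1], reverse=True):
        cut = min(p, remaining)   # discount as much of the top tail as allowed
        remaining -= cut
        total += (p - cut) * v
    return total

sure_thing = [(1.0, 10 ** 9)]                      # a billion units for sure
long_shot = [(0.000011, 10 ** 15), (0.999989, 0)]  # just above the threshold

# The long shot's payoff survives with probability 0.000011 - 0.00001,
# so a large enough payoff beats any sure thing:
print(tail_discounted_value(long_shot) > tail_discounted_value(sure_thing))
```

On this reconstruction, any payoff offered with probability 0.000011 escapes full discounting, so its tail-discounted value grows without bound in the size of the payoff, which is the long-shot problem described above.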
Lingering doubt: In Utopia, life is extremely good for everyone, society is extremely just, and so on. Their historians offer a reasonably well-documented history in which life was similarly good. However, the historians cannot definitively rule out various unlikely conspiracy theories. Perhaps, for instance, some past generation 'cooked the books' in order to shield future generations from knowing the horrors of a past more like the world we live in today.
Against this background, let us evaluate two options: one would modestly benefit everyone alive if (as is all but certain) the past was a good one; the other would similarly benefit only a few people, and only if the conspiracy theories happened to be true.
In this case, the second option might yield a better prospect. Why? If the conspiracy theories are not true, then the status quo is already about as good as it could be, and the benefits to everyone alive would make almost no difference to the value (or utility) of the world. But if the conspiracy theories are true, and the world overall is middling in value or utility, then benefiting relatively few people would make a relatively large difference.
Of course, what's strange about this case is not only that it involves seemingly undue attention to a highly unlikely scenario. Contrary to the timid analysis of the case, we feel that modestly benefiting everyone alive would in fact be a weighty consideration, even in the presence of some risk.

Itemized billing for the timid
To summarize, timid theories have the following features that seem problematic for many types of goods. The first three are general.
[Footnote 21: Perhaps this problem can be mitigated by allowing the probability threshold to be vague, to match the way in which it is vague which long-shots are objectionable. See Peterson (2002, p. 53) for this idea in the context of de minimis risk, and Thomas (2021a) for vagueness as a general response to spectrum arguments; we won't press the issue here.]

1. They pass up at least one of the devil's seemingly great low-risk trades (or similar trades for other types of goods).
2. They must claim that, in some cases, even a small decrease in one unlikely payoff cannot be outweighed by any increase in some other payoff that is many times more likely (§3.1).
3. Their ranking of options is sensitive to unaffected features of the world, including, if these are not somehow discounted, payoffs realized in the far removes of space and time (§3.2).
The other features are shared by all the timid theories we have identified, and it is hard to see how we could avoid them; but, for all that, they may not be completely general, and avoiding them would be an important formal achievement.

4. They rank some prospects in an extremely risk-averse way; in the case of negative timidity, they rank other prospects in an extremely risk-seeking way (§3.3).
5. Like reckless theories, they still recommend betting on certain kinds of counterintuitive long-shots (§3.4).
Analogues of all of these problems arise for negative timidity as well; as we mentioned, the extreme risk seeking associated with negative timidity seems especially implausible.
Altogether, these problems show that justifying timidity in a plausible way is a serious challenge. But recklessness will also have its costs.

RECKLESSNESS AND DOMINANCE
There is some hope that the basic implausibility of recklessness might be open to debunking. It is hard to comprehend arbitrarily large payoffs, so perhaps intuition is not to be trusted in these cases.[22] And our intuitions might also be confused by all kinds of confounding factors that we would expect to see in practical situations where the stakes are very high and one's evaluation of prospects is sensitive to minuscule absolute changes in probabilities: it's natural to start worrying about the reliability of one's evidence and one's cognitive faculties, one's powers of introspection, whether one is being tricked or is simply hallucinating, and so on. Perhaps in the unusually simple and clearly specified cases that recklessness strictly speaking involves, 'reckless' behavior really would be reasonable. Still, the case against recklessness can be pressed, and that's what we'll do in this section and the next. Here, we'll argue that recklessness leads to violations of a very compelling dominance principle. Next, we'll argue that (under some further assumptions) recklessness leads to the single-minded pursuit of infinite payoffs.
In the context of expected utility theory, we know that recklessness corresponds to the use of an unbounded utility function. And there is a large literature that focuses on the problems that arise from an unbounded utility function, centered around the St Petersburg gamble. It turns out that at least some of these problems arise directly from recklessness, given background assumptions that are much weaker than the assumption of expected utility theory. Given that the alternative is timidity, the problems are deeper and more general than usually supposed. That's what we'll now explain.
Recall that the St Petersburg gamble, in its modern guise, offers a 1/2 chance of 2 units of utility, a 1/4 chance of 4 units, a 1/8 chance of 8 units, and in general a 1/2^n chance of 2^n units. The expected utility of this prospect is infinite.[23] Some of the problems associated with the St Petersburg gamble involve interpreting this infinity in a sensible way. As but one example, the 'Petrograd' gamble (Colyvan, 2008) is like St Petersburg except that it invariably pays off one extra unit of utility. It is presumably better than St Petersburg, but it apparently has 'the same' infinite expected utility. The more fundamental problem, however, is that the St Petersburg gamble leads to violations of some extremely attractive normative principles which are often themselves assumed in axiomatic approaches to expected utility theory.
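The divergence, and the Petrograd comparison, are easy to check numerically; the helper names here are ours:

```python
def st_petersburg_partial_eu(terms):
    # A 1/2**n chance of 2**n units: each branch contributes exactly 1.
    return sum((2 ** -n) * (2 ** n) for n in range(1, terms + 1))

def petrograd_partial_eu(terms):
    # Colyvan's Petrograd gamble: one extra unit in every outcome.
    return sum((2 ** -n) * (2 ** n + 1) for n in range(1, terms + 1))

print(st_petersburg_partial_eu(10))    # 10.0
print(st_petersburg_partial_eu(100))   # 100.0: the partial sums diverge
print(petrograd_partial_eu(10))        # ≈ 10.999: better branch by branch,
                                       # yet 'the same' infinite total
```

The extra unit adds only a convergent series (at most 1 in total), so both gambles diverge together: which is exactly why the naive 'compare expected utilities' method cannot distinguish them.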
Specifically, consider the following principle:

Prospect-outcome dominance: If prospect A is strictly better than each possible outcome of prospect B, then A is strictly better than B.
The St Petersburg gamble is better than any finite payoff.[24] Therefore, it violates prospect-outcome dominance: it is strictly better than each of its own outcomes, but it is not strictly better than itself. The strangeness of violating prospect-outcome dominance may be more vivid if we consider two independent St Petersburg gambles, A and B. No matter how A turns out, one would trade the result for B. Yet we cannot conclude that B is better than A.[25]

The problem described so far depends on expected utility theory. Unfortunately, a similar problem arises for all reckless views. If we accept recklessness, we must deny either prospect-outcome dominance or a different dominance principle which we think is even more compelling:

Outcome-outcome dominance: If, no matter what, the outcome of A would be at least as good as the outcome of B, then A is at least as good as B.[26]

[Footnote 26: 'No matter what' here means that, in every state, A would result in an outcome that is at least as good as that of B. Some decision-theoretic frameworks do not formally represent prospects using states, but it should still be possible informally to specify prospects by their effects in different states, and to assert outcome-outcome dominance for prospects so specified. Alternatively, one can replace outcome-outcome dominance in the following discussion by the notionally stronger but also very popular condition known as 'stochastic dominance' that does not refer to states.]

Indeed, we can use recklessness to construct a 'generalized St Petersburg gamble' in the following way. Take any payoff x_0. By recklessness, there is a payoff x_1 such that a 1/2 chance of x_1 is better than x_0 for sure. Similarly, there is a payoff x_2 such that a 1/4 chance of x_2 is better than x_1
for sure. And so on, obtaining a sequence of payoffs x_1, x_2, x_3, ..., such that a 1/2^n chance of x_n is better than x_(n-1) for sure. Now define the generalized St Petersburg gamble G to be one that offers a 1/2^n chance of x_n, for every n = 1, 2, 3, ....
For each n, outcome-outcome dominance entails that G is at least as good as simply getting the same 1/2^n chance of x_n and nothing otherwise. And by construction this is better than getting x_(n-1) for sure. So, by transitivity, G is better than every one of its possible outcomes, violating prospect-outcome dominance.[27]

We think that prospect-outcome dominance and (especially) outcome-outcome dominance are both compelling principles. It's hard to fathom why they should be violated in the clean kinds of cases we have in mind, where the value of outcomes is apparently the only thing at stake and there are no violations of rights or objectionable unfairness or other complicating factors. Moreover, the timid theories that we discussed in section 2 are immune to these problems. This is a serious strike against recklessness.

RECKLESSNESS AND INFINITY OBSESSION
Now we introduce a problem that arises for reckless theories when there is a possibility of an infinite payoff. Although theorizing about infinite cases in ethics is notoriously difficult (see Bostrom (2011) for an entry to the literature), the timid approaches we surveyed in section 2 don't lead to the particular problem we identify here. That speaks in favour of timidity, and against recklessness.
While in philosophy and in life we usually don't think about the possibility of achieving infinite payoffs, some philosophers, such as Pascal, claim that such infinite considerations should be decisive, at least in theory.[28] More precisely, these people claim:

Infinity obsession: Any non-zero probability, no matter how small, of an infinite payoff, is better than any finite payoff for sure.[29]

[Footnote 29: Throughout this discussion, by 'an infinite payoff' we will mean specifically a positively infinite payoff, like an infinite number of years of happy life, or an infinite number of lives saved, or something equally good.]

For example, in Pascal's case, infinity obsession arises from the claim that an arbitrarily small chance of eternity in heaven is better than any finite reward. But of course the negatively reckless will face similar problems with respect to negatively infinite payoffs.
In this section, we argue that, given minimal assumptions, reckless agents must be infinity-obsessed (§5.1), and that, given slightly stronger assumptions, recklessness leads to other, potentially even stranger forms of obsession (§5.2).
First, though, let us make sure the differences between recklessness and infinity obsession are quite clear. Of course, only the latter involves infinite payoffs, but there is another difference too.
Though the reckless are willing to take some extreme risks for finite rewards, their willingness to take a risk for any given reward depends on the probability involved. When it comes to infinite rewards, the infinity-obsessed have no such concern. Any non-zero probability of an infinite reward seems better to them than any finite reward for sure. So only definitive evidence that the infinite reward is impossible will persuade them to take the finite one instead. Such obsession can have revisionary and disturbing practical implications, a point we'll draw out in §5.3.

How recklessness leads to infinity obsession
Despite their differences, recklessness leads to infinity obsession, given only very weak assumptions. For example, suppose we have an agent who is reckless about the number of happy years of life he has. Compared to getting any finite number of years of happy life for sure, he'd prefer even a tiny chance of living for a sufficiently long but finite time. A fortiori, he'd prefer the same tiny chance of living for infinitely long. So he'd prefer any tiny chance of living forever to any finite number of years of happy life for sure. In other words, he'll be infinity-obsessed.
More formally, the premises we need to derive infinity obsession are recklessness, transitivity, outcome-outcome dominance, and the claim that an infinite payoff is better than a finite one.[30] The argument goes like this. Consider a prospect A that gives the agent an infinite payoff with probability p > 0, let B be a prospect that gives the agent some finite payoff x with that same probability p, and let C be a prospect that gives the agent some other finite payoff y for sure. We have to show that A is better than C. According to recklessness, there's some value of x that would make B better than C. But an infinite payoff is better than a finite payoff, so, by outcome-outcome dominance, A is at least as good as B. Therefore, by transitivity, A is better than C (cf. fn. 27).
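Under an expected-utility rendering of the conclusion, the upshot can be checked mechanically by valuing payoffs on the extended reals; the particular numbers below are arbitrary:

```python
import math

def expected_value(outcomes):
    """outcomes: list of (probability, value); values on the extended reals."""
    return sum(p * v for p, v in outcomes)

p = 1e-30                          # an absurdly small positive probability
A = [(p, math.inf), (1 - p, 0)]    # tiny chance of an infinite payoff
C = [(1.0, 10 ** 100)]             # an enormous finite payoff, for sure

print(expected_value(A))                       # inf
print(expected_value(A) > expected_value(C))   # True: the tiny chance wins
```

Any positive probability times an infinite value is itself infinite, so the comparison is decided before the size of the finite payoff is even consulted: that is infinity obsession in mechanical form.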

Other forms of obsession
We've formulated infinity obsession in a way that ties it very closely to recklessness, as the preceding argument shows. But slightly stronger assumptions lead from recklessness to other forms of obsession, which may be even more troubling. To illustrate, suppose we accept:

Eventwise dominance: If, on the supposition that some proposition E is true, prospect A is at least as good as B, and, on the supposition that E is false, A is better than B, then A is better than B.
Although eventwise dominance, a version of the sure-thing principle, is closely associated with expected utility theory, the particular applications we will make of it are relatively uncontroversial; for example, they are compatible with the risk-weighted expected utility theory we described in section 2.[31]

[Footnote 30: It's true that, in section 4, we argued that recklessness requires one to deny either outcome-outcome dominance or prospect-outcome dominance. We think it would be quite remarkable to reject the particular application of outcome-outcome dominance that follows. Be that as it may, the logical point is that if one rejects only the slightly less compelling condition of prospect-outcome dominance, then one still falls prey to infinity obsession.]

Here are two further types of obsession that will ensnare the reckless if they accept eventwise dominance.
First, infinity obsession compares a prospect with some chance of resulting in an infinite payoff to a prospect with no such chance: it says the former is always better. The following condition suggests, more generally, that a higher chance of an infinite payoff is always better.

Generalized infinity obsession:
Getting an infinite payoff with some probability p and nothing otherwise is better than getting the same infinite payoff with any smaller probability q and a finite payoff x otherwise, no matter how good x may be.
A bit roughly, someone who is infinity-obsessed in this generalized sense is obsessed with increasing the probability of infinite payoffs, completely heedless of finite considerations.[32] Although generalized infinity obsession is stronger than plain old infinity obsession, one can argue for the stronger claim from the weaker. Suppose that A is the prospect of getting the infinite payoff with probability p and nothing otherwise, and B is the prospect of getting the infinite payoff with probability q and x otherwise. First, it seems harmless to assume that, p being higher than q, A results in the infinite payoff whenever B does. (If one did resist this step, the conclusion would be only slightly weaker.) Now suppose that A and B both result in the infinite payoff. Then they turn out equally well; on that supposition, A and B are equally good. On the contrary supposition, B results in the finite payoff x, and A either results in the infinite payoff or in nothing. By infinity obsession, A is better than B, on this supposition. Therefore, by eventwise dominance, A is better than B.
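The coupling step ('A results in the infinite payoff whenever B does') can be pictured with a single uniform draw; the particular values of p, q, and x below are ours:

```python
import math

p, q, x = 0.010, 0.001, 10 ** 9    # p > q > 0, and a large finite payoff x

# One uniform draw r in [0, 1) settles both prospects, with B's infinite
# branch nested inside A's, so A pays off whenever B does.
def A(r):
    return math.inf if r < p else 0

def B(r):
    return math.inf if r < q else x

# Supposition 1: B yields the infinite payoff (r < q). Then so does A,
# and the two prospects turn out equally well.
assert all(A(r) == B(r) == math.inf for r in (0.0, q / 2))

# Supposition 2: B does not (r >= q). Then B yields x for sure, while A
# keeps a positive conditional chance of infinity, which infinity
# obsession ranks above any finite payoff for sure.
print((p - q) / (1 - q))   # A's conditional chance of infinity, ≈ 0.009
```

Eventwise dominance then stitches the two suppositions together: A is at least as good on the first and better on the second, so A is better overall.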
Second, here is a troubling kind of obsession that involves only finite payoffs. In section 4, we showed how to construct a 'generalized St Petersburg gamble' using recklessness and only finite payoffs. An argument very similar to the one for infinity obsession, but using eventwise dominance in place of outcome-outcome dominance, shows that reckless theories are likely to be 'St Petersburg-obsessed':

St Petersburg obsession: Any non-zero probability, no matter how small, of facing a generalized St Petersburg gamble, is better than any finite payoff for sure.
We'll omit the argument for the sake of space.[33] Of course, one can try to argue in similar ways for forms of obsession that apply in a wider range of cases, using similar or stronger dominance principles. The intention of our current arguments is simply to show that there is a range of troubling obsessions that a reckless theory will have difficulty avoiding in a plausible and principled way.

5.3
What are the implications of infinity obsession?
Finally, let us explain why we find infinity obsession and its kin so troubling, especially with regard to their practical implications.
We agree with the burgeoning literature on infinite ethics that various kinds of infinite payoffs are at least epistemically possible, given our current knowledge of cosmology (let alone theology). And it seems hard to claim that our actions have exactly zero effect on the probability of an infinite payoff. So it might seem that generalized infinity obsession must have drastically revisionary implications, urging us to divert all our resources into affecting this probability. Alternatively, perhaps we are thrust into a paralysing state of 'cluelessness': in many cases it just seems impossible to assess how our actions affect the probability of an infinite payoff, which is effectively the only thing that matters.[34]

We think these are serious practical worries. One way to mitigate them would be to appeal to what Bostrom calls an empirical stabilizing assumption: an empirical assumption which, if true, means that what's best conditional on achieving a finite payoff and what's best overall are broadly equivalent. Specifically, one might argue that whatever best promotes the long-term survival and flourishing of our descendants in the distant future will generally be what's best by either standard. It will do well with respect to finite payoffs because of the enormous (even if finite) number of potentially good lives in the future.[35] And it will do well overall because flourishing future civilisations will be relatively well-placed to achieve infinite payoffs, if that is possible at all. Indeed, it's unclear how we could realistically promote infinite payoffs without enabling our descendants in more general ways.[36] While there is much more to think about here, it does seem that, even if recklessness is true, there might be significant overlap between what's best with respect to finitary considerations and what's best overall, in our current situation.
However, even if some empirical stabilizing assumption meant that taking infinite payoffs into account would not greatly change our evaluation of the prospects open to us, it would significantly change the reasons behind these evaluations, and in strange ways. The main reason it would be good to prevent climate change might be that it would increase the chance that future people find some way of achieving an infinite amount of value, rather than the fact that lots of future generations would be better off in some much more likely, merely finite way. Prudentially, the main reason it would be good to avoid smoking might be that it increases your odds of living long enough that someone discovers a way to give you infinite value; 'lung cancer is an awful experience' would be a relatively minor consideration. Infinity obsession would remain revisionary in this way.
Finally, situations might arise in the distant future in which it is clear that no empirical stabilizing assumption would apply. In some of those cases, infinity obsession still seems deeply troubling. Here is a stylized illustration.

[Footnote 34: See Greaves (2016) for a discussion of cluelessness in a finite context. The situation is worse for infinity-obsessed agents, because often an inconceivably small change in probability will make a decisive difference. A related thought is that the epistemic probability of achieving an infinite payoff will often be 'imprecise'. Here a serious worry is that imprecision about the probability of an infinite payoff will tend to mean that almost any option is permissible; see Mogensen (2020) for a version of this worry for high-stakes finitary cases.]

[Footnote 35: Parfit (2011, chapter 36), Bostrom (2003, 2013) and Beckstead (2019) all argue for this conclusion.]
[Footnote 36: For a similar line of thought, more fully developed, see Williams (2013).]
Infinite research vs. utopia: Our descendants reach the limits of technological progress and become very confident, but not certain, that it's impossible to achieve an infinite payoff. (And, indeed, it is impossible.) They must decide how some vast amount of resources should be allocated between two projects: creating an extremely good (but only finite) utopia, or researching possible methods of achieving an infinite payoff.
It is likely that if these people were infinity-obsessed, they would spend nearly all of the resources on the infinite research; they would keep becoming more and more certain that their research would bear no fruit; and they would keep going at it as long as they didn't become completely certain that achieving an infinitely good outcome was impossible, which perhaps they never would.

RECKLESSNESS AND NORMATIVE UNCERTAINTY
The problems discussed in the last two sections provide reasons to accept timidity over recklessness. In this section, we present an argument that, even if timidity is true, rational agents should still be reckless. In a sense, this undermines the case for timidity: we must face up to the problems of recklessness no matter what. However, the argument depends on a particular (though popular) view about normative uncertainty, and the problems of recklessness can instead be seen as raising an objection to that view. More generally, they raise a challenge for constructing an adequate theory of normative uncertainty. Let us step back for a moment to make sure it is clear what we mean by normative uncertainty. In our context, the issue is: how should your preferences reflect your credences about which theory of value is true? Here is an example.
Suppose that you have some credence in utilitarianism and some credence in a pluralistic form of egalitarianism, as theories of value. Should you then prefer a more equal outcome to a less equal one when egalitarianism says that it is better but utilitarianism says that they are equally good?
For our purposes, one can understand the 'should' as a matter of coherence between your beliefs and your preferences. The question is whether you could coherently have the credences described while also (for example) being entirely indifferent about equality. A theory of normative uncertainty would answer this question and others like it.
Some philosophers have argued that we should treat uncertainty about normative or evaluative matters in essentially the same way that we should treat empirical uncertainty, and specifically that we should use expected utility theory in both cases.37 After all, the axiomatic basis of expected utility theory is prima facie plausible regardless of what type of uncertainty is at stake.
Here is the problem that arises from this view of normative uncertainty. Suppose that each of the theories Tᵢ in which you have some credence ranks simple prospects by expected utility, each one with respect to a different utility function uᵢ. Suppose your preferences also rank simple prospects by expected utility, with respect to yet another utility function u. Finally, suppose that your preferences over simple prospects satisfy the following Pareto assumption: if every theory Tᵢ in which you have some credence says that 𝐴 is at least as good as 𝐵, then you weakly prefer 𝐴 to 𝐵 (that is, you either strictly prefer 𝐴 to 𝐵 or you are indifferent between them).

37 For early examples, see Lockhart (2000), Ross (2006), and Sepielli (2010); see MacAskill et al. (2020) for a recent book-length treatment, and Riedener (2020) for a version of the axiomatic approach mentioned in the next sentence.
If, in addition, one of the Tᵢ says that 𝐴 is better than 𝐵, then you strictly prefer 𝐴 to 𝐵.
(So, in the preceding example, you do prefer the more equal outcome to the less equal one.) Given these three assumptions, we can now argue that if any of the Tᵢ are reckless, your preferences must be reckless as well.38 Indeed, according to Harsanyi's aggregation theorem, the three assumptions imply that u is a weighted sum of the uᵢ:39

u = c₁u₁ + c₂u₂ + ⋯ + cₙuₙ

for some positive real numbers cᵢ. Now, if any one of the Tᵢ is reckless, then the corresponding uᵢ is unbounded. It follows that u is also unbounded. And therefore your preferences are reckless.
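The unboundedness step can be spelled out. The following sketch (our notation; it assumes, as timid theories guarantee, that the remaining utility functions are bounded below, say by hypothetical constants mⱼ) fills in the argument:

```latex
% Harsanyi representation obtained from the three assumptions,
% with weights c_i > 0:
\[
  u \;=\; c_1 u_1 + c_2 u_2 + \cdots + c_n u_n .
\]
% Suppose $u_k$ is unbounded above (theory $T_k$ is reckless), and each
% other $u_j$ is bounded below, say $u_j \ge m_j$. Then for any outcome $x$:
\[
  u(x) \;=\; c_k u_k(x) + \sum_{j \ne k} c_j u_j(x)
       \;\ge\; c_k u_k(x) + \sum_{j \ne k} c_j m_j .
\]
% The right-hand side grows without bound as $u_k(x)$ does, so $u$ is
% unbounded above, and the preferences it represents are reckless.
```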
Despite the drawbacks of recklessness, it is hard to deny that we should have some credence in reckless theories of value, given the challenges faced by the alternatives, and given that, on many standard theories, the value of finite payoffs is unbounded. It is therefore hard to deny that, if empirical and normative uncertainty are both governed by expected utility theory, then one's preferences should be reckless. Moreover, since advocates of expected utility theory typically endorse outcome-outcome dominance and eventwise dominance as core commitments, we seem bound to fall into generalized infinity obsession and St Petersburg obsession as well.
As we mentioned, one could reasonably see this conclusion as an objection to the use of expected utility theory in this context. After all, some quite different approaches to normative uncertainty will avoid the problem altogether. Gustafsson and Torpman (2014) defend 'My Favourite Theory', which in this context would be the view that your preferences should match the judgments of the evaluative theory in which you have highest credence; they reject both expected utility theory and the Pareto condition. And normative externalists, like Weatherson (2019), might claim that there simply aren't interesting norms in this area (let alone ones that would require reckless preferences). So our argument could be taken to lend support to such alternative approaches, or simply as raising an important problem for those attracted to expected utility theory and similar views. This problem is closely related to the common claim that one's preferences will tend to be determined by theories that see the options as involving high stakes; see e.g. Ross (2006), Greaves and Ord (2017), and MacAskill et al. (2020, ch. 6) for discussions. Our argument focuses on the specific phenomenon of recklessness, and relies on an unusually minimal view about normative uncertainty: just the axioms of expected utility theory and the Pareto assumption. In particular, we have not had to say anything about intertheoretic comparisons, a deep problem for normative uncertainty that goes back at least to Hudson (1989).

CONCLUSION
In summary, as far as the evaluation of prospects goes, we must be willing to pass up an arbitrarily large increase in the size of a payoff to prevent a tiny decrease in the probability of obtaining it (timidity); be willing to give up any sure thing, no matter how good, for an arbitrarily tiny chance of enormous gain (recklessness); or be willing to rank prospects in a non-transitive way. All options seem deeply unpalatable, so we are left with a paradox. Of course, as we argued in section 6, the paradox may effectively resolve itself: on some views about normative uncertainty, we are bound to have reckless preferences, even if our credence in reckless theories is vanishingly small. But this result is unpalatable in itself.
If we accept timidity, we must think that sometimes even a small reduction in an unlikely payoff cannot be outweighed by any finite increase in some payoff that is many times more likely. We must worry about payoffs we cannot affect, including, if these are not somehow discounted, payoffs realized in the distant reaches of space and time. We are likely to endorse extreme risk aversion in some cases and, when it comes to negative timidity, extreme risk seeking in others. And, for all that, it is not clear we can avoid favouring objectionably long-shot bets.
If we accept recklessness, then we must deny some compelling dominance principles, like prospect-outcome dominance. And we are likely to be infinity-obsessed, pursuing any chance of an infinite payoff, or even any chance of a generalized St Petersburg gamble, at any finite expense.
Some may see all this as another argument for non-transitivity. We are not inclined toward this resolution, but we think that increasing one's confidence in this position is a fair reaction to the arguments, given that new challenges have been presented for other approaches, but not for this one.

ACKNOWLEDGMENTS
This paper was originally drafted by the first author based on his dissertation (Beckstead, 2013, chapter 6). The second author is responsible for most of the revisions and ideas newly appearing in this version. We are grateful to many people for feedback and assistance, including Amanda Askell, Andreas Mogensen, Christian Tarsney, David Thorstad, Elliott Thornley, Hayden Wilkinson, Hilary Greaves, Jacob Barrett, Larry Temkin, Petra Kosonen, Philipp Schoenegger, Stefan Riedener, Theron Pummer, Tomi Francis, Will MacAskill, and several anonymous referees.