DISCUSSION

# Unsharp Sharpness

Article first published online: 16 JUL 2013

DOI: 10.1111/theo.12025

© 2013 Stiftelsen Theoria


#### How to Cite

Sahlin, N.-E. and Weirich, P. (2014), Unsharp Sharpness. Theoria, 80: 100–103. doi: 10.1111/theo.12025

#### Publication History

- Issue published online: 12 JAN 2014

### Keywords

- imprecise probabilities
- sequential choice
- rational choice

### Abstract

In a recent, thought-provoking paper Adam Elga (2010) argues against unsharp – e.g., indeterminate, fuzzy and unreliable – probabilities. Rationality demands sharpness, he contends, and this means that decision theories like Levi's (1980, 1988), Gärdenfors and Sahlin's (1982), and Kyburg's (1983), though they employ different decision rules, face a common, and serious, problem. This article defends the rule to maximize minimum expected utility against Elga's objection.

In a recent, thought-provoking paper Adam Elga (2010) argues against unsharp – e.g., indeterminate, fuzzy and unreliable – probabilities. Rationality demands sharpness, he contends, and this means that decision theories like Levi's (1980, 1988), Gärdenfors and Sahlin's (1982), and Kyburg's (1983), though they employ different decision rules, face a common, and serious, problem.

Elga constructs a sequential decision problem involving two bets. *Bet A*: If H is true you lose $10. If H is false you win $15. *Bet B*: If H is true you win $15. If H is false you lose $10. H can be any humdrum proposition – say the claim, to recycle Savage's (1972, p. 27) example, that the next president of the United States will be a Democrat.

The decision maker has to make a sequence of decisions. First, she can accept or reject A. After making that decision she has to make a second choice: accept or reject B. It takes but a few seconds to see that if both bets are accepted the player is guaranteed a win of $5.

Assume, however, that the decision maker is dealing in unsharp probabilities, and that her degree of belief that H can be represented by a convex set of probability functions and thus pictured as a probability interval. Given the knowledge and beliefs she has, let us say the probability of H falls in the closed interval [0.2, 0.8]. And assume utilities mirror dollars.

The expected utilities of bet A will now range between –$5 and $10. How do we make decisions with expected utility intervals? Maximizing expected utility (mEU) will not help in cases like this; we cannot maximize over intervals or a set of probability measures. Unsharp probabilities require a different type of decision rule. For what are ultimately idiosyncratic reasons, we choose the following simple rule that Gärdenfors and Sahlin (1982) present: *M*aximizing *m*inimal *E*xpected *U*tility (MmEU). Using this rule, or any other reasonable rule, to resolve Elga's problem defeats his argument against imprecise probabilities. For his argument contends that no reasonable decision rule using imprecise probabilities handles the problem.

In a binary choice situation where one is deciding whether to accept or reject bet A, MmEU recommends A's rejection. That rejection has a maximal minimal expected utility of nothing (i.e., the status quo), and this is clearly better than a maximal minimal expected utility of –$5. However, because B has the same expected utility interval as A, in a binary choice situation where one is pondering whether to accept or reject bet B, MmEU similarly counsels B's rejection. And this, says Elga, shows that unsharp probabilities yield irrational decisions. Rejecting both gambles means one gets nothing, whereas accepting both A and B delivers a certain win of $5.

An agent deciding simultaneously about A and about B has the option of accepting both gambles and the option of rejecting both gambles because a combination of compatible options at a time is also an option at the time. In a synchronic version of Elga's problem, MmEU prohibits rejecting both gambles because the minimum expected utility of accepting both gambles is greater than the minimum expected utility of rejecting both gambles. However, because Elga's problem is diachronic, MmEU must settle separately a decision about each gamble. It cannot prohibit rejecting both gambles by treating a pair of options at different times as a single option. This article shows how MmEU applied separately to each decision manages to prohibit rejecting both gambles.

Let us write the decision problem as a decision tree, as in Figure 1. And let us also assume that A is accepted. The decision maker can now either accept or reject B. If she accepts B she receives $5 for sure. If she rejects B she plays bet A (which she initially accepted). The mEU of A is –$5. A guaranteed $5 is better than a mEU of –$5. Maximizing minimal expected utility she therefore chooses the strategy accept–accept rather than accept–reject. If, on the other hand, she rejects A, then also rejecting B gives her nothing, while accepting B has a mEU of –$5. Therefore here the strategy reject–reject is preferable to reject–accept. Putting things together as they should be put together, we can now see that accept–accept is better than reject–reject. And that is what MmEU recommends. MmEU prescribes a perfectly rational choice – in fact, the choice that Elga wants us to take. And this remains true regardless of what state of epistemic uncertainty the decision maker is in.

Elga claims that a rational ideal agent will not reject both gambles. An agent with shortcomings may rationally reject both gambles if the shortcomings furnish a good excuse for that behaviour. Elga's argument against imprecise probabilities maintains that no reasonable decision rule using such probabilities prohibits an ideal agent's rejecting both gambles. To defeat Elga's argument, MmEU need not prohibit rejecting both gambles if the agent does not know the decision rule she follows, or if the agent is a couple in which each member decides about one gamble but does not know the decision rule that the other member follows.

We do not say that choices with unsharp probabilities are unproblematic, or that the types of generalized Bayesian decision theories under attack are flawless. On the contrary, introducing unsharp probabilities and generalizing the traditional theories of Savage (1972) and Ramsey (1990) means giving up one or other classical axiom. Seidenfeld (1988) has shown that generalized theories giving up the independence axiom can make counterintuitive recommendations in sequential choice situations. And we know that giving up ordering assumptions can also lead to unwanted recommendations. Explanatory power, or rational resilience, comes at a cost. Elga's problem does not add to what we already knew.

If we want to be classical agents, we have to stick to the classical axioms (Sahlin, 2012) and make sure our probabilities are sharp. But the simple fact is that in many cases the information and knowledge we possess do not allow us to assign sharp probabilities. Pretending that the information we have is better than it is, rather than honestly taking the epistemic uncertainty into account, is not very rational; nor is it a Socratic decision strategy.

### References

- Elga, A. (2010) “Subjective Probabilities Should Be Sharp.” Philosophers' Imprint 10: 1–11.
- Gärdenfors, P. and Sahlin, N.-E. (1982) “Unreliable Probabilities, Risk Taking, and Decision Making.” Synthese 53: 361–386. Repr. in P. Gärdenfors and N.-E. Sahlin (eds), Decision, Probability, and Utility: Selected Readings (1988), pp. 313–334. Cambridge: Cambridge University Press.
- Kyburg, H. E. (1983) “Rational Belief.” Behavioral and Brain Sciences 6: 231–273.
- Levi, I. (1980) The Enterprise of Knowledge. Cambridge, MA: MIT Press.
- Levi, I. (1988) “On Indeterminate Probabilities.” In P. Gärdenfors and N.-E. Sahlin (eds), Decision, Probability, and Utility: Selected Readings, pp. 286–312. Cambridge: Cambridge University Press.
- Ramsey, F. P. (1990) “Truth and Probability.” In D. Mellor (ed.), Philosophical Papers: F. P. Ramsey, pp. 52–94. Cambridge: Cambridge University Press.
- Sahlin, N.-E. (2012) “Unreliable Probabilities, Paradoxes, and Epistemic Risks.” In S. Roeser, R. Hillerbrand, P. Sandin and M. Peterson (eds), Handbook of Risk Theory, pp. 477–498. Dordrecht: Springer.
- Savage, L. J. (1972) The Foundations of Statistics (2nd edn). New York: Dover.
- Seidenfeld, T. (1988) “Decision Theory without ‘Independence’ or without ‘Ordering’: What Is the Difference?” Economics and Philosophy 4: 267–315.