This paper studies the introduction of electronic voting technology in Brazilian elections. Estimates exploiting a regression discontinuity design indicate that electronic voting reduced residual (error-ridden and uncounted) votes and promoted a large de facto enfranchisement of mainly less educated citizens. Estimates exploiting the unique pattern of the technology's phase-in across states over time suggest that, as predicted by political economy models, it shifted government spending toward health care, which is particularly beneficial to the poor. Positive effects on both the utilization of health services (prenatal visits) and newborn health (a lower incidence of low-weight births) are also found for less educated mothers, but not for the more educated.
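
The causal design here is a sharp regression discontinuity. As a minimal illustration of the mechanics, the sketch below compares local-linear intercepts on the two sides of a cutoff; the variable names (`running`, `outcome`), the bandwidth choice, and the uniform kernel are hypothetical scaffolding, not the paper's actual specification or data.

```python
import numpy as np

def rdd_estimate(running, outcome, cutoff, bandwidth):
    """Sharp RD: difference of local-linear intercepts at the cutoff.

    All names are illustrative; the paper's specification differs.
    """
    x = running - cutoff
    keep = np.abs(x) <= bandwidth       # uniform kernel within the bandwidth
    x, y = x[keep], outcome[keep]

    def local_intercept(side):
        # OLS of y on (1, x) using observations on one side of the cutoff;
        # the intercept is the fitted value at the cutoff (x = 0).
        X = np.column_stack([np.ones(side.sum()), x[side]])
        beta, *_ = np.linalg.lstsq(X, y[side], rcond=None)
        return beta[0]

    # treated side minus control side, e.g., municipalities above a
    # registration threshold that received the voting technology
    return local_intercept(x >= 0) - local_intercept(x < 0)
```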

Is African politics characterized by power concentrated in the hands of a narrow, ethnically determined group, with control swinging from one extreme to another via frequent coups? Employing data on the ethnicity of cabinet ministers since independence, we show that African ruling coalitions are surprisingly large and that political power is allocated proportionally to population shares across ethnic groups. This holds true even when the analysis is restricted to the subsample of the most powerful ministerial posts. We argue that the likelihood of revolutions from outsiders and coup threats from insiders are major forces explaining allocations within these regimes. We also explore alternative allocation mechanisms and perform counterfactual experiments that shed light on the role of Western policies in shaping African national coalitions and leadership group premia.

We examine the link between the threat of violence and democratization in the context of the Great Reform Act passed by the British Parliament in 1832. We geo-reference the so-called Swing riots, which occurred between the 1830 and 1831 parliamentary elections, and compute the number of these riots that happened within a 10 km radius of each of the 244 English constituencies. Our empirical analysis relates this constituency-specific measure of the threat perceptions held by the 344,000 voters in the Unreformed Parliament to the share of seats won in each constituency by pro-reform politicians in 1831. We find that voters who experienced the violence of the Swing riots first-hand were induced to vote for pro-reform politicians.
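
The exposure measure described above is a geospatial count. A sketch of how such a count could be computed from coordinates is below; the `(lat, lon)` layouts and function names are assumptions for illustration, not the authors' code or data.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def riots_within_radius(constituency, riot_coords, radius_km=10.0):
    """Count riot locations within `radius_km` of a constituency point.

    `constituency` is a (lat, lon) pair; `riot_coords` is an (n, 2) array
    of riot locations. Both layouts are hypothetical.
    """
    d = haversine_km(constituency[0], constituency[1],
                     riot_coords[:, 0], riot_coords[:, 1])
    return int((d <= radius_km).sum())
```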

We formalize the Keynesian insight that aggregate demand driven by sentiments can generate output fluctuations under rational expectations. When production decisions must be made under imperfect information about demand, optimal decisions based on sentiments can generate stochastic self-fulfilling rational expectations equilibria in standard economies without persistent informational frictions, externalities, nonconvexities, or strategic complementarities in production. The models we consider are deliberately simple, but could serve as benchmarks for more complicated equilibrium models with additional features.

We study social dilemmas in (quasi-) continuous-time experiments, comparing games with different durations and termination rules. We discover a stark qualitative contrast between behavior in continuous time and previously studied behavior in discrete-time games: cooperation is easier to achieve and sustain with deterministic horizons than with stochastic ones, and end-game effects emerge, but subjects postpone them with experience. Analysis of individual strategies provides a basis for a simple reinforcement learning model that proves to be consistent with this evidence. An additional treatment lends further support to this explanation.
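
The abstract does not spell out the reinforcement learning model, so the sketch below is a generic Roth–Erev-style propensity updater of the kind commonly fitted to repeated-game data; it is offered only to make the mechanism concrete (action payoffs are assumed nonnegative), not as the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def choose(propensities):
    """Pick an action with probability proportional to its propensity."""
    p = propensities / propensities.sum()
    return rng.choice(len(propensities), p=p)

def update(propensities, action, payoff, forgetting=0.1):
    """Roth–Erev-style update: decay all propensities, then reinforce
    the chosen action by its realized (nonnegative) payoff."""
    propensities = (1 - forgetting) * propensities
    propensities[action] += payoff
    return propensities

# two actions: 0 = defect, 1 = cooperate; start from flat propensities
propensities = np.ones(2)
```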

We consider empirical measurement of equivalent variation (EV) and compensating variation (CV) resulting from a price change of a discrete good, using individual-level data, when there is unobserved heterogeneity in preferences. We show that for binary and unordered multinomial choice, the marginal distributions of EV and CV can be expressed as simple closed-form functionals of conditional choice probabilities under essentially unrestricted preference distributions. These results hold even when the distribution and dimension of unobserved heterogeneity are neither known nor identified, and utilities are neither quasilinear nor parametrically specified. The welfare distributions take simple forms that are easy to compute in applications. In particular, average EV for a price rise equals the change in average Marshallian consumer surplus and is smaller than average CV for a normal good. These nonparametric point-identification results fail for ordered choice if the unit price is identical for all alternatives, thereby providing a connection to Hausman–Newey's (2014) partial identification results for the limiting case of continuous choice.
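
For the binary-choice case, the closed-form claim about averages can be written compactly. With $\bar q(p)$ denoting the average probability of purchasing the good at price $p$ (notation assumed here, not in the original), a price rise from $p_0$ to $p_1$ yields

$$\overline{EV} \;=\; \Delta\overline{CS} \;=\; -\int_{p_0}^{p_1} \bar q(p)\,dp,$$

so the average equivalent variation can be recovered from the observable choice-probability curve alone.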

We characterize a generalization of discounted logistic choice that incorporates a parameter to capture different views the agent might have about the costs and benefits of larger choice sets. The discounted logit model used in the empirical literature is the special case that displays a “preference for flexibility” in the sense that the agent always prefers to add additional items to a menu. Other cases display varying levels of “choice aversion,” where the agent prefers to remove items from a menu if their ex ante value is below a threshold. We show that higher choice aversion, as measured by dislike of bigger menus, also corresponds to an increased preference for putting off decisions as late as possible.
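
For reference, the special case mentioned above has an explicit form. In standard discounted logit the ex ante value of a menu $A$ is the log-sum-exp inclusive value (notation assumed here),

$$V(A) \;=\; \frac{1}{\lambda}\,\log \sum_{a\in A} e^{\lambda u(a)},$$

which is weakly increasing under set inclusion: adding any item to a menu never lowers its value. This is the “preference for flexibility” that the paper's choice-aversion parameter relaxes; the general representation itself is not reproduced here.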

Many violations of the independence axiom of expected utility can be traced to subjects' attraction to risk-free prospects. The key axiom in this paper, negative certainty independence (Dillenberger, 2010), formalizes this tendency. Our main result is a utility representation of all preferences over monetary lotteries that satisfy negative certainty independence together with basic rationality postulates. Such preferences can be represented as if the agent were unsure of how to evaluate a given lottery *p*; instead, she has in mind a set of possible utility functions over outcomes and displays cautious behavior: she computes the certainty equivalent of *p* with respect to each possible function in the set and picks the smallest one. The set of utilities is unique in a well-defined sense. We show that our representation can also be derived from a “cautious” completion of an incomplete preference relation.
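
The verbal description above corresponds to the compact functional form

$$V(p) \;=\; \min_{u \in \mathcal{U}} \, u^{-1}\!\big(\mathbb{E}_p[u]\big),$$

where $\mathcal{U}$ is the set of possible utility functions (notation assumed here) and $u^{-1}(\mathbb{E}_p[u])$ is the certainty equivalent of the lottery *p* under $u$; the agent's value of *p* is the smallest of these certainty equivalents.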

This paper presents a new method for the analysis of moral hazard principal–agent problems. The new approach avoids the stringent assumptions on the distribution of outcomes made by the classical first-order approach and instead only requires the agent's expected utility to be a rational function of the action. This assumption allows for a reformulation of the agent's utility maximization problem as an equivalent system of equations and inequalities. This reformulation in turn transforms the principal's utility maximization problem into a nonlinear program. Under the additional assumptions that the principal's expected utility is a polynomial and the agent's expected utility is rational in the wage, the final nonlinear program can be solved to global optimality. The paper also shows how to first approximate expected utility functions that are not rational by polynomials, so that the polynomial optimization approach can be applied to compute an approximate solution to nonpolynomial problems. Finally, the paper demonstrates that the polynomial optimization approach extends to principal–agent models with multidimensional action sets.
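
A stylized one-dimensional version of this reformulation may help fix ideas (notation assumed here; the paper's system is more general). If the agent's expected utility is rational in the action, $U(a) = P(a)/Q(a)$ with polynomials $P, Q$ and $Q > 0$ on the action interval $[\underline{a}, \bar{a}]$, then an interior optimal action $a^{\ast}$ satisfies the polynomial equation

$$P'(a^{\ast})\,Q(a^{\ast}) - P(a^{\ast})\,Q'(a^{\ast}) = 0,$$

supplemented by polynomial inequalities that rule out better interior and boundary candidates. Incentive compatibility thus enters the principal's problem as algebraic constraints rather than as an embedded optimization.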

This paper considers nonstandard hypothesis testing problems that involve a nuisance parameter. We establish an upper bound on the weighted average power of all valid tests, and develop a numerical algorithm that determines a feasible test with power close to the bound. The approach is illustrated in six applications: inference about a linear regression coefficient when the sign of a control coefficient is known; small sample inference about the difference in means from two independent Gaussian samples from populations with potentially different variances; inference about the break date in structural break models with moderate break magnitude; predictability tests when the regressor is highly persistent; inference about an interval identified parameter; and inference about a linear regression coefficient when the necessity of a control is in doubt.
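
The logic of the bound can be sketched in one line (notation assumed here). Let $F$ be a weighting over alternatives and $\Lambda$ any distribution over the null parameter space. A test $\varphi$ that is valid under every null value is, in particular, of level $\alpha$ under the mixture null $\int f_{\theta_0}\,d\Lambda(\theta_0)$, so by the Neyman–Pearson lemma

$$\int E_{\theta}[\varphi]\,dF(\theta) \;\le\; \text{power of the level-}\alpha\text{ NP test of } \int f_{\theta_0}\,d\Lambda \text{ against } \int f_{\theta}\,dF.$$

The numerical algorithm can then be viewed as searching for a $\Lambda$ that makes this bound tight while delivering a feasible test with power close to it.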

It is well known that the finite-sample properties of tests of hypotheses on the co-integrating vectors in vector autoregressive models can be quite poor, and that current solutions based on Bartlett-type corrections or on bootstraps employing unrestricted parameter estimators are unsatisfactory, particularly in those cases where the asymptotic *χ*^{2} tests also fail most severely. In this paper, we solve this inference problem by showing the novel result that a bootstrap test where the null hypothesis is imposed on the bootstrap sample is asymptotically valid. That is, not only does it have asymptotically correct size, but, in contrast to what is claimed in the existing literature, it is consistent under the alternative. Compared to the theory for bootstrap tests on the co-integration rank (Cavaliere, Rahbek, and Taylor, 2012), establishing the validity of the bootstrap in the framework of hypotheses on the co-integrating vectors requires new theoretical developments, including the introduction of multivariate Ornstein–Uhlenbeck processes with random (reduced rank) drift parameters. Finally, as documented by Monte Carlo simulations, the bootstrap test outperforms existing methods.
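
The paper's key idea, generating bootstrap samples with the null imposed, can be illustrated with a generic restricted-bootstrap loop. The callables below (`fit_restricted`, `fit_unrestricted`, `lr_stat`, `simulate_from`) are hypothetical placeholders; the authors' actual algorithm for cointegrated VARs involves model-specific resampling and validity checks not shown here.

```python
import numpy as np

def bootstrap_pvalue(data, fit_restricted, fit_unrestricted,
                     lr_stat, simulate_from, n_boot=999, seed=0):
    """Generic restricted bootstrap for a likelihood-ratio test.

    The null hypothesis is imposed when generating bootstrap samples:
    each pseudo-sample is simulated from the null-restricted estimates.
    All callables are hypothetical placeholders.
    """
    rng = np.random.default_rng(seed)
    restricted = fit_restricted(data)            # estimates with H0 imposed
    stat = lr_stat(restricted, fit_unrestricted(data))

    exceed = 0
    for _ in range(n_boot):
        boot = simulate_from(restricted, rng)    # sample generated under H0
        exceed += lr_stat(fit_restricted(boot), fit_unrestricted(boot)) >= stat
    return (1 + exceed) / (1 + n_boot)           # bootstrap p-value
```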