The climate science literature makes it clear that there are thresholds of greenhouse gas emissions, atmospheric concentrations, and global temperatures that should not be crossed, even if some residual uncertainty remains about the precise concentrations or temperatures at which these thresholds occur. The consequences of exceeding these thresholds would be severe, far-reaching, and potentially irreversible. This scientific analysis should be, and in practice typically is, a major input into climate policy choices. Yet it is not the only input that affects those choices.
Policymakers' decisions very often take into account both direct scientific predictions and the indirect predictions of climate-economics models, which provide economic interpretations of climate science. We have reviewed 30 climate-economics models, all of which have contributed to the integrated assessment model (IAM) literature within the last ten years.1 These models fall into five broad categories, with some overlap: welfare optimization, general equilibrium, partial equilibrium, simulation, and cost minimization (see Table 1).
Each of these structures has its own strengths and weaknesses, and each provides a different perspective on the decisions necessary for setting climate and development policy. Welfare optimization models maximize social welfare across all time periods by choosing how much emissions to abate in each time period, where abatement costs reduce economic production. General equilibrium models represent the economy as a set of linked economic sectors (labour, capital, energy, etc.); these models are solved by finding a set of prices that simultaneously satisfies demand and supply in every sector. Partial equilibrium models use a subset of the general equilibrium apparatus, focusing on a smaller number of economic sectors and holding prices in other sectors constant. Simulation models are based on off-line predictions about future emissions and climate conditions; a predetermined set of emissions values by period dictates the amount of carbon that can be used in production, and model output includes the cost of abatement and the cost of damages. Cost minimization models identify the least-cost means of reaching a predetermined goal, such as an emissions or concentration target.
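In schematic form (the notation here is ours, not that of any particular model), a welfare optimization model solves a problem of the type:

\[
\max_{\{\mu_t\}} \sum_{t=0}^{T} \frac{U(c_t)}{(1+\rho)^{t}}
\qquad \text{subject to} \qquad
c_t = \bigl(1 - \Lambda(\mu_t)\bigr)\bigl(1 - D(T_t)\bigr)\,y_t ,
\]

where $\mu_t$ is the abatement rate chosen in period $t$, $\Lambda(\mu_t)$ is the share of output spent on abatement, $D(T_t)$ is the share of output lost to climate damages at temperature $T_t$, $y_t$ is gross output, $U(\cdot)$ is the utility function, and $\rho$ is the rate of pure time preference.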
As a body of literature, these climate-economics models suffer from some important limitations that impede their ability to offer accurate and impartial information to the climate policy debate. The first set of limitations concerns a disconnect between the conclusions of climate science and the conclusions of climate economics: many scientists view climate change as an imminent threat requiring immediate, large-scale action, while many economists favour starting slowly and engaging in careful cost calculations in order to avoid doing too much about the problem. The second set of limitations involves assumptions made by climate-economics modellers regarding the shape and scale of future damages; the interactions among climate mitigation, employment, and trade; and the importance of future generations' well-being.
2.1. The scientific literature and uncertainty
IAMs frequently rely on a damage function that estimates the monetary value of global damages at varying temperature levels. These functions are typically calibrated to low estimates of damages at moderate temperature increases; for instance, DICE assumes that less than 2% of world output is lost to climate damages at a temperature increase of 2.5°C above 1900 levels (Nordhaus, 2008). In contrast to the findings of many IAMs, it is increasingly accepted by climate scientists that there are critical thresholds at which climate change may trigger abrupt, irreversible, large-scale damages. Unfortunately, there is no firm estimate of the temperatures or greenhouse gas concentrations at which these discontinuous events will occur. The four Intergovernmental Panel on Climate Change (IPCC) assessment reports to date have grown steadily more ominous in their discussion of the risks of abrupt climate change. The 2007 report (IPCC, 2007, Chapter 19) projected that:
- agricultural productivity in low latitudes, especially in Africa, will drop sharply with 2° of warming or less (measuring temperatures in degrees Celsius above 1980–1999 levels);
- agricultural productivity and economic output will drop everywhere above 3°;
- extinction of species will become significant by 2° of warming, especially for coral reefs and arctic animals, and will become widespread by 4°;
- the threshold for eventual loss of the entire Greenland ice sheet, ultimately causing 7 m of sea-level rise, is a sustained temperature increase of roughly 2–4.5°;
- dangerous climate discontinuities, such as disruption of the North Atlantic meridional overturning circulation or the El Niño-Southern Oscillation (ENSO), become more likely as greenhouse gas concentrations increase, but the thresholds cannot yet be estimated;
- regional catastrophes, such as increased intensity of storms and floods, and loss of fresh water from glacial snowmelt, occur at regionally varying temperatures and become steadily worse as temperatures rise.
The Stern Review, based on roughly the same information base (i.e., research available through 2006), identified two key global turning points. At 2–3°C, rates of extinction rise, crop yields decline in developing countries, some tropical forests become unsustainable, and irreversible melting of the Greenland ice sheet is triggered. At 4–5°C, risks increase significantly, including a decline in global food production (Stern, 2006). Based on a comparison of these impact thresholds with the costs of mitigation, Stern recommended a global target of remaining under 450–550 ppm CO2-equivalent (CO2-e); anything lower, he suggested, would be prohibitively expensive. The higher limit implies a 24% chance of exceeding a temperature increase of 4°C and a 7% chance of more than 5°; the lower limit still allows a 3% chance of hitting 4° and a 1% chance of 5°. Lower temperature thresholds are much more likely to be breached: at 450 ppm there is a 78% chance of hitting 2° and an 18% chance of 3°; at 550 ppm there is a 99% chance of at least 2° and a 69% chance of 3° (Stern, 2006, Box 8.1, p. 195).2
The warnings from climate scientists, meanwhile, continue to grow more ominous. IPCC's 2007 report projected only modest sea-level rise, likely to be less than 1 m by 2100 – but that projection excluded the uncertain (but non-zero) contribution of ice-sheet melting. Detailed research by Stefan Rahmstorf, published just after the IPCC deadline for the 2007 assessment, adjusts for estimated ice-sheet melting and suggests sea-level rise of almost double the IPCC estimates (Rahmstorf, 2007).
Most recently, a team of ten climate scientists led by James Hansen has published an analysis of paleoclimatic data, arguing that the equilibrium response to increased greenhouse gas concentrations is about twice as great as commonly believed; that is, the long-run climate sensitivity (defined as the eventual temperature increase in °C per doubling of atmospheric CO2) is 6, not 3 as both IPCC (2007) and Stern assumed. Hansen et al. project that a long-term CO2 concentration of 450 ppm or greater would lead to an ice-free Earth and many metres of sea-level rise; they advocate a target of 350 ppm CO2, lower than today's 385 ppm, in order to stabilize ice sheets and major river flows, and reduce climate-caused extinctions (Hansen et al., 2008).
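The stakes in this disagreement can be seen from the standard logarithmic approximation relating concentrations to equilibrium warming (a pre-industrial concentration of 280 ppm is assumed here for the purpose of illustration):

\[
\Delta T = S \cdot \log_{2}\!\left(\frac{C}{C_0}\right):
\qquad
\text{at } C = 450 \text{ ppm}, \quad
\Delta T \approx 2.1^{\circ}\mathrm{C} \text{ if } S = 3,
\quad \text{but} \quad
\Delta T \approx 4.1^{\circ}\mathrm{C} \text{ if } S = 6 .
\]

In other words, doubling the climate sensitivity roughly doubles the eventual warming associated with any given stabilization target.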
The climate science literature is grounded in the understanding that real and important uncertainties about climate outcomes cannot be well represented by an average or most likely result: the 1% chance of reaching a 5°C temperature change at 450–550 ppm CO2-e has clear and direct relevance to policymaking. Climate-economics models inevitably rely on forecasts of future climate outcomes and the resulting economic damages, under conditions that are outside the range of human experience. Yet inescapable scientific uncertainties, such as the value of the climate sensitivity parameter, are commonly represented in IAMs by an average or best-guess value. Climate science cannot rule out low-probability, enormous-cost climate outcomes, but climate economics tends to focus on the milder, most likely outcomes.3
Even those IAMs that employ probability distributions to represent uncertain parameters may underestimate the worst-case risks. Climate research can offer only a limited number of empirical observations relevant to the estimation of key parameters. As a result, the probability distributions used in some IAMs often under-represent what Martin Weitzman (2007) has called the fat tails of the distribution – meaning that extreme outcomes are much more likely than a normal distribution would imply. According to Weitzman, IPCC (2007) data imply that, at an atmospheric concentration of 550 ppm CO2-e, a 6°C temperature increase lies at the 98th percentile of the distribution – that is, there is roughly a 2% chance of warming of 6°C or more (Weitzman, 2007, p. 716).4
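The practical import of fat tails is easy to illustrate. In the sketch below – a minimal Monte Carlo comparison in which the distributions and all parameter values are chosen purely for illustration, not calibrated to IPCC data – a thin-tailed normal distribution and a fat-tailed lognormal distribution share the same median climate sensitivity of 3°C, yet assign very different probabilities to extreme warming:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Two distributions of climate sensitivity (°C per doubling of CO2),
# both with a median of 3 °C; parameters are purely illustrative.
normal_draws = rng.normal(loc=3.0, scale=1.5, size=n)
lognormal_draws = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)

for name, draws in (("normal", normal_draws), ("lognormal", lognormal_draws)):
    print(f"{name:>9}: P(sensitivity > 6 °C) ≈ {np.mean(draws > 6.0):.3f}")
# The fat-tailed lognormal assigns roughly four times the probability
# (about 8%) to sensitivities above 6 °C that the normal does (about 2%).
```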
2.2. Questionable assumptions in climate economics
Climate policy, both in practice and in the analytical literature, often conflates several of the questions that climate-economics models attempt to answer: the need for global emissions reductions, for abatement measures in any one country, and for financial investments in abatement and adaptation – essentially, the when, where, and by how much of emissions abatement.
Welfare optimization models offer the most direct answers to the question: what emissions reductions are best for humanity? Other types of IAMs answer this question more obliquely: their results do not offer a policy recommendation but can be used to compare scenarios with better or worse emissions profiles and climate outcomes. Regardless of model type, however, there are several complex steps between a projection of future emissions and a recommendation of the best course of action. To provide counsel to policymakers, modellers must make many assumptions about what constitutes human well-being and about the scale of the threat that climate change poses to it.
2.2.1. Projecting future damages
In many climate-economics models, emissions scenarios are used to project the likely scale of economic damages and losses due to climate change. When infrastructure is destroyed and productivity is interrupted, the effect is slower projected economic growth and lower future output. Economic output in each time period in turn drives projected emissions, closing the loop; efforts to reduce emissions and to adapt to the worst impacts of climate change are themselves costly.
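The structure just described can be sketched in a few lines of code. In the following toy loop, every coefficient is invented for illustration and none comes from any actual model; the point is only to show how output drives emissions, emissions drive warming, and warming-induced damages feed back to slow the growth of output:

```python
# Toy output–emissions–temperature–damages feedback loop.
output = 100.0   # gross world product (index)
temp = 0.9       # warming to date (°C above pre-industrial)
for year in range(2010, 2110, 10):
    emissions = 0.5 * output                 # emissions scale with output
    temp += 0.01 * emissions                 # toy climate response per decade
    damages = 0.018 * (temp / 2.5) ** 2      # quadratic damage share of output
    output *= (1.025 ** 10) * (1 - damages)  # decade of growth, net of damages
    print(year, round(temp, 2), round(100 * damages, 2), round(output, 1))
```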
In many IAMs, damages are assumed to rise in proportion to a power of the change in temperature – typically, damages are assumed to be a quadratic function of temperature. Our review of the climate economics literature has revealed no empirical or theoretical basis justifying the widely used quadratic damage function.5 The Stern Review reports the results of the PAGE2002 model (Hope, 2006), which uses an uncertain (Monte Carlo) parameter to represent the damage exponent, with minimum, most likely, and maximum values of 1.0, 1.3, and 3.0, respectively. Sensitivity analyses on PAGE2002 show that assuming damages are a cubic function of temperature – that is, fixing the exponent at 3 – increases annual damages by a remarkable 23% of world output (Dietz et al., 2007). The (equally arbitrary) choice of a cubic rather than a quadratic damage function would thus have a large effect on IAM results and their policy implications.
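The sensitivity to the exponent is a matter of simple arithmetic. If the damage function is calibrated to the same loss $d_0$ at a reference temperature $T_0$ (say, the 2.5°C calibration point noted above), then raising the exponent $\gamma$ from 2 to 3 scales damages at any other temperature by the factor $T/T_0$:

\[
D(T) = d_0 \left(\frac{T}{T_0}\right)^{\gamma}
\quad\Longrightarrow\quad
\frac{D_{\gamma=3}(T)}{D_{\gamma=2}(T)} = \frac{T}{T_0} ,
\]

so with $T_0 = 2.5^{\circ}$C, cubic damages are twice the quadratic estimate at 5°C of warming and four times the quadratic estimate at 10°C.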
Investment in abatement and adaptation to prevent climate damages is often modelled as a zero-sum game, in which spending on climate policy crowds out investment in future production and thereby further reduces future output. In both industrialized and developing countries, however, investments in emissions abatement and climate impact adaptation, far from squeezing out other forms of investment, may have the potential to drive economic development. Emissions abatement may take the form of cutting-edge electricity generation and distribution technology, and climate impact adaptation is often synonymous with improvements to infrastructure and protection from natural disasters.
An important but often overlooked impact of climate policy is its ancillary effect on other outcomes. Reduction of fossil fuel combustion reduces not only carbon dioxide but also many other air pollutants, improving local air quality and health; inclusion of such ancillary benefits would lower the net cost of climate initiatives, making a more ambitious programme of mitigation appear cost-effective (Aunan et al., 2007; Pearce, 2000).
2.2.2. Modelling trade and development
Economic modelling is applied to international trade and development, as well as climate policy. Computable general equilibrium (CGE) models are common in trade policy analysis, and play a role in climate economics modelling as well. CGE models incorporate interactions among all sectors of the economy, not just the ones of immediate interest; they reflect supply and demand balances, and resource and budget constraints, in all markets simultaneously. Their name suggests a link to one of the most imposingly abstract branches of economics, general equilibrium theory, although in practice applied modellers do not use much of the theory beyond the idea that all markets clear at once (that is, that demand equals supply in all markets).6
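A stylized version of this market-clearing logic can be written as a root-finding problem. In the sketch below, the demand and supply functions for two linked markets are made up for illustration; the functional forms and coefficients bear no relation to any applied CGE model:

```python
import numpy as np
from scipy.optimize import fsolve

def excess_demand(prices):
    """Excess demand in a toy two-good economy with linked markets."""
    p1, p2 = prices
    # Demand falls in a good's own price and rises in the other good's
    # price (the two goods are substitutes); supply slopes upward.
    demand = np.array([10.0 / p1 + 0.4 * p2, 8.0 / p2 + 0.3 * p1])
    supply = np.array([2.0 * p1, 1.5 * p2])
    return demand - supply

# "Solving" the model means finding prices at which every market
# clears simultaneously, i.e. excess demand is zero in both markets.
equilibrium_prices = fsolve(excess_demand, x0=[1.0, 1.0])
print(equilibrium_prices)
```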
Their comprehensive coverage of the economy is the good news about CGE models: they offer a systematic framework for analyzing price and quantity interactions in all markets, ensuring that both direct and indirect effects are counted while none are double-counted. The bad news about the models also stems from their comprehensiveness: in order to provide such complete coverage of the economy, they rely on debatable theoretical simplifications and impose enormous information requirements (Ackerman and Gallagher, 2004, 2008).
Any modelling exercise involves simplification of reality. The question is not whether simplifications are involved, but whether those simplifications clarify or distort the underlying reality. Unfortunately, CGE model structures and assumptions introduce major, unintended distortions into the results. In order to ensure that, as prescribed by economic theory, all markets always clear (that is, supply equals demand), CGE models apply an artificial, unrealistic procedure for modelling international trade, and eliminate unemployment and no-regrets emission reductions by arbitrary fiat.
Following a procedure developed by Paul Armington (1969), global CGE models estimate international trade flows by using a set of elasticities to apportion a country's demand for a specific good (such as US demand for paper) between domestic production and imports, and then to distribute the demand for imports among countries that export that good. While considerable research effort has gone into estimation of Armington elasticities, there are substantial uncertainties and hence wide confidence intervals surrounding the estimates (Hertel et al., 2004).
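In schematic terms, the Armington procedure allocates demand between domestic and imported varieties according to a constant-elasticity-of-substitution (CES) function of relative prices. The sketch below is illustrative only; the share parameters and elasticity values are hypothetical, not econometric estimates:

```python
def import_share(p_dom, p_imp, sigma, a_dom=0.5, a_imp=0.5):
    """Share of demand met by imports under a CES (Armington) aggregator.

    sigma is the Armington elasticity of substitution between domestic
    and imported varieties; a_dom and a_imp are CES share parameters.
    """
    # Cost-minimizing relative demand: M/D = ((a_imp/a_dom)*(p_dom/p_imp))**sigma
    ratio = ((a_imp / a_dom) * (p_dom / p_imp)) ** sigma
    return ratio / (1.0 + ratio)

# The elasticity drives the answer: a 10% import-price advantage shifts
# the import share modestly when sigma = 2 (~0.55) but substantially
# when sigma = 6 (~0.65) – which is why wide confidence intervals on
# Armington elasticities translate into wide intervals on trade flows.
for sigma in (2.0, 6.0):
    print(sigma, round(import_share(p_dom=1.0, p_imp=0.9, sigma=sigma), 3))
```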
For policymakers, one of the most important results of economic modelling is the forecast of employment impacts. Much of the political passion surrounding trade policy, or climate policy, reflects hopes and fears about its effects on employment. Will lowered trade barriers, or new energy conservation and efficiency investments, create jobs? Or will a flood of low-cost imports, or the high costs of clean energy investments, eliminate jobs? Most CGE models are silent by design on these fundamental, controversial questions.7
The general problem is that most CGE models hold aggregate employment fixed, and a fixed-employment model does not allow analysis of changes in employment. Each country's aggregate level of employment after a policy innovation is, by assumption, the same as the level before. Workers can and will change industries, but they are playing musical chairs with exactly enough chairs for everyone who had a seat before the music started.
The same logic of perfectly functioning markets has crucial implications for climate policy in another area: no-regrets options, i.e. opportunities to reduce emissions at zero or negative net cost, are assumed to be impossible. The standard CGE approach assumes that there are not and cannot be any no-regrets options; this raises the estimated overall cost of mitigation compared to an analysis that acknowledges and measures zero- and negative-cost abatement technologies. Just as a few trade modellers have begun to experiment with variable-employment CGE models,8 it should in theory be possible to construct CGE models that allow for no-regrets options for emission reduction.
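A small numerical example shows why the assumption matters. With hypothetical abatement measures – the names, costs, and potentials below are invented for illustration – meeting a fixed abatement target costs far more if negative-cost options are excluded by assumption:

```python
# Hypothetical measures: (name, marginal cost in $/tCO2, potential in GtCO2).
measures = [
    ("efficiency retrofits", -20.0, 1.0),  # a no-regrets (negative-cost) option
    ("fuel switching",        15.0, 2.0),
    ("renewable generation",  40.0, 3.0),
]
target = 4.0  # GtCO2 of required abatement

def total_cost(options, target):
    """Fill the abatement target from cheapest to most expensive measure."""
    cost, remaining = 0.0, target
    for _, price, potential in sorted(options, key=lambda m: m[1]):
        used = min(potential, remaining)
        cost += used * price          # GtCO2 x $/tCO2 = billions of dollars
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(total_cost(measures, target))                           # $50bn
print(total_cost([m for m in measures if m[1] > 0], target))  # $110bn
```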
2.2.3. Intergenerational equity
Most climate-economics models pay little or no explicit attention to the problems of equity across time and space. In the area of intertemporal choice, most models use high discount rates that inflate the importance of the short-term costs of abatement relative to the long-term benefits of averted climate damage. The discount rate, in an analysis that derives originally from Ramsey (1928), is composed of two components: the rate of pure time preference, measuring the differential importance we place on future versus present generations, independent of economic growth; and a wealth-based component, depending on the rate of growth of real consumption, reflecting the diminishing marginal utility of income as society becomes richer over time.
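In the notation standard in this literature, the discount rate $r$ is

\[
r = \rho + \eta g ,
\]

where $\rho$ is the rate of pure time preference, $g$ is the growth rate of real per capita consumption, and $\eta$ is the elasticity of the marginal utility of consumption; the wealth-based component is the product $\eta g$.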
Discount rates have a strong impact on climate model results. Analysts who assume a low discount rate, especially a zero or near-zero rate of pure time preference, have frequently found that active climate protection measures are well worth the price (among others, Ackerman and Finlayson, 2006; Cline, 1992; Stern, 2006). On the other hand, those who assume a high discount rate, especially a significantly positive rate of pure time preference, have frequently found that only modest climate protection measures are cost-effective in the near future (among others, Nordhaus, 2007c; Tol, 2008). The literature on discounting and intergenerational equity is extensive; for reviews, see Portney and Weyant (1999); Stern (2006, Chapter 2 and appendix); Ackerman et al. (2009); and Stanton et al. (2009).
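A simple calculation conveys the scale of the effect. Taking $r \approx 1.4\%$ and $r \approx 5.5\%$ as roughly representative of the low and high ends of the rates used in this literature, the present value of $X$ dollars of climate damages incurred 100 years from now is

\[
\frac{X}{(1.014)^{100}} \approx 0.25\,X
\qquad \text{versus} \qquad
\frac{X}{(1.055)^{100}} \approx 0.005\,X ,
\]

so the same future damage weighs roughly fifty times more heavily in the low-discount-rate analysis.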
Choices about the discount rate reflect value judgments made by modellers. Most obviously, the rate of pure time preference is a judgment about the value of present versus future generations. Indirectly, the other component of the discount rate has parallel implications: a larger wealth-based component reflects a greater emphasis on equity, assuming that an increase in income to a poorer person is more valuable than the same absolute increase in income to a richer person. When combined with the common assumption that the world will grow richer over time, however, discounting gives greater weight to earlier, poorer generations relative to later, wealthier generations. (Equity between regions of the world, in the present or at any moment in time, is intentionally excluded from many IAMs, even those that explicitly treat the regional distribution of impacts; see Section 3 of this article.)