Why the social cost of carbon will always be disputed

Any social cost of carbon (SCC) calculated from an integrated assessment model of global climate-economy interactions will always be disputed. This is because a key model input, namely the valuation of centennial climate damage, is highly unknowable for fundamental reasons discussed here. Problems with damage valuation are highlighted by the implicit implausibility to climate scientists of a leading model's (centennial) damage function, and by strong criticisms of damage functions by many climate economists. The claim that statistical analyses of past weather impacts on local economies, combined with structural modeling of sectoral impacts, can significantly improve centennial damage valuation rests on untestable, far-out-of-sample extrapolation. Testing centennial climate (natural) science projections is generally harder than testing predictions in astronomy, geology, and other earth sciences, because of Earth's uniqueness and the unprecedented degree of likely climate change; but stable underlying laws make climate modeling based on past observations meaningful. By contrast, the added complexity of human behavior means there are no quantitatively stable laws for modeling the value of centennial climate damage. I suggest that any carbon prices used to inform climate policies, whether as explicit pricing instruments or as guides for complementary, non-carbon-price policies, should instead be based on marginal abatement costs, found by modeling low-cost pathways to socially agreed, physical climate targets. A pathway approach to estimating carbon prices poses challenges to many economists, and is no panacea, but it avoids any illusion of optimality and facilitates detailed analysis of sectoral policies.

dollars [to] several hundreds of dollars per tonne of carbon" (IPCC, 2014b, Box 3.1). Two key parameters, ethical disputes about which have always contributed greatly to this range (Grubb et al., 2014, pp. 24-27), are the rate of pure time discounting (e.g., Table 2 in Tol, 2009 shows its effect on SCCs), and the equity weights given to rich and poor world regions (e.g., Table 2 of Anthoff & Tol, 2013 shows their effects). But my focus here is on the high unknowability of a third model input, the (centennial) climate damage function. The standard economic formula expresses this as a percentage loss of global GDP caused by the contemporaneous level of mean global warming (though whether to use market or purchasing-power exchange rates in aggregating global GDP is itself an ethical choice). Disputes solely about such damage functions can readily result in a three- to fourfold range of SCCs (e.g., see Figures 4-5 of Ackerman & Stanton, 2012); though it is also highly questionable whether the standard formula includes all the main causes or effects of warming damage, as noted later. I now discuss three main problems with any damage function: the implicit implausibility to climate scientists of many IAMs' damage functions (illustrated next for DICE, a leading IAM); wide-ranging criticisms by many economists of the arbitrary guesses found in all damage functions; and the untestability, I will argue, of any "improvements" in damage functions made by future research.
2.2 | The implicit implausibility to climate scientists of a leading IAM's damage function

Figures 1a and 1b illustrate the first problem. Global mean temperature change stayed within about a 1°C-wide band for the last 11,000 years (Marcott et al., 2013, Figure 1a), the Holocene period in which all human settlements have developed. Figure 1a shows only the last 2000 years, so as to display more clearly two temperature projections to 2400 from the current, 2016R version of DICE (Nordhaus, 2017), one of three leading IAMs used by the U.S. government for SCC estimation (IWG, 2016), whose damage function is between the other two IAMs' (Rose, Diaz, & Blanford, 2017, Figure 6). DICE's projected, centennial peak of "optimally controlled" warming of 4.1°C in 2165 would exceed any levels seen regularly for at least 10 million years; while peak warming of 7.2°C in 2270 on DICE's baseline path, which entails minimal emissions control, would exceed levels seen for about 40 million years (Zachos, Dickens, & Zeebe, 2008).
Warming rates, recent and projected, are also dramatically unprecedented. The 1.7°C/century rise since 1970 (NOAA, 2016) was about 170 times the baseline cooling rate since 5000 BCE (Steffen et al., 2016), while DICE's projected optimal warming of 2.9°C from 2015 to 2115 (Figure 1a) would be 290 times, in keeping with the complex climate models surveyed by IPCC (2014b, Fig. SPM.5b). What is at issue here is that DICE projects such unprecedentedly high, fast warming to be optimal according to its (standard-formula) damage function, which assumes only about 4 and 11% damage to global GDP from 4.1 and 7.2°C warming, respectively. This small difference in damage from 3.1°C of extra warming is the main reason why the drop in projected growth over 2015-2400 of consumption per person net of climate damage (DICE's measure of average well-being) in Figure 1b is small, from 53-fold growth on the optimal path to 40-fold growth on the baseline path.
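As a quick arithmetic check on those damage figures, a minimal sketch of a DICE-style damage function. Both the fractional form D(T) = aT²/(1 + aT²) (used in earlier DICE versions) and the coefficient a = 0.00236 are my assumptions for illustration, not equations given in this article:

```python
# Hedged sketch: a DICE-style damage function, using the fractional form
# D(T) = a*T^2 / (1 + a*T^2) found in earlier DICE versions, with
# a = 0.00236 (a DICE-2016R-style coefficient). Both choices are
# assumptions made here for illustration.
A = 0.00236  # damage coefficient per (degree C)^2

def damage_fraction(T):
    """Fraction of global GDP lost at warming T (degrees C above preindustrial)."""
    x = A * T * T
    return x / (1.0 + x)

for T in (4.1, 7.2):
    print(f"{T} C warming -> {100 * damage_fraction(T):.1f}% GDP loss")
```

This reproduces the roughly 4% and 11% GDP losses at 4.1°C and 7.2°C quoted above; with the simpler direct form D(T) = aT², the figures would be about 4% and 12%.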
When shown such projections, climate scientists typically express disbelief, derision, or dismay that a 2.9°C/century approach to 4.1°C peak warming could ever be regarded as optimal, or that 7.2°C could still allow 40-fold growth in well-being (Steffen, 2018). However, their published warnings about damage from high, fast warming are usually expressions of grave planetary concerns: for example, that "the next few decades offer a brief...opportunity to minimize large-scale and potentially catastrophic climate change that will extend longer than the entire history of human civilization thus far" (Clark et al., 2016); or that the "Earth System may be approaching a planetary threshold that could lock in a continuing rapid pathway toward much hotter conditions...at a [threshold] temperature of 2.0°C above preindustrial... [whose] impacts...on human societies would likely be massive" (Steffen et al., 2018). They are not explicit criticisms of any IAM damage function, in which climate scientists have no disciplinary expertise. Yet I contend that no one can have real "expertise" in a (centennial) damage function, so that scientists' concerns about planetary risks based on millions of years of climate data, which have influenced the UN's 2°C warming limit, should not be considered inherently less valid than economists' damage estimates based on half a century or so of climate and economic data.

| Some economists' criticisms of IAMs' climate damage functions
By contrast with climate scientists, a significant minority of climate economists have directly criticized leading IAMs' damage functions for severely underestimating climate damage. For example, Stern (2013, pp. 840-841) implicitly rejected 4°C warming being optimal, by quoting the above 10-million-year novelty and unprecedented speed of such warming, and suggesting a resulting "risk of vast movements of population" that "history indicates...could involve severe, widespread and extended conflict." Many writers, including leading IAM authors, have noted the many types of damage, like biodiversity loss and ocean acidification, and the danger of triggering irreversible "tipping elements" in the climate system (Kopp, Shwom, Wagner, & Yuan, 2016), that are omitted from or understated in such IAMs' damage functions (Heal, 2017). Several critics (Ackerman, Stanton, & Bueno, 2010; Pindyck, 2017; Weitzman, 2012) fault the commonest standard damage formula, where global GDP loss is a quadratic function of current T, the level of global warming, as lacking evidence for high warming. This matters for climate policy, since persistent uncertainties, notably in climate sensitivity (Freeman, Wagner, & Zeckhauser, 2015), mean that even with stringent global abatement that would limit expected peak warming to 2°C, there is still a substantial risk that warming will exceed 3°C (IPCC, 2014b, Table 3.1). Moreover, warming rates as well as levels may cause damage (Anthoff & Tol, 2013); and damage may well occur not just to GDP levels, but also to GDP growth rates (Moore & Diaz, 2015).
Modelers critical of the leading IAMs (e.g., Ackerman et al., 2010; Cai, Lenton, & Lontzek, 2016; Dietz & Stern, 2015; Lontzek, Cai, Judd, & Lenton, 2015; Weitzman, 2012) have therefore made many alternative damage assumptions, including: much higher exponents on T; making various parameters probabilistic, sometimes with jumps to represent tipping elements; and warming that harms the capital stock or total factor productivity, as well as current GDP. Such variety of functional forms as well as parameters emphasizes rather than resolves the deep uncertainties about damage functions. For example, the inescapable problem facing the growing use of probabilistic parameters in both critical IAMs (all those just cited) and leading ones (e.g., Anthoff & Tol, 2013; Hope, 2013) is that "we don't know the correct probability distributions that should be applied to various parameters" (Pindyck, 2017, p. 103); so probabilistic modeling may just thicken Pindyck's "veneer of scientific respectability." I contend that in the (centennial) damage function used in any IAM-based SCC estimate, one can find, though sometimes only by diligent searching, some arbitrary guesses unsupported by empirical evidence. Box 1 gives a date-ordered selection of recent quotations supporting this contention, some of which go further and conclude that damage functions are to some extent "unknowable," though as noted earlier I prefer "highly unknowable." However, any enduring "unknowability" of damage functions is contested by many influential researchers, as discussed next.
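To see how much these alternative functional forms matter at high warming, a small sketch comparing a quadratic damage function with a higher-exponent, Weitzman-style one. The parameters (20.46, 6.081, 6.754) follow my reading of Weitzman's (2012) calibration, and the quadratic coefficient 0.00236 is a DICE-2016R-style assumption; treat all of them as illustrative, not as values taken from this article:

```python
def quadratic_damage(T, a=0.00236):
    # DICE-style quadratic damage, in fractional form (assumption for illustration)
    x = a * T * T
    return x / (1.0 + x)

def weitzman_damage(T):
    # Weitzman (2012)-style form adding a high-exponent term, calibrated so
    # damages reach about half of GDP at 6 C; parameters as I recall that
    # paper, and so an assumption here
    x = (T / 20.46) ** 2 + (T / 6.081) ** 6.754
    return x / (1.0 + x)

for T in (2.0, 4.1, 6.0):
    print(f"{T} C: quadratic {100 * quadratic_damage(T):.1f}% "
          f"vs high-exponent {100 * weitzman_damage(T):.1f}%")
```

At 2°C the two forms nearly agree (about 1% of GDP), but by 6°C the high-exponent form gives roughly 50% damage versus under 8% for the quadratic, which is why the unverifiable choice of exponent dominates SCC disputes.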

| How much can research improve damage function estimates?
A dominant view among leading climate policy researchers is that much more research is needed to, and will, improve estimates of damage functions and hence SCCs (e.g., Burke et al., 2016; Dell, Jones, & Olken, 2014; Pizer, 2017; Revesz et al., 2014; Sterner, 2015; and notably NASEM, 2017, two detailed analyses of SCC methodologies which recommended greater transparency and consistency). Some authors are confident that the resulting SCCs can only be higher, while others are agnostic. Many authors highlight the potential of empirical (statistical) analyses of short-run (e.g., annual or monthly), subglobal (e.g., state- or county-level) weather impacts on a wide range of macroeconomic variables. Notable recent empirical modeling by Burke, Hsiang, and Miguel (2015), Carleton and Hsiang (2016), and Hsiang et al. (2017) made impressive claims about being able to value damage from future, high warming by analyzing impacts data from about 1950 to 2010. For example, Hsiang et al. (2017, Figure 5A) estimated 95% confidence intervals for a quadratic U.S. damage function in 2080-2099 for warming up to 8°C, with about a 4-9% GDP loss interval from 6°C warming. NASEM (2017) and Diaz and Moore (2017) also stressed the future potential for improving damage functions using the sectoral analyses found in IAMs variously called "structural," "detailed-structure," or "detailed-process." These "attempt to model the structure of the global economy" (NASEM, 2017, pp. 65, 139). An example is Monier et al. (2018), who "argue for a shift toward the use of a self-consistent integrated modeling framework to assess climate impacts" and "demonstrate the capabilities of such a modeling framework by conducting a multi-sectoral assessment of climate impacts." But such capabilities lie in the future, and NASEM (2017, p. 40) noted that "none of these [structural] models has yet been used... to sum all potential climate change damages."
Neither recent empirical nor structural modeling has so far had much influence on governmental estimates of SCCs (for example, none of the above were cited by IWG, 2016), but NASEM (2017, p. 3 and in several Recommendations) strongly recommended "draw[ing] on recent scientific literature relating to both empirical estimation and process-based [structural] modelling of damages." Such studies are therefore likely to grow rapidly in number and influence; and they could improve estimates of damages from climate change likely over the next decade or two, and thus help inform regional and sectoral adaptation policies. But as shown in Figure 2, the maximum year-to-year change in global mean surface temperatures during 1950-2010 was only ~0.3°C. So how can any modeling based on the last 60 or so years of data derive confidence intervals for global, centennial damage, that is, damage from over 3°C of sustained, global warming, and hence improve SCCs that inform emission abatement policies on behalf of distant generations? The answer: only by analyzing the impacts of the much greater changes in temperature, precipitation, etcetera observed over the much smaller time and space scales noted above, and then making assumptions such as:

... the marginal treatment comparability assumption, which requires that the effect of a marginal change in the distribution of weather (relative to expectation) is the same as the effect of an analogous marginal change in the climate (Hsiang et al., 2017, Supp. Mat. p. 9)

Caveats can be found (e.g., by Burke, Hsiang, & Miguel, 2015, p. 239; Carleton & Hsiang, 2016, p. 11) that effectively admit such assumptions are ultimately untestable: that the impacts on global human welfare of sustained, massively unprecedented levels and rates of climate change cannot be reliably projected by a far-out-of-sample extrapolation from short-run, regional and/or sectoral analyses. So such assumptions are in effect nonfalsifiable beliefs. But rather than dismiss them summarily as Pindyck (2017, footnote 3) does, let us now consider the damage function knowability problem in detail, and thus show why it will persist for decade after decade.

BOX 1

QUOTATIONS ON GUESSWORK AND UNKNOWABILITY IN CENTENNIAL DAMAGE FUNCTIONS

"There is essentially no relevant empirical research, and it is not clear whether there ever could be any... Our assumed distribution [for the damage function exponent] was selected purely for comparability with guesses made by other analysts" Ackerman et al. (2010, p. 1662). "...neither I nor anyone else has an objective basis for determining magnitudes of high-temperature damages" Weitzman (2012, p. 234).

"It is difficult to see how our knowledge of the economic impact of rising temperatures is likely to improve in the coming years. More than temperature change itself, economic impact may be in the realm of the 'unknowable'. ... IAM damage functions are completely made up, with no theoretical or empirical foundation" Pindyck (2013, pp. 869-870).

"Some aspects of climate change are simply unknowable, at the very least in the timescales necessary to be able to act and influence long-term climate outcomes" Convery and Wagner (2015, p. 313).

"During the historical record, climate has varied far too little for us to be able to estimate its economic consequences, let alone predict the consequences of a change way outside anything that has ever been seen by humans" Heal (2017, p. 1051).

| WHY CENTENNIAL CLIMATE DAMAGE FUNCTIONS ARE HIGHLY UNKNOWABLE
Here, I illustrate three broad reasons why (global, centennial) damage functions, as used by IAMs to estimate SCCs, are highly unknowable, and will remain so for the foreseeable future. Estimating the damage to welfare from more heatwaves or hurricanes in 2050 or 2100 caused by an extra tonne of CO2 emitted now is so unfathomable because normal scientific methods (highlighted by Funtowicz & Ravetz, 1993) cannot apply. First, there are no adequate comparators for testing damage functions at the necessary scale; second, there are no agreed, quantitatively stable laws underlying damage modeling; and third, slow Earth-system response times greatly limit climate damage learning (how much any damage observed decades from now can improve a damage function for likely warming in the century after that). I start by discussing prediction testing in astronomy and selected earth sciences, to show how projection testing is generally much harder for climate science and for damage functions. "Projection" rather than "prediction" testing is the appropriate term for the latter, since all calculations of centennial climate change and resulting damage are not probabilities, but possibilities "conditional on assumptions concerning, for example, future socio-economic and technological developments that may or may not be realized" (IPCC, 2014b, p. 126).

| Prediction testing in astronomy and selected earth sciences
A key feature of astronomy, geology, seismology, volcanology, and meteorology is that, for the questions they are expected to address, comparisons and hence useful prediction testing are often possible, albeit often by natural, uncontrollable experiments. This in turn depends on scientists in these fields not being expected to make predictions or projections about unprecedented changes in a century's time, and on the systems studied having stable underlying laws. For example, astronomy is not expected to determine if some intelligent life form on another planet is threatening its own centennial existence by limitless growth. For the questions that astronomy does address, it can potentially compare planets, stars, and galaxies with thousands of other planets, stars, and galaxies; and the laws of physics are believed, with good evidence, to be forever quantitatively stable. Predictions can sometimes be tested by natural experiment, such as the collision of two black holes observed in 2015 that confirmed Einstein's 1915 prediction of gravitational waves (Cho, 2016); and no one doubts that any modification by 2115 of these recently confirmed laws of general relativity will still explain all pre-2015 phenomena.
For the questions they choose to address which cannot be tested experimentally, geologists can compare one part of the Earth with other parts, and mining geologists continually use such comparisons to make testable predictions about mineral locations. Individual earthquakes and volcanic eruptions are very hard to predict, but their regional, decadal averages can be predicted, thanks to evidence-justified confidence in the science of plate tectonics. Meteorology generally asks questions over timescales short enough that it can usefully compare events of interest with many similar events, though once timescales stretch to weeks, weather complexity greatly reduces predictability. Meteorology is facing new, difficult questions because of climate change, such as hurricane intensities over unprecedentedly warm seas, but the underlying laws of atmospheric physics are testable and stable.

| Projection testing in climate science
Laboratory testability indeed means that "the principles [laws] of fluid dynamics, thermodynamics, and radiation that lead to the primary results of global warming under increasing atmospheric carbon dioxide are common to all climate models" (Hargreaves & Annan, 2014). For example, both the radiative absorption spectrum of CO2 and the latent heat of water vaporization, two key inputs to global climate models, can be laboratory tested with great accuracy. But the current global atmospheric concentration of CO2 is less accurately known, because it entails global fieldwork. And centennial projections of climate variables such as temperature and rainfall (conditional on the emission or concentration scenarios assumed) cannot be usefully tested by future observation, because the Earth system is unique and exceptionally complex (Lenton, 2015), the timescale is indeed centennial, and the system's likely centennial state is dramatically unprecedented, as noted earlier.
The best one can do is build global climate models, and back-test them against instrumental, historical, and/or paleoclimatic records. But because the Earth system is so much more complex, both the knowability and stability of its useful laws of motion (not the basic physics, but long-run, emergent measures like equilibrium climate sensitivity; Cox, Huntingford, & Williamson, 2018) are much lower than with astronomy or geology. There are uncertain tipping elements in Earth systems, where small changes in climate forcing may trigger large, irreversible, though in some cases very slow, shifts in climate equilibria (Lenton et al., 2008; Steffen et al., 2018). Climate modelers' confidence decreases as their centennial projections move from mean global CO2 concentration, through mean temperatures, to more complex phenomena like the local chances of floods, droughts, heatwaves, or hurricanes. Nevertheless, consensus in the underlying methodologies is strong enough that modelers often combine independent projections into "democratic" model ensembles (Burke, Dykema, Lobell, Miguel, & Satyanath, 2015).
Another challenge is from equilibrium response times being in centuries or millennia: much longer for melting ice sheets, hence for sea-level rise, than for ocean and atmospheric temperatures (Robel, 2015). Hence, whatever may be learnt from testing climate science models in say 2050, humanity then will still have no comparators to test projections of unprecedented Earth system changes awaiting their descendants in 2150.

| Projection testing in climate social science
On moving from climate natural science to the social science of estimating damage functions, there is another big drop in knowability. A highly globalized population of 10-plus billion people is so unprecedented that historical evidence of severe climate-change damage to past societies (e.g., McMichael, 2012) cannot test any monetary estimate of future damage. The Earth system including people, each with complex brains, is vastly harder to model than the system without humanity that pure climate science models. Though important progress has been made in analyzing complex system dynamics (e.g., Havlin et al., 2012), such analysis falls far short of any consensus about the nature, or even existence, of quantitatively stable laws for humanity's responses to unprecedented, centennial climate change. As just one example of this crucial difference between the natural and social sciences of climate, the elasticities of demand for foodstuffs vary over time and space; and any elasticity estimates are both vastly less accurate than the radiation and evaporation parameters mentioned above, and dependent on normally functioning markets. World food markets did not function normally in 2007-2008, when several governments responded to poor supply conditions by restricting rice and wheat exports, further exacerbating food price rises. Normally functioning markets cannot be guaranteed in, say, a catastrophic drought in 2050; and large, rapid food price rises may cause riots, conflict, and even social breakdown (Hsiang, Burke, & Miguel, 2013), so the welfare damage from centennial droughts is yet more uncertain.
And as evidence of ongoing scholarly controversies relevant to any (centennial) damage function, consider three disagreements among leading authorities: first, about the effects of geography on centennial economic development, such as (to simplify greatly) Diamond's (1997) view that geography (soils, climate, biota, location, etc.) has a crucial effect, versus Acemoglu and Robinson's (2012) view that institutions are far more important; second, the huge debate about economic drivers of centennial social inequality provoked by Piketty (2014); and third, Hsiang and Burke's (2014) finding of "strong support for a causal association between climatological changes and conflict," versus Buhaug et al.'s (2014) refutation of this finding. Moreover, the problem of extremely slow ice-sheet response (Robel, 2015) again limits what can be learnt from future climate damages, as it does for climate science.
So the claim that statistical and structural modeling of local or sectoral damage from short-run warming or other geophysical changes can be used to predict global damage from similar changes sustained globally over decades in the far future cannot be directly tested on any useful timescale. There are no adequate comparators for testing damage functions for unprecedented, likely future warming combined with unprecedented globalization and population growth. For this reason alone, deep disputes among climate economists (defined more broadly than the contributors to official reviews like NASEM, 2017) about damage functions will remain unresolved for decades, maybe forever. So I next consider an alternative: carbon prices estimated without using the damage functions that all SCC models use as inputs.

| ALTERNATIVES TO SCC MODELING FOR ESTIMATING CARBON PRICES
Here, I recommend using marginal abatement costs estimated by "pathway models" as carbon prices to guide abatement policies, instead of using SCCs. "Pathway models" is my term for a diverse range of IAMs which find feasible, least- or low-cost pathways or scenarios that reach some socially agreed, physical climate target. The target most closely related to human welfare is maximum global warming, as in UN (2009, 2015); but many models focus on targets closer to climate policies, notably CO2-equivalent concentration (Clarke et al., 2014), zero net emissions (Fay et al., 2015; Matthews, Zickfeld, Knutti, & Allen, 2018; UN, 2015), and cumulative emissions (Matthews et al., 2018). Also, some pathway models compute cost-effective (least-abatement-cost) pathways, by assuming either globally uniform marginal abatement costs or more realistic, nonidealized scenarios; while other models focus more on technical and political feasibility than cost minimization (hence the inclusion of "low-cost" above); see, for example, discussions in Clarke et al. (2014, p. 421) and Vogt-Schilb and Hallegatte (2017).
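To make concrete where a pathway carbon price comes from, here is a toy, two-sector sketch of my own (all numbers hypothetical, not taken from any model cited here). With convex sectoral abatement cost curves and a fixed total abatement target, the least-cost allocation equalizes marginal abatement costs across sectors, and that common marginal cost is the carbon price:

```python
# Toy illustration (hypothetical numbers): each sector i has abatement cost
# c_i(a) = 0.5 * k_i * a^2, so its marginal abatement cost is k_i * a.
def least_cost_price(slopes, total_abatement):
    """Return the carbon price and per-sector abatement that meet
    total_abatement at least cost.

    Cost minimization equalizes marginal costs: each sector abates
    a_i = p / k_i, and p solves sum_i(p / k_i) = total_abatement.
    """
    price = total_abatement / sum(1.0 / k for k in slopes)
    return price, [price / k for k in slopes]

# Hypothetical slopes: electricity abates cheaply (k=2), transport dearly (k=8),
# in $/tonne per tonne abated; target of 10 units of abatement.
price, split = least_cost_price([2.0, 8.0], total_abatement=10.0)
print(price, split)  # price 16.0; electricity abates 8.0, transport 2.0
```

Real pathway models do this over many sectors, technologies, and decades, subject to the physical target; the resulting shadow price plays the role the SCC plays in damage-based models, but requires no damage function.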
Lastly, there is much debate on how far carbon prices should be used as explicit "carbon pricing" instruments such as carbon taxation or trading, or should instead guide complementary, non-pricing abatement policies such as mandating product standards and labels, lowering fossil fuel subsidies, supporting renewable energy, and making long-lived investments like public transport infrastructure. Most authors recognize the importance of both carbon pricing and carefully chosen, complementary policies, but the best balance is much debated. For example, Baranzini et al. (2017) argued strongly for uniform global carbon pricing, and noted that pricing proposals can highlight departures from cost-effectiveness caused by political lobbying; and complementary policies can undermine cost-effectiveness when applied under a carbon-trading cap (Lofgren, Burtraw, Wrake, & Malinovskaya, 2018). Other authors stress how market and government failures mean that for both economic efficiency and political acceptability, complementary policies are essential, and carbon prices should vary across countries and sectors (CPLC, 2017; Vogt-Schilb & Hallegatte, 2017), especially for long-lived infrastructure (Vogt-Schilb, Meunier, & Hallegatte, 2018).
My contention here is simpler: just that since pathway models contain no damage functions, using them to estimate carbon prices is preferable to an SCC approach, given the high unknowability of damage functions. Also, with no damage functions, pathway modeling can in practice put more effort into analyzing individual emission sectors. Thus, Luderer et al. (2012), Knopf et al. (2013), Bertram et al. (2015), Bataille et al. (2016), and Kriegler et al. (2018) all address the key abatement issue of how best to decarbonize the electricity sector, while the SCC models cited earlier do not. And if a regional/national target is set in terms of emissions, pathway modeling can be done for just that region/nation, again leaving more effort available for local details.
Pathway modeling of carbon prices is no panacea, though. Important, deep uncertainties remain about climate sensitivity (Freeman et al., 2015), which affects any maximum warming target; and about very-long-run abatement costs (e.g., Weitzel, 2017), although prices in policy-created carbon markets can give some guidance (Dietz & Fankhauser, 2010), and abatement costs can adjust to new information much more quickly than damage costs can (IPCC, 2014b, Section 3.2). Key debates remain about how to finance the unequal distribution of total abatement costs across rich and poor countries that most low-cost pathways entail (Clarke et al., 2014, Section 6.3.6.6). And since pathway models have no impact/damage sectors, they give no guidance for climate adaptation policies; while separate regional/national models may be collectively inconsistent, by ignoring trade effects (e.g., Bataille et al., 2016, p. S20).
Also, as the unavoidable counterpart to the high unknowability of climate damage functions in SCC IAMs, physical climate targets for pathway modeling, whether for temperature, concentrations, or emissions, are hard to agree on (e.g., see Hallegatte et al., 2016 for a risk-based analysis). Carbon prices and any other policies on a pathway to a physical target cannot be proved to be economically optimal, precisely because of damage unknowability; but neither can any SCC, for the same reason. Aiming for a lower warming limit must take into account its extra abatement costs, while also recognizing that abatement costs being more reversible than damage costs supports a precautionary approach (Hallegatte et al., 2016, p. 667; see also IPCC, 2014b, SPM 3.2). Despite all these complexities, and the weakness of the resulting abatement actions, global agreement has been reached on the 2°C guardrail (UN, 2009, 2015), but not on any damage function for use in an SCC.
On balance, I recommend using pathway IAMs to estimate carbon prices for guiding abatement policies, both to avoid guesstimating the centennial damage function needed for any SCC (and the illusion of optimality resulting from that), and because pathway modeling in practice often analyzes important emission sectors like electricity. Although my recommendation is somewhat academic, because the extensive SCC literature has had little global impact on climate policy making, with very few mentions in IPCC (2014a, 2014b), it is finally worth considering why this recommendation may well be problematic for many economists.

| WHY THIS CHALLENGE TO THE SCC MAY BE PROBLEMATIC FOR MANY ECONOMISTS
Why, given the deep difficulties of climate damage valuation discussed above, do many economists still calculate SCCs and seek to improve them? The most obvious reason is that the U.S. government requires them; the UK government also required SCCs during 2002-2009, but then changed to valuing emission changes according to abatement costs or emissions-trading prices (Rose, 2012). U.S. Executive Order 12866 (Clinton, 1993) mandates that "...in choosing among alternative regulatory approaches, agencies should select those approaches that maximize net benefits"; Order 12866 is in the title of the U.S. interagency determination of SCCs (IWG, 2010; IWG, 2016); and it is cited often (e.g., Pizer, 2017). Yet it also states that "costs and benefits shall be understood to include...qualitative measures of costs and benefits that are difficult to quantify, but nevertheless essential to consider." So the U.S. government could conceivably, though improbably, give a "reasoned determination that the benefits...justify [the] costs" of setting a precautionary, physical climate target.
Another reason is that abandoning the SCC's damage-valuation approach may be seen as abandoning a core professional purpose of climate economics. Some policymakers and campaigners may also fear that emissions abatement will be harder to defend in political debate if its benefits are not valued monetarily, however contentiously. More speculatively, many economists and some of the policymakers they advise may be psychologically attached to believing that SCC estimation can be significantly improved. At a deep level, humans' understanding of their very sense of existence is perhaps shaken by centennial climate change (Boulton, 2016); and it is probably less scary to believe that damage valuation estimates can usefully advise us, than to accept how deeply uncertain we must remain about the dangers of the one-off, uncontrolled experiment that humanity is conducting on Earth.
6 | CONCLUSIONS

I have argued above that a damage function which monetarily values the impacts of centennial climate change, a function used by any integrated assessment (global climate-economy) model (IAM) when calculating a social cost of carbon (SCC), is so fundamentally uncertain or "highly unknowable" that there will always be deep disputes about SCCs. One cause of high damage unknowability is incomparability, because of the uniqueness of the Earth system combined with the dramatically unprecedented level and speed of likely centennial global warming. By contrast, examples from non-experimental sciences such as astronomy and geology showed how these sciences can find comparators for the questions they tackle. Another key cause is the huge complexity of the Earth system, including humanity. This means there are no agreed, quantitatively stable laws to use in modeling centennial climate damage, in contrast to centennial climate science, though the complexity of the Earth system ignoring humanity still poses daunting challenges to climate science modelers. Statistical modeling of past weather effects on national or regional economies, and structural economic modeling of damages, are both advancing and should help adaptation policies. However, claims that such modeling will significantly improve global, centennial climate damage functions rest on untestable beliefs that such methodologies can be extrapolated far out of sample. The values of risks to humanity posed by mid-century hurricanes, heatwaves, and the like remain essentially unquantifiable.
So instead of basing the carbon prices that guide various climate policies on SCC estimates, there is a case for basing them on the marginal abatement costs, calculated using other IAMs, on low-cost pathways that achieve socially agreed, physical climate targets. Such costs are still highly uncertain; and in addition to the obvious difficulty of agreeing on physical targets, downplaying the SCC's damage-valuation approach to estimating carbon prices may pose political, professional, and psychological challenges to many economists. Nevertheless, an abatement-cost, pathway approach avoids any possible illusion of centennial optimality, and is more consistent with the target-based nature of most climate policies.