History of the 2°C climate target

Abstract

Policymakers, scientists, and social scientists have debated a wide array of responses to the realities and prospects of anthropogenic climate change. The focus of this review is the 2°C temperature target, described as the maximum allowable warming if dangerous anthropogenic interference in the climate is to be avoided. The temperature target has its roots in the heuristics that scientists and economists developed from the 1970s onward to guide understanding and policy decision making about climate change. It draws from integrated assessment modeling, the ‘traffic light’ system of managing climate risks, and a policy response guided as much by considerations of the tolerability of different degrees of climate change as by simply reducing carbon dioxide emissions. The European Union (EU) proposed 2°C as the policy target in 1996, with support from some environmentalists and scientists. It was subsequently listed as the desirable temperature target in the 2009 Copenhagen Accord. Yet the figure has drawn a range of critics, from scientific experts to economists, who argue that the target is infeasible, expensive, or an inappropriate way of framing climate policy. Tracing the historical development of the target helps in understanding the context from which it emerged and its various strengths and weaknesses. © 2010 John Wiley & Sons, Ltd.

Framing a response to climate change represents a critical policy challenge in the 21st century. Whether considering policies to mitigate or adapt to climate change, questions of cost, efficacy, and ethics are important considerations. For policymakers interested in cost-effective mitigation of greenhouse gas (GHG) emissions, it is vital that the costs of mitigation are not disproportionate to the costs of avoided damages. One way in which this is envisioned is to consider the costs and ecological consequences of various temperature increases, to try to establish at what point ‘tolerable’ climate change shades into ‘dangerous anthropogenic interference’ (DAI). From these various estimates, modeled responses, and cost-benefit analyses, it is possible to identify temperature ranges that can be used to guide emissions policies. Although scientific uncertainties make this translation from temperature to emissions a hazardous endeavor, they have not stopped temperature targets from becoming a proposed option for policymakers, supported by some scientists and environmentalists.

In 1996 and again in 2005, the EU pronounced that policies to mitigate anthropogenic climate change should restrict the global mean temperature rise to 2°C above preindustrial temperature. The 2009 Copenhagen Accord concurred. The 2°C target is said to be justified by scientific analyses and various estimates of future climatic damages. There are precursors to the 2°C target, including a history of research on the sensitivity of the climate to a doubling of CO2 concentrations in the atmosphere and a series of proposed targets emerging from European researchers and organizations in the late 1980s. This paper outlines the historical development of the 2°C target from these precursors to its present adoption as an EU policy and a cornerstone of many environmentalist organizations' claims. As temperature limits become targets for policymakers, a number of questions are raised about the ethics and costs of achieving these targets and the consequences for those affected by the intended climatic changes.

CLIMATE SENSITIVITY AND TEMPERATURE TARGETS

Before turning to the historical development of temperature targets, it is important to understand how the scientific analysis of climate change afforded the possibility of even considering climate policies in these terms. Effectively, climate scientists modeled two important aspects of the climate system: first, the relationship between CO2 emissions and CO2 concentrations in the atmosphere, drawing on an understanding of the carbon cycle and other dynamics; and second, the relationship between CO2 concentrations and temperatures, attempting to understand how sensitive the climate was to increased concentration levels. The latter concept became articulated as climate sensitivity: the equilibrium temperature response to a doubling of CO2 concentrations in the atmosphere compared to preindustrial levels. If climate sensitivity is 2°C, then doubled concentrations (generally considered to be approximately 550 ppmv) would lead to a 2°C equilibrium temperature increase.1
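
The translation from concentrations to equilibrium warming is commonly approximated as logarithmic, so that each doubling of CO2 adds one climate sensitivity's worth of warming. The minimal sketch below illustrates the arithmetic; the function name, the 280 ppmv preindustrial baseline, and the logarithmic approximation itself are illustrative assumptions rather than any specific model cited here.

    import math

    def equilibrium_warming(concentration_ppmv, sensitivity_per_doubling=2.0,
                            preindustrial_ppmv=280.0):
        """Equilibrium warming (deg C) implied by a CO2 concentration,
        assuming warming grows with the logarithm of concentration so that
        each doubling contributes one climate sensitivity of warming."""
        doublings = math.log(concentration_ppmv / preindustrial_ppmv, 2)
        return sensitivity_per_doubling * doublings

    # A true doubling of a 280 ppmv baseline is 560 ppmv; the 550 ppmv
    # figure quoted in the text gives almost exactly the same warming.
    print(equilibrium_warming(560.0))  # 2.0
    print(equilibrium_warming(550.0))  # approximately 1.95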

The calculation of global mean temperature response to a doubling of CO2 concentrations dates back to Arrhenius' geophysical work at the end of the 19th century.2 This heuristic became widely adopted in 20th century science with the work of, for example, Manabe and Wetherald, who used a heat balance model in 19673 and estimated a temperature response to doubling CO2 concentrations of approximately 2°C. Given the limitations in computer capacity and the need to test the reliability of models, doubling scenarios became the mainstay of climate change scientific research. Manabe and Wetherald4 in 1975 used a pioneering three-dimensional general circulation model to test a doubling scenario and found that the global temperature increase averaged 3°C in the model outputs. Schneider found a range of 1.5–3 K (±4).5 By the time of the publication of the Charney Report in 1979,6 the climate sensitivity to a doubling of CO2 concentrations was estimated to lie between 1.5 and 4.5°C, a range that has been upheld by subsequent IPCC reports even if the top of this range remains unconstrained.

The scientific debates were supplemented by economic and policy analyses. As early as 1977, Nordhaus had reasoned that one could perform a cost-benefit analysis of climate change using the doubling of CO2 concentrations as the heuristic scenario.7 Subsequently, economic analyses were frequently posed in these terms. They would assess the costs and benefits of various policies to manage climatic responses, with the results dependent upon the chosen climate sensitivity.8,9 Although Nordhaus can be argued to be the source of targets such as doubled CO2 or 2°C within the economics literature (as, e.g., Oppenheimer and Petsonk10 suggest), his work used these figures as heuristics rather than normative policy guidance. Nordhaus subsequently stated that a 2°C temperature target, or a cap at a doubling of CO2 concentrations, would be far too costly to adopt,11 although his 2008 book gives some support to temperature targets again.12

Although research that linked emissions to concentrations to temperature provided useful insights, policy framing in the early 1980s recommended awaiting further scientific evidence or adopting prudent emissions cuts, with little to suggest specifically how deep those cuts should be. Nevertheless Bach, for example, called for ‘[a] broad systems approach… to help define some ‘threshold’ value of CO2-induced climate change beyond which there would likely be a major disruption of the economic, social and political fabric of certain societies…. An assessment of such a critical CO2-level ahead of time could help to define those climatic changes, which would be acceptable and those that should be averted if possible’ (p. 4).13 In other words, he started to define the problem of climate policy as one of assessing the risks of climate change rather than addressing carbon emissions as an isolated pollutant. A similar reframing is represented in the conclusions of the 1983 National Academy report ‘Changing Climate’.14 This is important because it shifts policy advice from a focus on emission reductions to a focus on defining which climatic changes are tolerable and which are disruptive. Finding the acceptable level of climate change became a viable research agenda.

DEVELOPING CLIMATE POLICY TARGETS IN THE 1980s AND 1990s

The scientific debates about climate change, however, did not prescribe, or help policymakers prescribe, a definitive answer to the question of how much climate change would be tolerable. During the late 1980s and into the 1990s, the policy terrain increasingly shifted to an arena in which climate targets were actively discussed and debated. Drawing upon the experience of the Vienna Convention, the Director of the United Nations Environment Programme (UNEP), Mostafa Tolba, wished to establish a matching climate convention.15 Indeed, commentators have noted the striking similarities between climate and other atmospheric pollutant policies (ozone and acid rain) in their political arrangements and, most importantly for this review, in their policy focus upon ‘critical loads’.16 Climate change proved more politically challenging, however, and confusion often reigned about exactly what the ‘load’ to be stabilized was: emissions, concentrations, or temperature; equilibrium change or rate of change.10 Researchers discussed all of these targets during the late 1980s, and to clarify the way in which temperature targets emerged it is useful to outline the options presented during this period.

Although the discussions over the extent of precautionary action on climate change lingered on, researchers examined the potential consequences of delaying action to reduce emissions. The policy analysis of Seidel and Keyes in 1983 concluded that changes in energy policies would not significantly affect the date at which the planet was committed to a 2°C temperature rise,17 a conclusion that Mintzer's work in the late 1980s challenged. Mintzer, a senior research associate at the World Resources Institute, calculated the differences in the timescale of climatic effects under various emissions-reduction scenarios;18 work that was later discussed in a Commonwealth report as a useful way of thinking about climate stabilization policy.19 He concluded that energy policy could affect the timing and magnitude of future climatic change and called for policies to prevent (undefined) ‘intolerable levels of global warming’ rather than ‘wait and see’ (p. 43).20 Likewise the Summary for Policymakers from the 1990 IPCC First Assessment Report (FAR) Working Group 1 stated that ‘we calculate with confidence that… immediate reductions in emissions from human activities of over 60% [are needed] to stabilize concentrations at today's levels…’ (p. 1).20 Instead of proposing specific temperature targets to work back from, these reports examined potential future scenarios with reduced emissions and their likely climatic outcomes, taking into account uncertainties about climate sensitivity and economic change. This work, however, did not explicitly draw policy conclusions about the maximum level of warming; it was focused on controlling emissions rather than the global thermostat. Other discussions at the time, particularly in Europe, were more forthright about the limits within which climate change should be constrained.

Two targets materialized in debates in Europe in 1987. Both the 2°C above preindustrial temperature target and a 0.1°C per decade maximum tolerable limit emerged from discussions at the Bellagio workshop in 1987, a workshop supported by the Advisory Group on Greenhouse Gases (AGGG, a group established by the UNEP, World Meteorological Organization, and International Council of Scientific Unions) and, behind them, the Stockholm Environment Institute (SEI).21 Legend has it that the targets emerged from a dinner conversation about unpublished work on plant species on the shores of a North American lake, which was subsequently reported in a plenary conference session.22 Although the AGGG failed to make the kind of policy impact that the IPCC was later able to make,21 there is an enduring legacy of this approach to climate risk management. Reports in the early 1990s by Rijsberman, Swart, and Vellinga, among others, established a target-based approach for climate policy. These authors were responsible for creating the so-called traffic light system for climate risk management. The system had three color codes: green, amber, and red. Green represented limited damage and risk, and implied a temperature rise of less than 0.1°C per decade and a sea level rise of less than 0.02 m per decade. Amber represented extensive damage and risk of instabilities, corresponding to a temperature rise of between 0.1 and 0.2°C per decade and a sea level rise of between 0.02 and 0.05 m per decade. Red represented significant disruption to society and possibly rapid non-linear responses, occurring with temperature rises above 0.2°C per decade and sea level rises above 0.05 m per decade.23 What is interesting is that the boundary between green and amber was also estimated to correspond to a maximum temperature increase of 1°C, while the boundary between amber and red was delineated at 2°C. Rijsberman, Swart, and Vellinga's work23,24 concluded that temperature rise should be kept below 2°C above preindustrial temperature; indeed these papers make clear that their proposal was addressed to the international community and intended as a global strategy. This work appears to be the first substantive suggestion that maximum temperature change (along with rate of change) should be defined as a risk-based target for climate policy. The rationale for the rate of change target was the adaptability of ecosystems to climatic change, whereas the temperature target marked the boundary between changes that could be accommodated and those with serious or costly implications or, importantly given the subsequent focus on ‘tipping points’, potential nonlinear biophysical responses.25
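
The traffic light scheme lends itself to a simple decision rule. The sketch below encodes the thresholds exactly as reported above; the function itself, and the choice to let the more severe of the two indicators set the overall rating, are illustrative assumptions rather than part of the original proposal.

    def traffic_light(temp_rise_per_decade, sea_level_rise_per_decade):
        """Classify a scenario against the Rijsberman/Swart/Vellinga bands:
        green - limited damage and risk
        amber - extensive damage, risk of instabilities
        red   - significant disruption, possible rapid non-linear responses
        Temperature in deg C per decade, sea level in meters per decade."""
        def band(value, amber_threshold, red_threshold):
            if value < amber_threshold:
                return 0  # green
            if value <= red_threshold:
                return 1  # amber
            return 2      # red

        worst = max(band(temp_rise_per_decade, 0.1, 0.2),
                    band(sea_level_rise_per_decade, 0.02, 0.05))
        return ('green', 'amber', 'red')[worst]

    print(traffic_light(0.15, 0.01))  # 'amber': the temperature rate dominates
    print(traffic_light(0.25, 0.03))  # 'red'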

Other reports at the time drew similar conclusions, often referencing the Bellagio workshop. For example, the International Project for Sustainable Energy Paths (IPSEP) report, drafted in 1988 and published in 1989, defines two forms of targets for climate policy: a rate of change of 0.1°C per decade and a maximum CO2 concentration of 400 ppmv, which would give a change of between 1 and 3°C above preindustrial temperatures.26 This, they suggested, should supplant policy advice based on doubled CO2 concentrations. The 1988 Toronto Conference and the November 1989 Noordwijk Ministerial Conference recommended long-term goals for emissions stabilization and that climate change should be kept within the natural capacity of the planet (noting the rate of change target).27,28 The Organization for Economic Co-operation and Development (OECD) report in 1992 also considered reducing emissions by 60% to stabilize the concentrations of GHGs in the atmosphere (the IPCC FAR advice) as well as the rate of change and temperature targets.29 It concluded that all these targets would be costly. The problem with emissions targets was that their economic justification was unclear (by how much should emissions be reduced, given the cost of cutting them?), while concentration or temperature targets depended on constraining the uncertainties in the models (which did not look like a quickly solvable problem). The work of the influential Scientific Advisory Council on Global Environmental Change for the German Federal Government (WBGU) furthered the case for temperature targets by arguing that climate change should be seen as a problem of control. Global mean temperature must not exceed the range of fluctuation of the present geological epoch extended by ±0.5°C (a ‘tolerable window’ of 9.9–16.6°C, leaving the planet 1.3°C of warming before crossing the upper maximum, recognizably close to 2°C), nor should it incur excessive cost, defined as the adaptation cost of a more than 0.2°C per decade temperature rise (5% of GDP).30 As Tol points out, these figures make significant assumptions about ecological impacts and costs, respectively.22
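
As a formal decision rule, the WBGU window reduces to two simple constraints. The sketch below is a minimal rendering under the figures quoted above; the present-day global mean of 15.3°C used in the example is an illustrative assumption implied by the stated 1.3°C of headroom, not a figure taken from the WBGU report.

    def within_tolerable_window(global_mean_temp, warming_rate_per_decade):
        """WBGU-style check: global mean temperature must stay inside the
        9.9-16.6 deg C window and warm no faster than 0.2 deg C per decade."""
        return 9.9 <= global_mean_temp <= 16.6 and warming_rate_per_decade <= 0.2

    present = 15.3  # assumed present-day global mean (deg C)
    print(within_tolerable_window(present + 1.3, 0.15))  # True: at the upper edge
    print(within_tolerable_window(present + 1.4, 0.15))  # False: window exceeded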

Temperature targets became acceptable during the early to mid-1990s with a scientific and political convergence on defining unacceptable risks or costs. This was solidified in policy advice, of which the most significant expression since Noordwijk, albeit without precise numbers, was Article 2 of the United Nations Framework Convention on Climate Change (UNFCCC) in 1992: ‘The ultimate objective of this Convention… is to achieve… stabilisation of greenhouse gas concentrations in the atmosphere at a level that would prevent DAI with the climate system.’31 As numerous commentators have pointed out, the objective turns upon the definition of the term ‘dangerous’.32,33 Defining the threshold in the damage function at which costs rise rapidly could be a useful proxy for excessive anthropogenic interference. Indeed, the Second Assessment Report (SAR) of the IPCC in 1995 drew explicit assessments of the damages resulting from various changed climates.34,35 Integrated assessment developed significantly during the early 1990s, especially as a form of quantitative modeling.36 Commentators such as Swart and Vellinga argued for a risk-based analysis of ecological vulnerability to assess regional, and ultimately global, critical levels.37 Targets were thus about defining acceptable levels of damage rather than the avoidance of all damage. Economic models dominated the social science,36 and this provided fertile ground for cost-effectiveness analysis.38 More scientific advice could thereby be incorporated into economic approaches, making them less reliant on purely optimizing analyses. The logic ran that climate change must be halted before it becomes dangerous or too costly, and thus policy action must ensure that the chosen target is not exceeded. The two targets just described, rate of change or maximum change, were the obvious candidates, and both had support within the UNFCCC. As emissions reduction targets remained on the political agenda, the rate of change target declined in prominence (although it was often given as an additional weighting factor in discussions of targets39), while the temperature target of 2°C emerged strongly in the run up to the EU's 1996 declaration. The rate of change target disappeared largely, it seems, because emerging studies showed that natural variability was higher than the proposed limit (see Ref 22). It is interesting to note that US politicians and scientists were much more reticent than Europeans in the discussion of targets. One reason may be highlighted by the development of the IPCC, an organization advocated by US government agencies (amidst internal disagreements) as a means to prevent a perceived politically activist AGGG from advancing an agenda ahead of scientific and governmental review of climate change science.15 Unlike the debates in Europe, the IPCC specifically stayed away from endorsing targets for political action.40

THE 2°C TARGET AND ITS CRITICS

By 1996, the EU commissioners focused not just upon a target of holding 2000 emission levels to 1990 levels, but also upon working toward a maximum allowable temperature target of 2°C.41 The EU target drew inspiration from the lower end of the mid-range IPCC emissions scenario in the 1995 SAR. This was interpreted as a 2°C temperature rise by 2100,41,42 and as the point beyond which climatic ‘dangers’ would become more visible.43 Indeed that IPCC report suggested, and the 2001 report reiterated, that 2°C may represent a boundary beyond which there would be risks to many unique threatened ecosystems and a large increase in the number of extreme weather events.44,45 The magnitude of risks becomes even higher beyond 4°C. While integrated assessment modeling provided possibilities for suggesting ‘danger’ points, it did not uniformly suggest a specific target. Smith et al.'s IPCC chapter44 highlights no single target, though their distinction between small (under 2°C), medium (2–3°C), and large (over 3°C) temperature increases leaves the impression that any policy would be framed within these bands. Tol's review of the literature22 has suggested that there is little explicit scientific evidence for why 2°C should be the preferred target.

Despite ongoing scientific debates, the target became a political anchor for mitigation policy. In its 2005 restatement of the 2°C target, the European Parliament concluded that the target was scientifically justified and that it was vital to promote cost-effective action to ensure temperatures did not rise beyond this point.46 Likewise the British-chaired G8 meeting in 2005 reaffirmed the 2°C target.45 These statements are not legally enforceable,22 but there are movements to make this target the subject of a global treaty. In 2009, the Copenhagen Accord, agreed at the United Nations Climate Change Conference of the Parties (COP-15), proclaimed the 2°C figure as scientifically justified, with ‘deep cuts’ in global GHG emissions required.47 ‘Action to meet this objective’ should be ‘consistent with science and on the basis of equity’ (Principle 2 of the Copenhagen Accord).47 A future international climate accord may make 2°C a legal goal. These political pronouncements about 2°C, combined with the scientific discussions, gave perceived if not direct support for the target as it was taken up in environmentalist literature from the early 1990s, including by Friends of the Earth and Greenpeace.48,49 Subsequently the 2°C target has been widely embraced, as expressed, for example, in the ‘Two Degrees of Separation’ report,50 the Stop Climate Chaos Coalition's choice of 2°C as their target,51 and various popular books affirming this target.52,53 Once a concentration or temperature target has been agreed, it is easier for policymakers, environmentalists, and economists to start to work out how to achieve it, whether through the pursuit of various mitigation solutions54 or per capita distribution processes such as contraction and convergence55 or carbon rations.52 A target, whether perceived as scientific or not, at least offered an interface between the risks scientists were discussing and a political arena in which how to deliver low-carbon societies was being actively debated. Two degrees Celsius represented a useful ‘boundary object’ interfacing between science, social science, and policymakers (the term boundary object comes from Ref 56).

Although the 2°C target has been embraced in many quarters, there has been criticism of the choice of target and of the logic of temperature targets more generally. Some economists have criticized the 2°C target for its weak justification in cost-benefit studies of climate policy.22 Temperature stabilization appears overwhelmingly costly when a moderate discount rate is used. Recent analyses such as the Stern Review57 technically move away from specifically endorsing a 2°C target, instead supporting 550 ppmv, though this has not stopped significant discussion about the chosen discount rate as well as the concentration target.12,58 Scientists too have been skeptical of the value of the 2°C target, with Hansen59 suggesting that it is not a responsible target as it already commits the world to significant climate change. Likewise, any temperature target must contend with the significant uncertainties in climate sensitivity.60 Interestingly, the EU Council of Ministers adopted ‘lower than 550 ppmv’ as the concentration target that should guide mitigation for 2°C in 1996 (cited in Ref 41). By 2005, this had been altered to well below 550 ppmv.46 The EU target, in its original formulation as equivalent to 550 ppmv, is shown by Meinshausen,1 in an examination of 11 climate sensitivity probability density functions, to have between a 63 and 99% (mean 82%) chance of exceeding 2°C. Even a 450 ppmv target has a 26–78% (mean 54%) chance of exceeding 2°C. This variance highlights the risks of deriving any specific concentration target from a temperature target, and it makes long-term temperature targets problematic mechanisms if they are interpreted to provide specific advice on emissions pathways.61,62 The EMF 22 International Scenarios, which involve 10 integrated assessment models, examine the practical question of how mitigation policy might proceed now within the context of long-term temperature or concentration targets.63 This work suggests that to meet a concentration target of 450 ppmv by 2100 (and even 550 ppmv), there would need to be swift, decisive emissions reductions at an international level.63 The magnitude of the challenge looms large. Thus, there has been significant contention about the importance and substance of establishing temperature targets and about whether scientists, economists, or policymakers should play the leading role in setting them.
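
The translation from a concentration ceiling to a probability of exceeding 2°C can be illustrated with a simple Monte Carlo calculation. The sketch below combines the logarithmic warming approximation used earlier with a single lognormal climate sensitivity distribution; the parameters (median 3°C, with a spread that places most mass between roughly 1.5 and 4.5°C) are illustrative assumptions standing in for the 11 published density functions Meinshausen examined.

    import math
    import random

    def prob_exceeding_2c(concentration_ppmv, n_samples=100000, seed=0):
        """Estimate the chance that stabilizing CO2 at a given concentration
        commits the world to more than 2 deg C of equilibrium warming,
        sampling climate sensitivity from an assumed lognormal distribution."""
        rng = random.Random(seed)
        mu, sigma = math.log(3.0), 0.4  # illustrative sensitivity distribution
        doublings = math.log(concentration_ppmv / 280.0, 2)
        exceed = sum(rng.lognormvariate(mu, sigma) * doublings > 2.0
                     for _ in range(n_samples))
        return exceed / n_samples

    for target in (450, 550):
        print(target, prob_exceeding_2c(target))
    # With these assumed parameters the results land near the cited means:
    # roughly 0.5 for 450 ppmv and roughly 0.8 for 550 ppmv.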

CONCLUSION

The 2°C temperature target has become a critical part of the policy proposals of the EU, the UN Conference of the Parties, and other organizations. The target has historical resonance, from being initially related to the temperature response expected from a doubling of CO2 concentrations in the atmosphere, through to its adoption as the limit past which the climate would become ‘dangerous.’ It offers an expected goal, a deliverable climate for the pains of decarbonization policies, which is useful in the context of justifying costly decisions. The target also focuses attention on the question of future risks and which ones are bearable, while at the same time encouraging debate about the maximum sustainable use of the atmospheric resource. Discussed and disputed by scientists and social scientists alike, the adoption of a 2°C target from a set of heuristics is not without problems. First, scientists have doubted the efficacy of the target given the difficulties of establishing such a target amid so many uncertainties about climate sensitivity.1 Secondly, economists have critiqued it for being too costly and for resting on insufficiently clear damage costs.22 Thirdly, researchers interested in science-policy dynamics have critiqued it for forcing a rather tenuous policy debate that has detracted from the process of reducing emissions.61 The 2°C target has been such a focus for environmentalists and policymakers that, now it is sanctioned within the Copenhagen Accord, it will likely play a major role in future climate change negotiations, even if some scientists, economists, and other researchers have doubts about its suitability or practicality.

What will happen if the temperature target is exceeded in the future? Will confidence in the ability to control climate change decline, paving the way for geoengineering solutions or serious political questioning of the wisdom of scientists? More concretely, if policies are enacted to hold the planet to this temperature target, who will bear the costs of adapting to the new climate?61 In the contested domain of climate change policy, the 2°C temperature target represents one particular approach to the problem. Unpacking the historical development of this concept provides an important insight into the ways in which climate policy has developed and the various strengths and weaknesses of this framing. Staking out visions of the future will continue to play an important role in imagining future climates and proportionate human responses. This is an interesting academic and policy terrain that can only expand in scope as international discussions on what to do about climate change proceed.
