Online misinformation about climate change

Policymakers, scholars, and practitioners have all called attention to the issue of misinformation in the climate change debate. But what is climate change misinformation, who is involved, how does it spread, why does it matter, and what can be done about it? Climate change misinformation is closely linked to climate change skepticism, denial, and contrarianism. A network of actors is involved in financing, producing, and amplifying misinformation. Once in the public domain, characteristics of online social networks, such as homophily, polarization, and echo chambers—characteristics also found in the climate change debate—provide fertile ground for misinformation to spread. Underlying belief systems and social norms, as well as psychological heuristics such as confirmation bias, are further factors that contribute to the spread of misinformation. A variety of ways to understand and address misinformation, drawn from a diversity of disciplines, are discussed, including educational, technological, regulatory, and psychological approaches. No single approach addresses all concerns about misinformation, and all have limitations, necessitating an interdisciplinary response to this multifaceted issue. Key research gaps include understanding the diffusion of climate change misinformation on social media, and examining whether misinformation extends to climate alarmism as well as climate denial. This article explores the concepts of misinformation and disinformation, defining disinformation as a subset of misinformation, and reviews a diversity of disciplinary and interdisciplinary literature to fully interrogate the concept of misinformation—and within it, disinformation—particularly as it pertains to climate change.

Before beginning a detailed discussion of online misinformation about climate change, it is necessary to examine what is meant by "misinformation." The academic literature on misinformation often offers no definition of the term, or simply refers to a specific dictionary definition (Karlova & Fisher, 2012); however, there are numerous, and sometimes contradictory, definitions of misinformation in dictionaries and reference materials. A common theme is that misinformation pertains to information that is false, inaccurate, or misleading; note that to be misleading, the information itself need not be false, but may be presented out of context. Differences between definitions concern the intention behind misinformation. Some definitions hold that misinformation is intended to be spread and to deceive (Cambridge Dictionary, 2019; Oxford Dictionaries, 2019), while others allow it to be mistakenly created or spread, with no intention to deceive (Johns Hopkins Sheridan Libraries, 2019; University of Michigan Library, 2019; Wu et al., 2016). Some middle-ground definitions do not mention intentionality at all (Merriam Webster, 2019), or state that there may or may not be intention to deceive (Dictionary.com, 2019).
A closely related term is "disinformation." Definitions of this term mostly describe it as false information which is deliberately created and spread to mislead (Dictionary.com, 2019; Oxford Dictionaries, 2019), deceive (Cambridge Dictionary, 2019), or influence public opinion or obscure the truth (Merriam Webster, 2019; University of Michigan Library, 2019). For disinformation, the information itself may not be false, but may be accurate information deliberately presented in such a way as to be misleading (Fallis, 2009). Karlova and Fisher (2012) define both misinformation and disinformation as being informative, possibly true, possibly complete, and possibly current, with the one difference being that misinformation is not deceptive, while disinformation is. van der Linden (2017) makes the same distinction in terms of deception, defining misinformation as information that is false or incorrect, possibly as a result of human error, while disinformation is false or incorrect information where there is clear intent to cause harm or purposefully deceive others. Lazer et al. (2018) similarly define misinformation as false or misleading information and disinformation as false information that is purposely spread to deceive people. However, in the context of online information and social media, the intention of an actor creating or sharing information can be hard to assess. Furthermore, information that is initially created or shared with intent to deceive may go on to be shared by others without intent to deceive, and vice versa. The full "journey" of this information is of interest to scholars, and a useful definition of "misinformation" must pragmatically allow its identification without knowledge of intentions. Thus, in this article, the following definitions are used: "Misinformation is misleading information that is created and spread, regardless of whether there is intent to deceive.
Disinformation is misleading information that is created and spread with intent to deceive." Using these definitions, it can be seen that misinformation is a subset of information, and disinformation is in turn a subset of misinformation (Figure 1); studying misinformation therefore by default includes disinformation.
FIGURE 1 Hierarchy of information, misinformation, and disinformation

Looking more specifically at climate change misinformation, there is very little research which explicitly uses the term "misinformation." However, there has been long-standing academic discussion of systematic attempts by various actors (see below) to discredit climate science and confuse political debate on climate change, by casting doubt on well-supported theories and providing alternative and often fallacious interpretations of observations. A number of terms specific to climate change are used to describe these types of information and communicative behavior, which are misleading and as such fit our definition of misinformation. Examples of this terminology are: "skeptical discourse" (Boussalis & Coan, 2018), "skeptical arguments" and a "denial campaign" (Elsasser & Dunlap, 2013), "attempted expert knowledge de-legitimization and contestation" and "reinterpretation of existing climate science knowledge claims" (Sharman, 2014), "contrarian messages/discourse" (Farrell, 2016a), "contrarian information" (Farrell, 2016b), and "skepticism and doubt" (Reed, 2016). Thus, to an extent, the recent academic focus on misinformation can, for climate change, be seen as a re-framing of longer-standing academic discourse on these topics; indeed, Farrell (2019) recently introduced the term misinformation in relation to a dataset used for two previous papers (Farrell, 2016a, 2016b), neither of which used the term. Similarly, Reed (2016) does not himself use the term misinformation; however, Pearce et al. (2019) cite Reed's paper as an example of literature centered on climate change misinformation.
It is clear that skepticism, contrarianism, and denial are concepts often associated with climate change misinformation. It should be noted that this is not skepticism in its original meaning as an integral part of the scientific method, but in its frequently applied usage referring to those who doubt climate change or reject mainstream climate science. O'Neill and Boykoff (2010) urge that care should be taken to ensure the terms are appropriately applied depending on subject, issue, context, and intervention, as "continued indiscriminate use of the terms will further polarize views on climate change, reduce media coverage to tit-for-tat finger-pointing, and do little to advance the unsteady relationship among climate science, society, and policy". Despite this warning, the various terms continue to be used, as found by Bjornberg, Karlsson, Gilek, and Hansson (2017) in their review of the scientific literature on climate change denial, where all these terms appear, along with anti-science, doubt, and dismissal.
Note that misinformation is not associated with a particular ideological position in the climate debate. Alongside climate denial, "climate alarmism" could also be perceived as misinformation; indeed, one author draws parallels between climate change alarmism and the story of Chicken Little (Halliman, 2017), while Hulme (2006), then Director of the Tyndall Centre for Climate Change Research, wrote an opinion piece published on the BBC News website in which he warned that "the discourse of catastrophe is in danger of tipping society onto a negative, depressive, and reactionary trajectory". However, the amount of literature examining climate change alarmism is negligible compared to that examining climate change skepticism/contrarianism/denial, and so for the purposes of this discussion, the focus is exclusively on the latter.

| WHO IS INVOLVED IN SPREADING MISINFORMATION ABOUT CLIMATE CHANGE?
In a review of the literature on climate change denial, Bjornberg et al. (2017) find six categories of actors and organizations that deny environmental science in general and climate science in particular. Four variants of climate science denial are identified: trend denial (no significant warming is taking place), attribution denial (it is not anthropogenic), impact denial (it will not have significant negative impacts on humans or the environment), and consensus denial (there is no consensus among climate scientists about anthropogenic climate change). The six categories of actor, which are not mutually exclusive, are (a) scientists; (b) governments; (c) political and religious organizations, including think tanks, foundations, and institutes; (d) industry, often oil or coal extraction, but also the steel, mining, and car industries; (e) media, particularly those with right-wing affiliations; and (f) the public, particularly politically conservative white males. Dunlap and McCright (2011) find a similar set of actors in their analysis of the "climate change denial machine". The same authors talk of "forces of anti-reflexivity," particularly the industrial sector and the conservative political movement, that deny the significance of climate change. Dunlap (2013) further writes of "an organized disinformation campaign waged by a loose coalition of industrial (especially fossil fuel) interests and conservative foundations and think tanks that utilize a range of front groups and astroturf operations, often assisted by a small number of 'contrarian scientists', and aided by conservative media and politicians and more recently a bevy of sceptical bloggers." Similarly, Schafer (2015) identifies four framings of climate change, two of which ("Scientific Uncertainty" and "Economic Development") have misinformation as their central organizing idea.
The main "sponsors" of these frames, providing monetary, personal, cultural, and/or symbolic resources, are identified as the fossil fuel, coal, automotive, and electric utility industries and their associations, think tanks, and conservative politicians, especially in the US. A subset of these actors was also found by Ding et al. (2011), who highlight the role of the Bush administration, the fossil fuel industry, conservative think tanks, and political pundits in stressing the scientific uncertainty around climate change. They also suggest that the way mainstream news media report on climate change presents a "false balance", a concept first introduced by Boykoff and Boykoff (2004). Analysis in 2007 found that this false balance was no longer evident (Boykoff, 2007), and more recently Brüggemann and Engesser (2017) investigated this further and found that climate journalists have moved on from the historical norm of ensuring balance, but that they still give substantial media attention to contrarians, particularly when a certain set of conditions is met: contrarian authors writing in a right-leaning media outlet, in a country where elite voices and lobbyists back the denial of climate change.
Some researchers focus on just one of these categories or key components of climate change denial. For instance, Elgesem, Steskal, and Diakopoulos (2015) analyze climate change discourse in the blogosphere and find that blogs are a crucial outlet for climate change skeptics. Similarly, Sharman (2014) uses social network analysis to examine the climate-skeptical blogosphere and finds that the central blogs in the network are key protagonists in delegitimizing and contesting expert knowledge and reinterpreting existing climate science knowledge claims, while Matthews (2015) investigates one particular blog post in which climate skeptics give reasons for their skepticism, and finds that blogs are influential on both sides of the debate. Boussalis and Coan (2018) undertake a systematic overview of misinformation spread by conservative think tanks in the form of skeptical discourse and argue that these think tanks play a central role in climate change skepticism. Elsasser and Dunlap (2013) state that the conservative "echo chamber" is a crucial element of what they term "the climate change denial machine", and investigate this through an analysis of 203 opinion editorials ("op-eds") written by 80 different conservative newspaper columnists and published from 2007 to 2010. They find that these op-eds convey "a highly dismissive view of climate change and critical stance toward climate science" and play a "crucial role in amplifying the denial machine's messages".
Other researchers consider networks of actors from a cross-section of the categories. Goldberg, Marlon, Wang, van der Linden, and Leiserowitz (2020) investigate 14 pairs of consecutive election cycles and find that oil and gas companies systematically provide financial support to "anti-environmental" politicians whose policy positions and voting history on the environment align with their interests. Farrell (2016a, 2016b) identifies a network of 164 organizations in the climate change countermovement (CCCM), including think tanks, foundations, trade associations, and grassroots lobby firms, and 4,556 individuals with ties to these organizations. He analyzes all written and verbal texts produced by this network between 1993 and 2013, encompassing 40,785 texts, and finds that the organizations with corporate funding are more likely to have written and disseminated texts meant to polarize the climate change issue, and that corporate funding influences the thematic content and discursive prevalence of this material. He uses this same dataset to investigate the link between the growth of misinformation about climate change and the growing influence of US private philanthropy, and finds that the network of actors spreading misinformation about climate change was "increasingly integrated into the institution of US philanthropy" (Farrell, 2019). Note, as mentioned above, that in his two earlier papers (from 2016) Farrell does not explicitly label this network of actors as purveyors of misinformation, but in his 2019 paper he retrospectively describes them as being "actively involved in the wide-spread promulgation of scientific misinformation about climate change" (Farrell, 2019). Similarly, Brulle (2014) investigates the funding of the CCCM in the US and finds that the "overwhelming majority" of funding for the CCCM comes from conservative foundations, and additionally identifies a trend toward concealing the sources of funding.
In summary, the literature suggests that corporate and philanthropic actors with a vested interest provide funding to a range of actors who produce climate change misinformation. This misinformation is then repeated and amplified through the "influencers echo chamber" of people in positions of power, such as the media, politicians, and prominent bloggers, from where it reaches a wider audience, as depicted in Figure 2. Once misinformation exists online, its reach can be amplified and echoed through the sharing and repetition behaviors of online social media users. Note that the stages in this model are not mutually exclusive, and a number of feedback loops exist, so that information does not simply flow linearly through the stages. The diffusion of misinformation through social media is of particular interest given the unprecedented speed and reach of this medium, and is discussed next.

| HOW DOES MISINFORMATION ABOUT CLIMATE CHANGE SPREAD?
The research into the diffusion of misinformation on social media can be split broadly into theoretical and empirical approaches. Many of the theoretical models are based on the epidemic (or contagion) model, a mathematical model of the spread of infectious diseases in a population, whereby an "infected" person can spread the information through social connections to a person who is "susceptible," who in turn can infect others, possibly resulting in an epidemic (Amoruso, Anello, Auletta, & Ferraioli, 2017; Qiu, Oliveira, Sahami Shirazi, Flammini, & Menczer, 2017; Shah & Zaman, 2011; Törnberg, 2018; Webb et al., 2016). From a different perspective, Karlova and Fisher (2012) investigate the spread of misinformation in terms of human information behavior, and introduce a social diffusion model of misinformation, whereby misinformation is the product of a social process in which it is formed, spread to others, judged, and used, all against a backdrop of social, cultural, and historical factors.
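The epidemic analogy can be made concrete with a minimal simulation. The sketch below is illustrative only and is not drawn from any of the cited models; the network, seed node, and transmission and recovery probabilities are all hypothetical.

```python
import random

def simulate_spread(adjacency, seed_nodes, p_transmit=0.3, p_recover=0.1,
                    steps=50, rng=None):
    """SIR-style sketch: 'infected' users share misinformation with
    'susceptible' neighbours, and eventually stop sharing ('recovered')."""
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    state = {node: "S" for node in adjacency}
    for node in seed_nodes:
        state[node] = "I"  # seed the misinformation
    for _ in range(steps):
        if not any(s == "I" for s in state.values()):
            break  # no one is still sharing
        updates = {}
        for node, s in state.items():
            if s == "I":
                for nbr in adjacency[node]:
                    if state[nbr] == "S" and rng.random() < p_transmit:
                        updates[nbr] = "I"  # neighbour picks it up and shares
                if rng.random() < p_recover:
                    updates[node] = "R"  # loses interest, stops sharing
        state.update(updates)
    return state

# Toy network: a tight cluster (0-3) with a chain (4, 5) hanging off it.
network = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3],
           3: [0, 1, 2, 4], 4: [3, 5], 5: [4]}
final = simulate_spread(network, seed_nodes=[0])
reached = sum(s != "S" for s in final.values())  # users ever exposed
```

Richer variants in the literature add, for example, fact-checking states or competing cascades, but the susceptible/infected/recovered skeleton above is the common core.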
An important research question in misinformation concerns how the structure and characteristics of online social networks influence the diffusion of misinformation, and the role played by human information behavior. Several interconnected characteristics of social networks emerge frequently as factors in the way misinformation diffuses. Homophily is the ubiquitous tendency for humans to be linked to others who share their traits, observed in almost all (offline and online) social networks. This behavior is encouraged by the construction of many social media platforms, which promote homophily by suggesting new connections based on matching of user profiles or mutual connections, for instance, Facebook "friends" and Twitter "followers." Homophily is colloquially phrased as "birds of a feather flock together" (Himelboim, McCreery, & Smith, 2013; McPherson, Smith-Lovin, & Cook, 2001). In the present context, homophily implies that a user's engagement with online content correlates with the number of friends in their local network that have similar consumption patterns (Bessi et al., 2015b; Shin et al., 2017). Homophily can lead to homogenous clusters of like-minded users, also known as "echo chambers," where information (and misinformation) "echoes" round the group (Shin et al., 2017; Vicario et al., 2016). Online forums, which permit users a high level of selectivity in the discussions in which they participate (e.g., Reddit "subreddits"), also encourage like-minded groups to form, as it is easy for users to leave groups where their views are not shared.

FIGURE 2 The climate change misinformation network [1: Bjornberg et al., 2017; 2: Dunlap & McCright, 2011; 3: McCright & Dunlap, 2011; 4: Dunlap, 2013; 5: Schafer, 2015; 6: Ding et al., 2011; 7: Elgesem et al., 2015; 8: Sharman, 2014; 9: Matthews, 2015; 10: Boussalis & Coan, 2018; 11: Elsasser & Dunlap, 2013; 12: Farrell, 2016a; 13: Farrell, 2016b; 14: Farrell, 2019; 15: Brulle, 2014; 16: Goldberg et al., 2020]
Echo chambers can exacerbate opinion polarization (Sunstein, 2007), characterized by polarized communities formed around particular issues or types of content. Shin et al. (2017) and Shao, Hui, et al. (2018) find that polarization determines the selection of content to share, while Bessi et al. (2014, 2015a, 2015b) and Mocanu et al. (2014) find that the probability of individuals continuing to consume posts increases with how polarized their views are, which drives the cascade size (Vicario et al., 2016). The algorithmic popularity bias of online social networks, whereby content is promoted based on being engaging rather than on trustworthiness, is also found to be a contributing factor to polarization (Sirbu, Pedreschi, Giannotti, & Kertész, 2019).

BOX 1 Online misinformation: Key terms and concepts
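Homophily and echo-chamber formation can be quantified in simple ways; one common starting point is the share of network ties that join users holding the same stance. The toy network and stance labels below are hypothetical, not data from any cited study.

```python
def edge_homophily(edges, attitude):
    """Fraction of ties joining users with the same attitude.
    Values near 1.0 suggest a strongly homophilous, echo-chamber-like
    network; around 0.5 (for two equal groups) suggests mixing."""
    same = sum(attitude[u] == attitude[v] for u, v in edges)
    return same / len(edges)

# Hypothetical follower ties among six users, labelled by stance.
attitude = {"a": "activist", "b": "activist", "c": "activist",
            "d": "skeptic", "e": "skeptic", "f": "skeptic"}
edges = [("a", "b"), ("b", "c"), ("a", "c"),  # activist cluster
         ("d", "e"), ("e", "f"), ("d", "f"),  # skeptic cluster
         ("c", "d")]                          # single bridging tie
h = edge_homophily(edges, attitude)  # 6 of the 7 ties are within-group
```

Published studies typically use more robust measures, such as network assortativity, but the intuition is the same: the fewer the cross-group ties, the less chance a correction has of crossing between communities.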
Human behaviors identified as factors in the spread of misinformation include belief systems and confirmation bias, whereby users preferentially consume information that coheres with their belief systems, and social norms, whereby people tend to trust, and therefore accept, information received from others within their social network (Bessi et al., 2014; Friggeri et al., 2014; Mocanu et al., 2014; Vicario et al., 2016).
To date there has been little research specifically into the diffusion of climate change misinformation. However, previous research using social network analysis of highly active social media users discussing climate change found strong homophily between polarized "activist" and "skeptic" groups and evidence of echo chambers (Williams, McMurray, Kurz, & Lambert, 2015), suggesting that climate change debate on social media might be susceptible to the diffusion of misinformation. Research has also shown that ideology, values, and social norms play an important role in people's attitudes to climate change (Corner, Markowitz, & Pidgeon, 2014;Gifford, 2011), and proposed that climate change is a belief system (Bhagwat, Economou, & Thornton, 2016), suggesting that social media users with particular ideologies, belief systems, and perceptions of social norms about climate change may be more susceptible to spreading, consuming, and accepting climate change misinformation. Figure 3 shows a summary of the interconnected characteristics of online social networks and their underpinning human and platform factors which increase the susceptibility of social media users to spread, consume, and accept misinformation.

| WHY DOES MISINFORMATION ABOUT CLIMATE CHANGE MATTER?
The main strategy used by the "denial machine" is to create doubt in the minds of the public. Oreskes and Conway (2011) write of the "Merchants of Doubt," and explain how a strategy to seed feelings of doubt and skepticism was first used in the tobacco industry as a way to combat the emerging links between tobacco smoking and cancer. They quote an infamous memo by a tobacco executive: "Doubt is our product, since it is the best means of competing with the 'body of fact' that exists in the minds of the general public. It is also the means of establishing a controversy." Begley (2007) refers to the "paralyzing fog of doubt around climate change". This doubt takes three main thrusts: doubt about the reality of climate change, doubt about the urgency, and doubt about the credentials of climate scientists (Boussalis & Coan, 2018; Harvey et al., 2018; Moser, 2010).

FIGURE 3 The interconnected characteristics of online social networks, and their underpinning human and platform factors, which may increase the susceptibility of social media users to consume, accept, and spread misinformation
Some researchers have suggested that climate skepticism and the doubt created by it have confused the public, increased existing political polarization, led to political inaction, and stalled support for, or led to the rejection of, mitigation policies (Anderegg, Prall, Harold, & Schneider, 2010; Benegal & Scruggs, 2018; Brulle, 2018; Cook et al., 2018; Ding et al., 2011; van der Linden et al., 2017). Kahan (2010, 2012) and Pearce et al. (2017a, 2017b) disagree, arguing in particular that addressing the information deficit by attempting to quantify and communicate a scientific consensus will not remove the barriers to the implementation of climate change mitigation policy. Similarly, Pearce et al. (2017b) claim that Grundmann and Stehr found scientific consensus was not an important factor in setting effective policy. However, Pearce et al. (2017a) do acknowledge that responding to climate change is a deeply political process, that social media provides a means of studying "the political life of climate change", and that discussions on social media following the publication of the 2013 IPCC report gave rise to an "attachment of new public meanings to a scientific report" which "opens a window into the politics of dissensus, rather than of consensus, which is critical to understand and engage with if widespread support for policy measures is to be gained." Boussalis and Coan (2018) found that the tactic of questioning the factual basis of anthropogenic global warming and the integrity of climate scientists appeared semantically closer to politics than to science, suggesting that skepticism about climate change is often rooted in politics. It could be argued that climate change misinformation is a core factor in the "politics of dissensus," and that there is therefore validity in the claims that it has led to political inaction and stalled support for climate action.
The broader research into the impact of misinformation highlights emotional responses at an individual level, including panic, suspicion, fear, worry, and anger, and the effects of these responses on the decisions and actions people take (Budak, Agrawal, & El Abbadi, 2011; Karlova & Fisher, 2012), while other researchers raise concerns about the threat misinformation poses to, and its adverse impact on, societies, particularly democratic ones (Cook et al., 2018; Lazer et al., 2017; Lewandowsky et al., 2017; Webb et al., 2016). Lewandowsky et al. (2017, pp. 7-8) go on to discuss "more insidious and arguably more dangerous elements of misinformation", such as misinformation causing people to stop believing in facts altogether, and adversely affecting trust in government services and institutions, impacting the "overall intellectual well-being of a society."

| WHAT CAN BE DONE ABOUT MISINFORMATION ABOUT CLIMATE CHANGE?
The broader research into methods to counteract online misinformation is underpinned by cognitive psychology theory describing how people react to misinformation and to attempts to correct it (see Lewandowsky et al. (2012) for a comprehensive discussion), and explores technological solutions incorporating this theory. Lewandowsky et al. (2017) coin the term "technocognition" to describe this interdisciplinary approach, while Fernandez and Alani (2018) talk of "socio-technological solutions" in their review of the current literature on solutions to misinformation. Fernandez and Alani (2018) highlight four strategies to stop the spread of misinformation online: inoculating people against misinformation; responding to misinformation with facts and correct information; early detection of malicious accounts; and the use of ranking and selection mechanisms. The last of these would be implemented by the online platforms commonly used to spread misinformation, such as Google and Facebook, to prevent or reduce the amount of content flagged as misinformation by their users.
A number of authors investigate the first of these strategies. With the contagion model being one of the key theoretical models of misinformation diffusion, inoculation theory is an intuitive solution: the aim is to intervene before misinformation is received, providing a form of "vaccine" against it. This could take the form of pre-emptively providing correct information, or "prebunking," or of explicitly warning people that they might be misinformed (Cook, Ecker, & Lewandowsky, 2015). Inoculation theory was first tested in the context of climate change in 2017 by van der Linden et al. (2017), who find that pre-emptively warning people about politically motivated attempts to spread misinformation helps to "promote and protect" public attitudes about the scientific consensus on climate change, while Cook et al. (2017) find that pre-emptive inoculation is an appropriate method for "neutralizing" the adverse effects of misinformation about climate change, and that the messaging should either explain the flaws in the argument of the misinformation or highlight the scientific consensus. One drawback of this approach is that it depends on knowing what kind of misinformation will be distributed and on being able to reach a particular audience, both of which are non-trivial practical issues.
With regard to the second strategy, three studies have looked at responding to misinformation after it has been received (Benegal & Scruggs, 2018; Kahan, 2017; Lawrence & Estow, 2017). In the first of these, a specific example of climate change misinformation, equating weather and climate, is used as a Facebook status update in a study of how people respond to misinformation about climate change on social media and to attempts to correct it (Lawrence & Estow, 2017). The participants held a range of political views and represented a mixed demographic in terms of gender, age, ethnicity, and education, but were predominantly American. The study finds that the most common response to the misinformation was to provide information either correcting it or agreeing with it, by sharing website links or quoting scientific findings or events that supported the respondent's views. In a second step, the participants were split into three groups and shown either a neutral, a corrective, or a collaborative response to the original update, and then asked whether they would agree with or "like" the response, and what, if anything, they would write in a reply. The group shown the collaborative response was significantly more likely to indicate agreement with, or "like", the response, regardless of political orientation, while among those who elected to write a reply to the corrective statement the comments were more argumentative, showing some evidence of the "backfire effect," also known as "backlash," whereby individuals receiving the correcting information come to believe in their original position even more strongly (Cook et al., 2015; Garrett & Weeks, 2013; Nyhan & Reifler, 2010). More recently, however, the occurrence of the backfire effect has been called into question.
Guess and Coppock (2018) found no evidence of backlash in three survey experiments where theoretically it might have been expected, while Wood and Porter (2019) ran five experiments with over 10,000 subjects, testing over 50 issues at potential risk of backfire, and found no evidence of the backfire effect. This suggests it is far less likely to occur than previously thought. A less strong, but still negative, finding relating to corrections of misinformation is the phenomenon psychologists call the "Continued Influence Effect," whereby subsequent retractions do not eliminate people's reliance on the original misinformation; it was originally observed in two experiments on the "Perseverance of Social Theories" (Anderson, Lepper, & Ross, 1980), and more recently highlighted by Cook et al. (2015). Thorson (2016) builds on this and investigates how the Continued Influence Effect can affect attitudes rather than just reasoning and inferences, finding that exposure to misinformation continues to shape attitudes after it has been corrected, even when the correction is immediate, and terming this phenomenon "Belief Echoes." The main conclusion of Lawrence and Estow's study is to recommend a collaborative approach to responding to misinformation.

Kahan (2017) investigates the science behind science communication and finds that individuals are more likely to resist the correction of misinformation when the misinformation resonates with their cultural identity and does not threaten it. He suggests that rather than simply setting out to correct misinformation, the science communication environment must also be mindful of these cultural identities. Benegal and Scruggs (2018) use a corrective approach in their study, but test the importance of the source of any corrections in terms of credibility.
They find that "corrections from Republicans speaking against their partisan interest are most likely to persuade respondents to acknowledge and agree with the scientific consensus on anthropogenic climate change" (Benegal & Scruggs, 2018). Harvey et al. (2018), meanwhile, place the onus on scientists to counter misinformation on blogs and other online sources, arguing that they have a "professional and moral obligation" to do so.
The third strategy is early detection of malicious accounts such as bots, spammers, and astroturfers (Box 1). Fernandez and Alani (2018) highlight a potential flaw from the counteraction perspective: it is unclear what can and should be done once malicious accounts are identified. In terms of what could be done, partnerships between social media platforms and academic researchers have been suggested as a way forward; what should be done remains up for debate.
With regard to the fourth strategy, the use of ranking, selection, and removal mechanisms by online platforms, one paper goes so far as to say that it is the "corporate duty of developers of browsers, social media, and search engines" to combat online misinformation (Safieddine, Masri, & Pourghomi, 2016). This is related to the ongoing debate, played out in academia, media, and even government, over whether platforms are publishers with associated responsibility for content (Gillespie, 2010; House of Lords, 2018; LSE, 2018; TheGuardian.com, 2018). Pennycook and Rand (2019) propose using crowdsourcing to judge the quality of news content and then incorporating these judgments into ranking algorithms; their results indicate this would be an effective intervention against misinformation. Vicario et al. (2016) comment that using algorithms to combat misinformation is a controversial approach, with concerns that the algorithms may not be accurate or effective, and that they may impede the free circulation of content, or, as Leetaru (2018), a Senior Fellow at the George Washington University Center for Cyber & Homeland Security, puts it in his opinion piece in Forbes magazine, risk "actively altering the algorithmic filtering that forms the lens through which they [users] see the world". Others point out that the effectiveness of such algorithms is hard to evaluate.
A further, nontechnical solution to online misinformation is to introduce regulation. A number of governments, both in Europe and further afield, have introduced regulatory responses to misinformation, as can be seen in the guide to anti-misinformation actions taken by governments around the world on the website of the Poynter Institute3 (Funke, 2019). This is despite a report put together by the European Commission suggesting that "government or EU regulation of disinformation can be a blunt and risky instrument" (European Commission, 2018).
Within all these proposed solutions, questions of freedom of speech and democratic rights are raised. One recent webinar by the American Library Association (ALA), titled "Fake News or Free Speech: Is There a Right to be Misinformed?", captured this sentiment and questioned whether suppressing fake news undermines a democratic way of life (ALA, 2019). In 2016 the Pew Research Center and Elon University's "Imagining the Internet Center" conducted a survey of technology experts, scholars, corporate practitioners, and government leaders about the potential impacts of online social interaction over the next decade. Over 1,500 responses were drawn together into four key themes. Two of the sub-themes were that technical solutions may lead to "partitioning, exclusion, and division," and that some solutions "could further change the nature of the internet because surveillance will rise; the state may regulate debate; and these changes will polarize people and limit access to information and free speech" (Pew Research Center, 2017). Mavrovic, writing in an op-ed on The Prindle Post4 about "The Dangers and Ethics of Social Media Censorship", raises the concern that once a social media company calls for a ban on certain types of content, it has crossed a threshold into not valuing free speech, which then makes it easier to stifle any perspectives it does not agree with. Mavrovic calls this an "ultimate dystopia of controlling speech: a controlling of thought and debate in a space created precisely for thought and debate," and an "intellectually lazy and dangerous response" (Mavrovic, 2018). Similarly, Henley writes in the UK newspaper the Guardian of "censorship concerns" raised by the global crackdown on fake news (Henley, 2018). In 2018, the European Commission put together an independent high-level group of experts, the "HLEG," to advise on policy initiatives to counter fake news and disinformation spread online. 
The HLEG produced its final report in 2018, listing some of the actions already being taken by platforms and other actors, and highlighting those it considered "bad". These included censorship and online surveillance, judged harmful because of the risks they pose to freedom of expression and other fundamental rights (European Commission, 2018). In a similar vein, Tompros, Crudo, Pfeiffer, and Boghossian (2017) consider whether criminalizing false speech on social networking sites would contravene the US constitutional right to free speech, in particular the First Amendment to the US Constitution, which states that "Congress shall make no law…abridging the freedom of speech" (Constitution Center, 2019).
Much of the literature looking more specifically at counteracting misinformation about climate change focuses on educational approaches. One approach, based on inoculation theory, is to educate people in critical thinking techniques so that they can identify misinformation. One method introduces a flowchart for evaluating the cogency of an argument by highlighting the types of fallacies of reasoning that may be expected (Cook et al., 2018); a second suggests teaching the "PARCS" technique, whereby the Purpose, Author, Relevance, Currency, and Sources of a piece of information are critically considered to assess its veracity (Zucker, 2019); and a third introduces a "fake news game" which prompts players to think proactively about how people might be misled on a given topic, improving their ability to recognize fake news in other contexts (Roozenbeek & van der Linden, 2019). A fourth method induces people to reflect on the accuracy of content (Epstein, Mosleh, Arechar, Pennycook, & Rand, 2019), while a fifth provides a set of guidelines, in the form of four questions, which help social media users evaluate the credibility of news online (Lutzke, Drummond, Slovic, & Arvai, 2019). Lutzke et al. (2019) propose that this approach be taken in conjunction with educating people about climate change, which leads to another approach in the literature: simply increasing the coverage of climate change in the general curriculum (Hess & Collins, 2018; Sullivan et al., 2014). McNeal et al. (2014) point out that it is important to ensure the educators themselves have an appropriate level of climate literacy to prevent misinformation spreading. Bedford (2010) proposes using agnotology, the direct study of misinformation, as a teaching tool to explore the science of global warming, and Bedford et al. (2014) present a number of case studies of this in action. 
Legates, Soon, and Briggs (2013) are highly critical of Bedford's proposal, calling it "profoundly misplaced and potentially dangerous"; however, in a response to this criticism, Bedford asserts that their critique was "based on a comprehensive misinterpretation of the original paper" and that his proposal is sound (Bedford & Cook, 2013). Legates, Soon, Briggs, and Monckton (2015) issued a rejoinder to Bedford and Cook, arguing that there was potential for misuse of agnotology where a "manufactured consensus" view existed. However, a number of independent assessments have found that the scientific community has reached a near-unanimous consensus on anthropogenic climate change (Anderegg et al., 2010; Cook et al., 2016; van der Linden et al., 2017), and so this argument is considered defunct.
A summary of the potential ways in which misinformation could be counteracted is shown in Figure 4, along with the criticisms and caveats of these solutions.

DISCUSSION, FUTURE DIRECTIONS, AND CONCLUSION
In this overview article, we have considered misinformation in the context of climate change; see Figure 5 for a summary of the main findings. We offer a definition of misinformation (misleading information that is spread, regardless of whether there is intent to deceive) and show how it fits with the concepts of information and disinformation. Based on this consideration of the current academic literature, it seems apparent that climate change misinformation is often strongly related to climate skepticism, denial, and contrarianism (though misinformation may also sometimes manifest as climate "alarmism"; see below). A network of actors is involved in the funding, creation, and dissemination of climate change misinformation, which is repeated and amplified by the media, politicians, and skeptical bloggers to reach the public. In the political misinformation literature, Ehrenberg (2012) postulates that "Though the strategic spread of misinformation is as old as elections themselves, the Internet Age has changed the game". It is widely recognized that social media has made the diffusion of misinformation both easier and faster (Karlova & Fisher, 2012; Vicario et al., 2016; Wu et al., 2016), with the structure of online social networks enabling misinformation to spread like a contagion. Malicious accounts on social media, such as bots, spammers, and astroturfers, are also thought to play a part in the spread of misinformation. Several characteristics of (online) social networks, such as homophily, echo chambers, and polarization, as well as human behaviors such as confirmation bias, belief systems, and social norms, are all factors in how misinformation is diffused through social media. We suggest that climate change debate on social media is an area highly susceptible to the diffusion of misinformation due to the strong homophily between polarized "activist" and "skeptic" groups, with evidence of echo chambers in social media debate about climate change. 
We further propose that social media users with particular ideologies, belief systems, and perceptions of social norms about climate change are susceptible to spreading, consuming, and accepting climate change misinformation.
Misinformation is an important factor in public discourse on climate change. Misinformation has likely confused the climate change discourse, increased existing political polarization, led to political inaction, and stalled support for, or led to rejection of, mitigation policies. While attempting to quantify and communicate scientific consensus will not in itself remove the barriers to the implementation of climate change mitigation policy, it could be argued that climate change misinformation is a core factor in the "politics of dissensus", which is "critical to understand and engage with if widespread support for policy measures is to be gained" (Pearce et al., 2017a). Finally, we discuss the potential ways to counteract misinformation found in the literature, including educational approaches, inoculation, technological solutions, corrective and collaborative approaches, and regulation. The criticisms and caveats of all these solutions are also put forward. With educational approaches, there is the risk of misuse of agnotology, and a need for an appropriate level of climate literacy among educators; for inoculation strategies, it is difficult to inoculate against every issue and to identify the target audience; for technological solutions, there are censorship concerns, the algorithms may not be accurate or effective, and it is unclear what can and should be done once malicious accounts are detected; corrective and collaborative approaches risk the backfire effect as well as the continued influence effect, with the caveat that the source of corrections is important for credibility; finally, regulatory approaches can be a blunt and risky instrument, with overtones of "Big Brother", potentially threatening the democratic right to free speech.
We found that much of the climate change misinformation research focuses on the network of actors who finance, produce, and repeat and amplify misinformation from a skeptic/denialist/contrarian viewpoint, or on ways to counteract this misinformation. We suggest that climate alarmism is also a potential area where climate misinformation may proliferate, though no evidence of this was found. Although there is very little academic literature on this topic, it is a subject of significant discussion on social media. In July 2018, there was a lengthy Twitter debate between a number of academics about lists of "climate misinformers," which only included people whose misinformation was perceived as opposing climate action, and not those who misinformed with the intention of promoting stronger action (Twitter.com, 2018). This sparked a blog post by one academic involved in the Twitter debate (Rice, 2018, July 2), specifically about the "misinformers list" on Skeptical Science (SkepticalScience.com, 2019), in which he acknowledges that not including climate alarmists on this list is problematic. His blog post attracted over 200 comments. The following week, a Guardian article also accepted the existence of "genuine climate alarmists" (Nuccitelli, 2018), though the article noted an imbalance in power between (broadly speaking) climate "contrarian" and "alarmist" misinformers. Climate scientist Richard Betts was quoted in the piece (via a tweet) calling out misinformers "on the 'climate action' side who actively deny science & espouse conspiracy ideation […] or who unjustifiably talk-up doomsday stories". And yet, in responding to this controversy, social scientist Brigitte Nerlich has written a blog post pointing out the rhetorical danger in labeling alarmism as misinformation (Nerlich, 2018). We suggest that further study should be undertaken in this controversial area.
We found very little research on the diffusion of misinformation via social media platforms. Farrell (2016a) proposes that future studies into climate change misinformation should integrate large-scale textual analysis with social networks. Similarly, Anderson (2017) acknowledges that social media provides "space for framing climate change sceptically and activation of those with a sceptical perspective of climate change" and recommends further examination of the relationship between social media use and climate change perceptions, and Veltri and Atanasova (2015) propose that climate change discourse on social media is a "priority research area". Lawrence and Estow (2017) give three reasons why social media is important in research into climate change misinformation: (a) social media sites are increasingly used to spread news and gain support for causes, (b) there is evidence that microblogs may have greater influence than longer news articles because they are especially memorable, and (c) misinformation is easy to spread on social media. We therefore end this article with a call for further research into climate change misinformation on social media.

2 The interface between (climate change/online 'or' on social media) research was not reviewed as misinformation is the focus of interest.

3 The Poynter Institute declares itself a global leader in journalism and says that it "champions freedom of expression, civil dialogue and compelling journalism that helps citizens participate in healthy democracies", and "prepares journalists worldwide to hold powerful people accountable and promote honest information in the marketplace of ideas" (Poynter, 2019).

4 A digital publication examining the key ethical issues behind current events and culture, produced by The Janet Prindle Institute for Ethics at DePauw University (The Prindle Post, 2019).