Increasing Trust in Climate Vulnerability Projections

Projecting future trends is emerging as a key focus of community-based climate vulnerability assessments. In these mostly qualitative studies, understanding of current vulnerability processes is used as a basis for identifying who and what are vulnerable to future changes, where, and why, and for characterizing the key drivers of vulnerability and how they might change. Few, if any, of these studies engage with approaches for validating findings, reflecting the difficulties of validating mostly qualitative projections of highly uncertain futures, the absence of directly measurable vulnerability outcomes, and a lack of data on vulnerability drivers. Given the challenges of projecting future trends, this absence undermines trust in such work and limits opportunities to learn. This paper illustrates, with examples, how validation can be incorporated into the study design of community-based climate vulnerability assessments through: (a) examination of retrospective projections to assess projection skill, (b) evaluation of projections made for future time periods which have since passed for consistency with what actually happened, (c) comparison of projections with empirical research that attempts to understand and constrain the effects of climate change using observed effects of weather variation, and (d) incorporation of demonstrated "best practices" into projection development, such as acknowledgment of areas of uncertainty, integration of diverse viewpoints, and utilization of multiple sources of information.


Introduction
The ability to anticipate, identify, and characterize potential future trends and their drivers is central to climate change research (Birkmann et al., 2021; Rising et al., 2022). There is a growing body of research projecting climate change vulnerability (Jurgilevich et al., 2017). This work seeks to identify the nature and degree to which climate change will affect the occurrence and magnitude of biophysical events or trends that may cause harm (exposure/hazard), understand the factors that put people and places at risk of such harm (sensitivity), understand the ability to respond to climate change to moderate potential damages and take advantage of new opportunities (adaptive capacity), and examine how these might change over time. In doing so, vulnerability research seeks to generate essential understanding of the multifaceted and differentiated impacts that climate change may have, for whom, where, and why (Birkmann et al., 2021; Ford & Smit, 2004; Jurgilevich, 2021; Naylor et al., 2020; Windfeld et al., 2019). In this way, vulnerability research has many parallels with studies using other conceptual framings, such as resilience, risk, and One Health, which seek to identify and examine future trajectories of change.
Diverse approaches have been used to project climate change vulnerability, while the concept of vulnerability itself is widely debated and has evolved over time (Birkmann et al., 2021; Estoque et al., 2023; Garschagen et al., 2021; Jurgilevich, 2021; O'Brien et al., 2007). On the one hand, within the biophysical tradition, vulnerability is viewed as an outcome or residual of climate-environment interactions after potential human adaptations and/or changing behaviors have been accounted for (Garschagen et al., 2021; Naylor et al., 2020; Räsänen et al., 2016). This approach is strongly rooted in the climate/impacts modeling community, seeks to quantify vulnerability at different levels of warming and how it may change over time (including creating indices of vulnerability), and often focuses on regional scales (Ford et al., 2018; Nightingale, 2016; Preston et al., 2011; Smit & Wandel, 2006). Within the context of the IPCC's revised framing of vulnerability as a component of risk, this is analogous to the concept of exposure (Estoque et al., 2023). While focused largely on projecting biophysical changes, to varying degrees this work also integrates how socio-economic factors may offset or magnify projected impacts through potential adaptation and behavioral changes (Garschagen et al., 2021; Lanlan et al., 2023; Preston et al., 2011).
Alternatively, in the human security tradition (Nightingale, 2016; O'Brien et al., 2007), vulnerability is viewed as socially constructed, resulting from multiple interacting social, cultural, economic, and political stressors which determine how climate impacts are perceived, experienced, and responded to. Here, vulnerability is conceptualized not as an outcome or endpoint of an assessment but as an ongoing dynamic process (Ayanlade et al., 2023; Garschagen et al., 2021; Ribot, 2013, 2014), with such work typically qualitative in nature and focused on developing in-depth understanding of how vulnerability is produced in specific places, primarily with respect to sensitivity and adaptive capacity. The starting point for assessing vulnerability in the human security tradition is individual, household, and community dynamics and how they have evolved over time, which determine how and where climate change is relevant (Ford et al., 2019; Kuhlicke, 2023; Thomas et al., 2019). For this reason, this work is often referred to as "community-based," where "community" means some definable aggregation of households, interconnected in some way and with a limited spatial extent (Smit & Wandel, 2006), and is analogous to the concept of "place." The human security tradition is consistent with what the IPCC refers to as "vulnerability" in its revised framing of risk, composed of sensitivity and capacity to cope and adapt (i.e., adaptive capacity), and captures how vulnerability is defined in this paper: the propensity or predisposition to be adversely affected (IPCC, 2014).
The development of vulnerability projections is more common in the biophysical tradition (Garschagen et al., 2021), but community-based assessments are increasingly using understanding of current vulnerability processes to identify and characterize potential future vulnerability (Birkmann et al., 2021; Flynn et al., 2018; Jurgilevich, 2021; Puntub et al., 2023; Terry et al., 2024). Whatever the approach used, however, projecting future trends is challenging. There are many examples of failed projections, where the projections did not match observed outcomes, in diverse areas including business, energy, technology, health, and military planning, resulting in huge costs and missed opportunities (Mellers et al., 2017; Taleb, 2018; Tetlock, 2017; Tetlock & Gardner, 2015), with data limitations around future social drivers of vulnerability identified as a major challenge in climate change vulnerability work (Garschagen et al., 2021; Puntub et al., 2023; Roulston et al., 2022). In this context, to what extent can we trust research projecting future vulnerability trends in terms of who and what are vulnerable, where, why, and over what timescales, and what factors might drive future trends? This paper engages with this question, focusing specifically on community-based vulnerability assessments that examine vulnerability trends and drivers in specific locations and which focus predominantly on sensitivity and adaptive capacity. In doing so, the paper examines how issues of trust, legitimacy, and validation have been approached in other areas of research, identifying lessons and showcasing examples for more qualitatively focused work. Such reflection is particularly timely as the vulnerability field strives to develop a stronger and more robust futures focus.
In this paper, trust is defined as "a …. thing in which confidence is placed" (Oxford English Dictionary) and is particularly important for vulnerability research, which seeks to influence decision choices at a variety of scales (Fage-Butler et al., 2022; Ford et al., 2018). Whether research is trustworthy has various (overlapping) dimensions depending on the context, aims, and objectives of a particular study. For example, scientific dimensions of trust capture the validity of the research findings and thus deal with ontological and epistemological considerations around research design, data collection, analysis, interpretation, and scientific rigor (Mastrandrea et al., 2010; Oreskes, 2004; Wilholt, 2013) (see Section 3). Public or community dimensions of trust concern how people perceive, engage with, and place confidence in scientific research, thus determining the likelihood of findings being acted upon. Here, trust is built through social processes through which researchers engage with the public and communities, value and integrate diverse ways of knowing, communicate scientific findings, and engage key stakeholders and community representatives, among other factors (Gauchat, 2023; Latulippe & Klenk, 2020; Lucas et al., 2015; Mach et al., 2020; Tranter et al., 2023). The focus of this paper is primarily on the scientific dimensions of how projections are created in community-based vulnerability research.

Seeing the Future
Studies that seek to make statements about what might happen in the future can be categorized as forecasts, predictions, or projections. While there is inconsistency in how these terms are used, a forecast can be understood as a calculation or estimation of future events over a specified (usually short) period of time based on analysis of past and present data; a prediction is a similar but more general term that seeks to provide an explanation of the possibility of an event in the future; and a projection concerns the potential future evolution of a quantity or set of quantities, integrating assumptions concerning potential future trends, and, while typically associated with quantitative estimates, can also be applied qualitatively (IPCC, 2018; Risbey et al., 2021; Tetlock & Gardner, 2015). Climate change research deals with all three, often using the terms interchangeably.
In a climate change context, vulnerability studies with a futures dimension are concerned with making projections (as opposed to forecasts or predictions): describing potential future climate vulnerabilities and the pathways leading to them, identifying who and what are vulnerable, where, and why, and characterizing the key drivers of exposure and adaptive capacity and how they might change (Garschagen et al., 2021; Jurgilevich, 2021; Puntub et al., 2023). In the context of community-based research, such projections are largely qualitative and narrative in nature, may extrapolate from current vulnerability or explicitly focus on potential future trends over specific timeframes, and draw upon diverse methods including interviews, surveys, life histories, focus groups, participatory videography, participant observation, scenario planning, expert judgment, and foresight (Ford, Keskitalo, et al., 2010; Jurgilevich, 2021; Magnan et al., 2022; Singh et al., 2019). It is noteworthy that there is some overlap between what this paper refers to as "vulnerability projections" and what some term "vulnerability scenarios." Scenarios, however, are typically used for heuristically exploring the implications of different decision choices, plotting how to achieve a desired state in light of multiple future stresses, and/or supporting decision making under uncertainty (Birkmann et al., 2020; Flynn et al., 2018).

Trust in Projections
Efforts to underpin trust in climate vulnerability projections have mostly been undertaken in studies in the biophysical tradition, and focus on model validation in the context of developing vulnerability indices and maps that combine both biophysical and socio-economic determinants of vulnerability (Birkmann et al., 2022; de Sherbinin et al., 2019). Validation concerns the establishment of legitimacy in findings, where legitimacy refers to adherence to established norms, principles, and standards that are accepted by the research community, such as methodological rigor, peer review, ethical conduct, reproducibility, and identification of positionality, and which vary by discipline and field (Babuska & Oden, 2004; Ford et al., 2013; Menon & Stegenga, 2023; Oreskes & Shrader-Frechette, 1994). As such, validation differs from verification, which refers to the assertion or establishment of truth and which is only logically possible in a closed system (Oreskes & Shrader-Frechette, 1994).
Different approaches to validation have been used. Internal validation approaches used in the development of vulnerability indices have focused on the inner coherence of the models used, employed sensitivity analyses to examine how different model constructs and/or socio-economic indicators affect results, examined the reliability of the indicators used, and conducted uncertainty tests (de Sherbinin et al., 2019; Tellman et al., 2020; Tetlock, 2017). Here, trust is inferred from coherence and process rooted in logic (Tetlock, 2017). External validation concerns the relationship between calculated vulnerability and actual outcomes associated with particular events (e.g., historic disaster losses from extreme events, mortality from climate-related disasters) (Birkmann et al., 2022; Brooks et al., 2005; de Sherbinin et al., 2019; Tellman et al., 2020; Tetlock, 2017). Trust here stems from how accurate the projections are, where accuracy refers to the general validity of the projections in that they do not systematically under- or overstate vulnerability compared to observations (differing from the related term precision, which captures estimates that have small standard errors).
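The distinction between accuracy and precision can be made concrete with a short numerical sketch. This is purely illustrative: the vulnerability index values below are invented, and the functions are generic error statistics rather than any method used in the studies cited.

```python
from math import sqrt
from statistics import mean, stdev

def bias(projected, observed):
    """Mean signed error: near zero means no systematic over- or understatement (accuracy)."""
    return mean(p - o for p, o in zip(projected, observed))

def standard_error(projected, observed):
    """Spread of the errors: small values indicate precise estimates."""
    errors = [p - o for p, o in zip(projected, observed)]
    return stdev(errors) / sqrt(len(errors))

# Hypothetical vulnerability index values (0-100) for five communities.
observed = [40, 55, 60, 35, 70]
accurate_imprecise = [30, 65, 50, 45, 72]  # errors scatter around zero: accurate but imprecise
precise_biased = [50, 64, 71, 44, 81]      # errors cluster near +10: precise but biased high

print(bias(accurate_imprecise, observed))  # close to zero: no systematic lean
print(bias(precise_biased, observed))      # about ten index points high, on average
```

On these invented numbers, the first set of projections has negligible bias but a large error spread, while the second has a small spread but a systematic positive bias; accuracy and precision can thus fail independently.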
Validation is not a term that is widely used in qualitative community-based vulnerability research, although comparable approaches have been used to establish legitimacy in findings. These include reviewing results with study participants and those with location-specific knowledge to ensure projections are coherent and plausible; ensuring the use of best-practice approaches for data collection and analysis; and examining the type, amount, quality, and consistency of evidence across multiple studies (e.g., in IPCC confidence assessments). Triangulation and multiple evidence-based techniques are also used to establish legitimacy in qualitative research, whereby multiple methods are used to bring diverse perspectives to the same research question (complementarity triangulation), and the findings obtained from different methods are compared to reveal similarities and divergence in results (convergence triangulation) (McDowell et al., 2023; Nightingale, 2016), although these have not been widely used in the context of developing community-based vulnerability projections.

The importance of validating vulnerability projections is increasingly recognized in the literature although underutilized in practice (de Sherbinin et al., 2019), particularly in community-based research. This partly reflects the difficulties of validating projections made for highly uncertain futures; the nature of vulnerability, which is a proxy for complex socio-ecological processes that are challenging to measure and for which directly measurable outcomes are absent; and a lack of data on vulnerability drivers (de Sherbinin et al., 2019; Patt et al., 2005; Preston et al., 2011; Tellman et al., 2020). These challenges are compounded in community-based research by the qualitative nature of projections, a focus on non-material values, and the often general characterization of identified trends, drivers, and the timescales over which they will unfold, which makes the use of traditional validation approaches inappropriate. For these reasons, alternatives to using projections for understanding and planning for the future have been proposed, including decision-making under uncertainty and anticipatory governance (Guston, 2014; Özden-Schilling, 2023; Quay, 2010; Vervoort & Gupta, 2018). Yet projecting the future remains important, and there are other options for validation that have not been considered in the literature. The next section interrogates a body of literature called the "science of forecasting" to see if there are lessons for developing vulnerability projections.
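Convergence triangulation, comparing the findings obtained from different methods to reveal similarities and divergence, can be sketched as a simple set comparison. This is a minimal, hypothetical illustration: the methods and drivers named below are invented, not taken from any cited study.

```python
# Vulnerability drivers identified by two hypothetical methods in the same community.
interview_drivers = {"changing ice conditions", "fuel costs", "knowledge transmission"}
survey_drivers = {"changing ice conditions", "fuel costs", "equipment access"}

convergent = interview_drivers & survey_drivers  # identified by both methods
divergent = interview_drivers ^ survey_drivers   # identified by only one method

print(sorted(convergent))  # drivers on which the methods agree
print(sorted(divergent))   # drivers warranting further investigation
```

Convergent findings strengthen confidence in a projected driver, while divergent findings flag where one method may be missing something or where further data collection is needed.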

Learning From the "Science of Forecasting" Literature
The "science of forecasting" refers to a body of research that seeks to understand the processes through which accurate (and inaccurate) forecasts are created. With its roots in psychology and political science, such work pays attention to how forecasts are generated, alongside what is known about the problem itself, and has helped develop and improve forecasting success in diverse areas (Armstrong et al., 2015; Goldsmith & Butcher, 2018). Popularized through Phil Tetlock's work on "good judgment" (Tetlock, 2017; Tetlock & Gardner, 2015), and that of others (Armstrong et al., 2015; Mellers et al., 2015, 2017; Miller, 2013; Silver, 2012; Woolley et al., 2010), this work identifies key characteristics of good (and bad) forecasts. Vulnerability research has paid scant attention to this work, partly reflecting the goals of projections, which focus more on general trends over long time periods than on forecasting specific events or outcomes over the short timescales from which the science of forecasting literature emerged. Yet forecasts, predictions, and projections all deal with comparable processes of identifying and characterizing highly uncertain futures and the pathways leading to them, seeking to inform decision makers on planning for the future. There is much that the vulnerability research community in general, and the community-based research community in particular, can learn from this literature for building trust in projections, as examined in this section.

Central to this literature is the regular testing of forecasts against observations (Mellers et al., 2017; Tetlock & Gardner, 2015), that is, do the forecasts come true once the future has arrived? This is essential for building trust and learning, underpinned by frequent and repeated testing of easily assessed, unambiguous statements from which future forecasts can be improved and further tested. Examples where such processes have had notable success include the forecasting of weather and geopolitical events (Mellers et al., 2017; Silver, 2012; Taleb, 2018), and climate model development (Risbey et al., 2021). External validation has almost exclusively been applied to studies making quantitative forecasts and projections, but the general principles underpinning such approaches are applicable to qualitative projections. Thus, the ability of community-based vulnerability projections to capture who and what are at risk, where, and why can be systematically compared against observations. This could be done in a number of ways:
• Examination of retrospective projections can be used to assess projection skill. Here, "testing" could involve reviewing projections with people who have detailed knowledge of the region or sector of focus (e.g., decision makers, community members, practitioners, scientists) to examine whether projected drivers and trends make sense in light of local contexts, drawing upon participatory methods (e.g., focus groups, interviews, scenario planning) (Flynn et al., 2018; Holford et al., 2023). This can then inform the methodologies used to develop projections and identify areas where they can be improved.
• Projections made for future time periods which have since passed can be evaluated for consistency with what actually happened. While contemporary projection methodologies have advanced considerably, evaluating projections created with the state-of-the-art approaches of their time can help detect over- or under-confidence in results, examine differences according to methodology, and identify whether projections consistently underperform in certain areas. Here, testing methodologies could include: (a) the use of scorecards in which projection findings are decomposed and compared to observations using yes/no scoring, (b) the development of narratives capturing how vulnerability has evolved, against which previous work can be compared to identify similarities and differences with observed trends, or (c) more complex study designs based on cohort and trend analysis, where previous work identifying vulnerable groups and vulnerability drivers is redone in the present day and key findings are compared (e.g., re-studies (Archer et al., 2017; Fawcett et al., 2017)). Tables 1-3 illustrate these approaches.
• Projections can be compared to empirical research that attempts to understand and constrain the effects of climate change using observed effects of weather variation (e.g., Carleton & Hsiang, 2016; Dell et al., 2014). While this work focuses on impacts and is typically conducted at a broader scale than community-based research (e.g., national, regional, sectoral), comparison with such work can nevertheless provide an opportunity for cross-validation, at a general level, against observed historical relationships. This literature is advancing rapidly, underpinned by methodological and computing advances (Carleton & Hsiang, 2016; Hsiang, 2016), although it is constrained to regions where data are available.
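The scorecard idea in point (a) can be sketched computationally. The statements and yes/no scores below are invented placeholders for illustration and do not reproduce the assessments in Tables 1-3.

```python
# Minimal scorecard sketch: a projection is decomposed into unambiguous
# statements, each scored yes/no against what was later observed.
scorecard = [
    # (projected statement, borne out by observation?)
    ("Driver A remains a key source of sensitivity", True),
    ("Driver B intensifies and increases vulnerability", False),
    ("Group X is disproportionately affected", True),
    ("Coping mechanism Y collapses entirely", False),
]

hits = sum(1 for _, confirmed in scorecard if confirmed)
skill = hits / len(scorecard)  # fraction of projected statements confirmed

print(f"{hits}/{len(scorecard)} statements confirmed (skill = {skill:.2f})")
```

Repeated over successive assessments, such scores make it possible to track whether projection skill improves over time and whether misses cluster in particular kinds of statements (e.g., systematic overestimation of negative trends).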
These examples present quite different approaches to "testing" than those used for vulnerability indicators or in the "science of forecasting" literature: they are more concerned with the general validity of projections, in that they do not systematically under- or overstate key trends and drivers (i.e., accuracy), than with the precision of the projections made. It is these high-level findings emerging from community-based research that are central to informing decision making (Conway et al., 2019; Ford, Keskitalo, et al., 2010; Maru et al., 2014), and thus establishing their validity is essential for creating trust. The pilot applications in Tables 1 and 2 show that projections made in the author's own work generally capture key drivers of vulnerability when compared with observations over the last decade. However, they tend to overestimate the extent to which negative trends will increase vulnerability. While acknowledging the challenge of anticipating such developments ex ante, these insights suggest the need for a greater diversity of viewpoints in my future work (e.g., greater youth engagement alongside elders), along with greater openness to the potential for adaptive learning.

Best Practices for Making Projections
Testing projections is important for building trust, but caution is also needed. Dynamic systems are complex, and difficulties facing testing include the problem of false attribution and the possibility that getting things right is due to coincidence (Carleton & Hsiang, 2016; Oreskes et al., 1994; Özden-Schilling, 2023; Rising et al., 2022). Moreover, past projection "success" needs careful interpretation, as future vulnerabilities will be determined by climatic, political, social, economic, and demographic conditions which are not stationary and interact in complex and not fully understood ways (Garschagen et al., 2021; Naylor et al., 2020; Rising et al., 2022; Simpson et al., 2021). The future may thus be "out-of-sample," particularly for projections made for decades ahead.
In the "science of forecasting" literature these challenges are addressed through a focus on the incorporation of demonstrated "best practices" into projection development (Table 4).
Some of these best practices are already utilized in community-based vulnerability assessments, albeit to varying degrees (Flynn et al., 2018; Ford et al., 2018; Jurgilevich et al., 2017; Menk et al., 2022; Singh et al., 2019). Others have received limited attention. In particular, aside from a small and nascent body of literature conducting longitudinal studies (e.g., Archer et al., 2017; Duvat et al., 2017; Fawcett et al., 2017), vulnerability assessments are rarely revisited or updated (Kuhlicke, 2023). Projection development needs to be viewed as a long-term iterative process, where short- and intermediate-term projections are integrated into the creation of multidecadal projections to allow for regular testing, learning, and calibration as the future progresses.

Conclusion
Climate change is one of the grand challenges facing humanity this century. Projecting how these changes will affect the propensity or predisposition to be adversely affected is central to understanding who and what are vulnerable, to what stresses, and why, helping inform actions to reduce that vulnerability and build resilience. The desire to know what might happen next is deep rooted, but the future as we perceive it today is often at odds with how it was perceived in the past. Projections thus always need to be treated with caution: useful for identifying key issues and catalyzing debate, but potentially dangerous if assumed to represent accurate depictions of what will happen, to whom, when, and why. Such caution is especially pertinent when using the results of community-based vulnerability assessments, where projections are rarely assessed for how well they capture current or historic trends.

Increasing trust in projections requires vulnerability researchers to focus more on the processes through which projections are generated, drawing upon external and internal validation approaches used in other fields, modified for use in qualitative community-based work. This paper showcases how this can be done, using the author's own work to provide illustrative examples. The community-based vulnerability field is in many ways well placed to act on these suggestions, with its long history of embracing and working with diverse knowledge systems, recognition of positionality and reflexivity in research, and methodological diversity (Ford et al., 2018). Yet the kind of support necessary for this work, involving longitudinal assessment, monitoring of human-environment interactions over time, conducting re-studies, and evaluating projections as the future unfolds, is not currently prioritized by research funders, in contrast to the natural sciences, where projection development processes have been well supported (Miner et al., 2023; Overland & Sovacool, 2020). This must change if we are to generate robust and trustworthy vulnerability projections for informing decision choices in a rapidly changing climate.

Table 1
Scorecard Style Approach Used to Assess Projections Made Over a Decade Ago for Indigenous Communities in the Peruvian Amazon

Table 2
Comparing Two Dominant Narratives on Vulnerability Drivers Made Over a Decade Apart in Work by the Author With Inuit Communities in Northern Canada Focusing on Subsistence-Based Harvesting Activities (Quotes Are Taken From Inuit as Documented in Articles and Are Representative of Key Vulnerability Drivers)

Table 3
Opportunities for More Detailed Testing of Vulnerability Projections Made Over a Decade Ago for Inuit Communities in Northern Canada Focusing on Subsistence-Based Harvesting Activities

Table 4
Best Practices Identified in the "Science of Forecasting" Literature and Guidance for Incorporating Them Into Community-Based Vulnerability Projections (Building on Armstrong et al., 2015; Mellers et al., 2015, 2017; Miller, 2013; Saltelli et al., 2020; Woolley et al., 2010)

• Regularly testing projections against observations: identify the time period for which projections have been developed, including specification of expected trends over short segments of time; these segments can be used as regular "stepping stones" at which projected trends can be evaluated from diverse perspectives for how well they capture what is actually happening (e.g., using approaches from Tables 1-3).
• Incorporating systematic feedback whereby projections are consistently evaluated and updated (i.e., learning): through regular evaluation at the "stepping stones," projections can be revisited and altered if they are not capturing the main vulnerability trends or are over/underestimating developments.
• Acknowledgment of areas of uncertainty and alternative potential outcomes: consider how other ontological and epistemological framings may create different projections; document and acknowledge areas where there is limited understanding.
• Integration of diverse viewpoints and expertise within teams making projections: work closely with holders of local knowledge, Indigenous knowledge, and practitioner knowledge in projection creation and evaluation; integrate expertise from across scientific disciplines and the humanities.
• Integration of multiple sources of information (quantitative, qualitative): use diverse methods in projection creation and evaluation (e.g., multiple evidence-based methodologies (McDowell et al., 2023), collective intelligence (Holford et al., 2023)).
• Acknowledgment of potential bias: clearly state and reflect on positionality and how it might affect projection development (see Caggiano & Weber, 2023); identify gaps in understanding and data limitations which may create uncertainty.
• Avoiding projections characterized by overconfidence, intolerance of alternative views, sweeping generalizations, and strong ideological positions: see the approaches above for acknowledging bias and integrating multiple sources of information.