Alerting the Globe of Consequential Earthquakes

The primary ingredients on the hazard side of the equation include the rapid characterization of the earthquake source and the quantification of the spatial distribution of shaking, plus any secondary hazards an earthquake may have triggered. On the earthquake impact side, loss calculations require the aforementioned hazard assessments (and their uncertainties) as input, plus the quantification of the exposure and vulnerability of structures, infrastructure, and the affected inhabitants. Lastly, effectively communicating uncertain estimates of the resulting impacts on society requires careful consideration of their function and form. All these aspects of rapid earthquake information delivery entailed wide-ranging collaborative research and development among seismologists, earthquake engineers, geographers, social scientists, information technology professionals, and communication experts, leveraging diverse components and ingredients not achievable without extensive collaboration. I was very fortunate to work on interesting and useful projects with the many colleagues who became involved in them. Advances in content, its rapid delivery, and our ability to better communicate uncertain loss estimates greatly expanded the range of users and critical decision-makers who could directly benefit from rapid post-earthquake information. Moreover, in the critical user-developer feedback loop, we intently followed requests from users to develop new ways of delivering the most-requested post-earthquake information within the limitations of the science and technology. Such new avenues and tools then motivated and prioritized additional research directions and developments.

The improvements in both the rapidity and accuracy of near-real-time magnitude and location estimates added options for notifying users of these summary earthquake parameters. CUBE introduced automatic messaging (at first only machine-to-machine, or to personal text pagers), and Caltech provided CISN Display (Goltz & Eisner, 2003), which popped up on the user's computer display with the location and magnitude, to supporting members. CUBE was followed first by email listservs and later by open-access, customized alerts through the USGS Earthquake Notification System (Wald et al., 2008). [Wald, in this case, refers to my wife Lisa Wald. After we met in graduate school at the University of Arizona, Lisa was hired at the USGS in Pasadena, California, in 1988, five years ahead of me. Her role was research, followed by education and outreach, then web development]. Still, most lay recipients, and even many savvy ones, have difficulty understanding magnitude scales (e.g., Celsi et al., 2005), and they do not have the intuition to relate magnitude, along with an earthquake's epicenter and depth, to the shaking intensity, let alone its potential consequences.

ShakeMap
In the late 1990s, Caltech had (and it still does today) a Summer Undergraduate Research Fellowship (SURF) program, which subsidized students to spend their summers working with faculty mentors; I was a USGS scientist and Adjunct Faculty member at the Caltech Seismological Laboratory. In his sophomore year, Vince Quitoriano spent the summer of 1996 working with me to make a prototype earthquake peak-ground-acceleration map with the parametric data generated for real-time magnitude and location determinations (Wald et al., 1997). By the summer of 1998, we had begun to refer to our new product as "ShakeMap." The goal of ShakeMap was to "go beyond magnitude and epicenter" to depict the variations in the distribution of shaking intensity (Wald, Quitoriano, Heaton, Kanamori, et al., 1999).

ShakeCast
Following the initial development of ShakeCast in Pasadena, the USGS hired Kuo-wan Lin, a geophysicist and professional programmer, to bring ShakeCast into the modern computing age: modularizing it, adding full features, and improving the user experience and interface. ShakeCast is now a fully automated software system for delivering specific ShakeMap products to critical users and triggering established post-earthquake response protocols. ShakeCast generates potential damage assessments, inspection-priority notifications, maps, and web-based products for critical users, emergency managers, and anyone specified on a need-to-know basis (Lin et al., 2020; Lin & Wald, 2008).

Prompt Assessment of Global Earthquakes for Response (PAGER)
The inklings of the PAGER system began prior to the devastating 26 December 2004 Sumatra M9.0 earthquake and tsunami. Paul Earle, Lynda Lastowka, and I developed the basic strategy of using predictive ShakeMaps globally, intersecting computed shaking intensities with population data, and using the population exposure per intensity level from past events in each country to calibrate models and estimate losses for current events. To expand our toolkit, we hosted a workshop at the National Earthquake Information Center in Golden, Colorado (with funding from a USGS venture capital grant) on our evolving impact assessment system. We included experts from around the United States and the world in formulating the related data and the hazard and loss model ingredients needed for comprehensive rapid loss estimates. Our nascent prototype PAGER system was operational during the Sumatra earthquake, providing estimates of shaking and of the population exposed to each intensity level. As a result, we expanded our efforts toward the more difficult challenge of directly estimating losses. In late 2006, we brought on postdocs Trevor Allen and Kishor Jaiswal to begin working rigorously on PAGER hazard and loss ingredients, respectively. In 2007, we added Mike Hearne and Kristin Marano to the PAGER team for programming and data analysis, respectively.
PAGER is now an automated USGS system that generates information concerning the impact of significant earthquakes worldwide within approximately 20 min of any earthquake greater than magnitude 4.0. PAGER rapidly assesses earthquake impacts by combining data about populations exposed to estimated levels of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. The primary purpose of the PAGER system is to inform emergency responders, government and aid agencies, and the media regarding the potential scope of the disaster. Earthquake alerts, formerly sent based on event magnitude and location or on population exposure to shaking, are now generated based on the estimated ranges of fatalities and economic losses.

Landsliding and Liquefaction
A gap in our ability to forecast earthquake losses was our inability to account for the impact of various types of ground failure, particularly landsliding, liquefaction, and lateral spreading. The PAGER summary report does provide country-specific qualitative statements (Wald et al., 2010) about whether previous events in the region caused additional losses due to ground failure. However, such secondary losses are substantial in only some events (e.g., Marano et al., 2010). It is vital to distinguish which earthquakes will have additional losses due to ground failure and, importantly, to what degree landsliding and liquefaction will affect transportation, and thus relief and recovery efforts.
The USGS supports external researchers to help understand the underlying science and develop effective tools to mitigate earthquake effects. While I was coordinating the USGS Earthquake Hazards Program, empirical modeling strategies were developed for earthquake-induced landslides by Nowicki Jessee et al. (2018) and for liquefaction by Zhu et al. (2017), as summarized by Allstadt et al. (2022). These initial models formed the basis of the most recent addition to our rapid earthquake hazard and impact assessments, the Ground Failure (GF) product. Since late 2018, GF has provided publicly available spatial estimates of earthquake-triggered landslide and liquefaction hazards, along with qualitative hazard- and population exposure-based alerts. We do so for M > 6.0 earthquakes worldwide in near-real-time, usually within 30 min (Allstadt et al., 2022). Although the GF product currently (2023) provides critical situational awareness about the potential extent and severity of ground failure, it does not yet allow for estimates of building damage or human casualties due to these hazards. [As a side note, other faulting- and shaking-related secondary hazards can also be consequential. However, the primary USGS role in the National Earthquake Hazards Reduction Program (NEHRP) does not mandate tsunami response, which is the National Oceanic and Atmospheric Administration's role. Fire following earthquakes is too difficult to reliably anticipate with any current modeling strategies].
Although each of the above systems eventually became standard operating procedure for earthquake response and mitigation, each entailed several challenging yet tractable problems and some as-yet intractable challenges. I describe the solutions we considered and the challenges we still face in subsequent sections. Along the way, I note how many of our solutions developed for specific goals were widely used for other purposes, which is one of the exciting, unexpected side benefits of doing exploratory research and development in science.

Key Strategies for Constraining and Depicting Shaking
ShakeMap, the foundational system that provides shaking input to probability models of ground failure and to assessments of societal and critical-facility impacts, required some initial considerations that broke new ground. In those considerations, we consistently, if intuitively, implemented what Amazon refers to as the "working backward" approach. At Amazon, this means working backward from the customer rather than starting with an idea for a new product. Start with the users' needs, determine the goal, product, or deliverable, and then work backward to figure out what tools and products (and, for us, what scientific components) are missing. First, assemble what is available, tweaking as necessary. In this sense, the missing pieces drive the research and development needed to deliver results. We assembled many others' published results and algorithms (accepted ground-motion models, finite-fault models, and fragility functions) and filled critical gaps as necessary. Some of these research efforts were targeted in-house or by leveraging the USGS Earthquake Hazards Program's External Grant community.
Rather than being driven by purely scientific considerations or existing methodologies, pragmatism often requires such working-backward strategies. I expand with examples below, including approximate but practical solutions to problems, because all models are wrong anyway. [A frequent refrain from a dear colleague, Ned Field, USGS, quoting statistician George Box's "all models are wrong, but some are useful"]. Fundamentally, pragmatism dictates that we use models that are primarily empirical (or heuristic) in nature. The wide use of the term "physics-based" (e.g., Maechling et al., 2007) often gives empiricism a bad rap. Nevertheless, we still rely on empirical models to meet many of today's fundamental seismological and engineering needs. These include the widespread use of ground-motion models (GMMs) rather than numerical simulations (empirical models thus underpin the National Seismic Hazard Map, Petersen et al., 2020, and most earthquake engineering designs); the use of Vs30 for site amplification rather than computed amplification from measured velocity profiles; and the widespread use of macroseismic intensity, rather than instrumental measures (peak ground acceleration, for example), for loss models in the loss- and risk-modeling industry. [Vs30, the time-averaged shear-wave velocity over the top 30 m of the ground, is the dominant means of characterizing site amplification]. Below, we provide several other examples of empirical models as calibrated, practical solutions to current shaking and loss estimation problems.

ShakeMap Intensity Measures
It was a bit of a quandary when we first approached the challenge of which intensity measures to map with ShakeMap. In 1999, we had bandwidth limits when delivering electronic content. Further, I was concerned that peak ground acceleration (PGA) maps, let alone spectral acceleration maps, would serve too narrow an audience, namely only earthquake engineers. Engineers think "in PGA" because PGA relates well to the forces imparted to a structure (given knowledge of its mass). However, most of the rest of us require more intuitive metrics.
I had the good fortune of traveling to Japan for collaborations several times as part of my Caltech graduate work. I thus knew that not only have the Japanese long used seismic (JMA) intensity as the means of publicly communicating shaking (and earthquakes more generally), but also that the population there had become quite familiar with the concept of seismic intensity. Intensity is more descriptive and easier to understand than earthquake magnitude. I note that the JMA intensity scale is based not on macroseismic effects but on their seismic sensor recordings, an instrumental intensity, if you will (Midorikawa, 1999). That meant that in Japan, a recording at a site defined the intensity, rather than the experience or damage there, as other macroseismic scales do (e.g., Musson et al., 2010).
I also distinctly recall (many) conversations with Caltech's Tom Heaton about peak ground velocity (PGV). Tom was adamant that PGV was a better indicator of building damage than PGA. Using PGA is acceptable for brittle structures, but PGV makes more sense for flexible structures, where period lengthening occurs once inelastic deformation begins. PGA also suffers from high-frequency spikes that have no correspondingly high velocity amplitudes. For more discussion on this point, refer to, for example, Bommer and Alarcón (2006).
So, not only did we need to map PGV as a parameter in ShakeMap but, according to Tom, perhaps it should be the parameter. I soon became convinced that we could marry instrumental intensity to peak ground motion measurements and solve two problems simultaneously. Wald, Quitoriano, Heaton, and Kanamori (1999) showed with simple regressions that bilinear relations between PGA and MMI and between PGV and MMI, which we referred to as an Instrumental Intensity Scale, would allow one to map instrumental parameters like PGA or PGV and depict the equivalent (converted) intensity (Figure 1). In that sense, audiences ranging from engineers to the lay public could be satisfied.
We replaced these simple relations later as the available data rapidly evolved, partly from more strong-motion recordings but also from large numbers of DYFI intensity observations. Additionally, our ground motion conversion equations (referred to as GMICE) evolved from a simple bilinear fitting (Wald, Quitoriano, Heaton, & Kanamori, 1999) to fully probabilistic, reversible orthogonal regressions (Worden et al., 2012) that included relations between spectral acceleration (SA) and MMI in addition to PGA and PGV. Both studies, and numerous others, have found PGV to be a better predictor of macroseismic intensity and damage than PGA or SA. We later found that PGV was also a better predictor of ground failure, as described below.
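To make the form of these conversions concrete, below is a minimal sketch of a bilinear GMICE of the kind described above. The coefficients and hinge point are illustrative placeholders I chose for demonstration, not the published Wald et al. (1999) or Worden et al. (2012) values.

```python
import math

# Illustrative bilinear GMICE: MMI is linear in log10(PGV) with a change in
# slope at a hinge point. All coefficients below are hypothetical placeholders.
C1, C2 = 3.8, 1.5      # intercept/slope below the hinge (illustrative)
C3, C4 = 2.9, 3.2      # intercept/slope above the hinge (illustrative)
HINGE = 0.5            # hinge in log10(PGV, cm/s) (illustrative)

def mmi_from_pgv(pgv_cm_s: float) -> float:
    """Convert PGV (cm/s) to an equivalent 'instrumental intensity'."""
    x = math.log10(pgv_cm_s)
    mmi = C1 + C2 * x if x <= HINGE else C3 + C4 * x
    return min(max(mmi, 1.0), 10.0)  # clamp to the usable MMI range

for pgv in (0.5, 5.0, 50.0, 120.0):
    print(f"PGV {pgv:6.1f} cm/s -> MMI {mmi_from_pgv(pgv):.1f}")
```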
We then decided to further refine communication of the intensity-PGM relations by developing a standardized legend (Figure 1). We first color-coded the intensity levels (or PGA or PGV) and portrayed the relationship simply for a more general understanding. Then we provided a simplified intensity description to summarize the MMI scale in as few words as practical. Intuitively, the ShakeMap legend shown in Figure 1 is quite pleasing: a one-unit increase in intensity corresponds roughly to a factor of two in PGA or PGV (i.e., log base 2), and the values of PGA in %g correspond roughly with PGV in cm/s (e.g., 100%g ∼ 100 cm/s). This correspondence can be explained by the fact that moderate-sized earthquakes, with spectral peaks centered near 1.5 Hz, dominated the data used to generate these conversion equations. For such motion, PGA in cm/s/s is about 2 × pi × 1.5 × PGV in cm/s, that is, greater by about a factor of 10; PGA in gals is therefore about 10 × PGV in cm/s, and PGA in %g is about 1 × PGV in cm/s.
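In equation form, assuming roughly harmonic motion at the ~1.5 Hz dominant frequency noted above:

```latex
\[
\mathrm{PGA} \approx 2\pi f\,\mathrm{PGV}, \quad f \approx 1.5\ \mathrm{Hz}
\;\Rightarrow\;
\mathrm{PGA}\,[\mathrm{gal}] \approx 9.4 \times \mathrm{PGV}\,[\mathrm{cm/s}]
\approx 10 \times \mathrm{PGV}\,[\mathrm{cm/s}],
\]
\[
\mathrm{PGA}\,[\%g] = \frac{\mathrm{PGA}\,[\mathrm{gal}]}{9.81\ \mathrm{gal}/\%g}
\approx 1 \times \mathrm{PGV}\,[\mathrm{cm/s}].
\]
```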
One debate we tackled was whether to dynamically scale the color coding for each ShakeMap to take advantage of the full spectrum and show the range of shaking. Using a fixed palette would result in small-magnitude events showing only a few colors and thus not well differentiating the range of shaking. But a fixed palette would also provide an immediate, intuitive sense of the overall scale of a shaking event as compared to other events. We opted for the fixed palette, saving yellows, oranges, and reds for truly strong to damaging earthquakes. We continue to prefer this approach but recognize that it limits the color palette for most ShakeMaps because there are many more small events than large ones. Initially, we intended to make ShakeMaps only for significant events, but ShakeMap's popularity resulted in the expectation of maps for just about any widely felt earthquake.
Our use of PGV to estimate MMI solved three additional challenges. First, small earthquakes can occasionally produce high accelerations that cause no damage; smaller events generate high-frequency accelerations, which, when integrated to velocity, are associated with low PGVs. The high PGAs would predict high intensities, but the associated low PGVs more accurately indicate low intensity. Using PGV as the intensity indicator was critical in the mid-2010s, when a significant increase in small yet shallow induced earthquakes in the central United States often generated PGAs of nearly 40%g, but their PGVs were just a few cm/s (Figure 1). We confidently mapped intensities from PGV, ignoring the potentially misleading conversion of PGA to MMI.
The second problem partly avoided by using PGV was magnitude and shaking saturation. With larger earthquakes (say, M > 7.0), PGA plateaus while PGV continues to grow. PGV is thus more consistent as a measure of larger earthquakes, for which the dominant periods grow beyond PGA's ability to capture increasing earthquake size. Longer-period PGV continues to "see" increasing fault area and slip, whereas higher-frequency PGA is primarily sensitive to the closest portion of the fault; additional fault length is not "seen" by PGA.
Lastly, using the ground motion-versus-intensity conversion equations (GMICE) meant we could use macroseismic and seismic data to constrain shaking for the same map. Thus, we could make ShakeMaps from modern or historical macroseismic data, seismically derived peak parameters (PGA, PGV, SA), or any combination thereof! Because macroseismic data potentially go back centuries, we can use such data to recreate the intensity field for any earthquake of interest for which felt shaking or damage reports were archived. It was also fortunate and timely to have advised Aaron Meltzner for two summers (also as a Caltech SURF student, now at Nanyang Technological University, Singapore) on historical macroseismic assignments. Based on archival newspaper entries about the earthquake shaking experienced and the resulting damage for the foreshocks and aftershocks of the great 1857 (M7.8) and 1906 (M7.9) California earthquakes, we were able to map out shaking distributions for the largest recorded events in California history. Thus, the ability to understand the historical value and significance of macroseismic data was another critical factor in our choice of intensity as the metric for the signature ShakeMap products.
A parallel advancement on the DYFI (macroseismic) side of the equation that significantly improved the spatial resolution of those data was the widespread availability of online geocoding services, once quite expensive and cumbersome, yet now instantaneous and free. Whereas we initially used only ZIP-code-level spatial aggregation of felt reports in the early 2000s (because we believed that most users would know their ZIP code), we now routinely extract each respondent's exact location via geocoding (Quitoriano & Wald, 2020), allowing more accurate intensities at higher-precision, 1-km-gridded locations. Others have taken advantage of these higher resolutions and more numerous DYFI reports to improve GMICE relations around the globe (e.g., Caprio et al., 2015) and intensity prediction equations (e.g., Atkinson & Wald, 2007).

Geospatial Interpolation of Shaking
Several geospatial challenges required attention in interpolating seismic recordings with macroseismic intensities. For any earthquake, the density and number of constraints vary greatly, ranging from no observations to a thousand or more stations and macroseismic values. So, the default model for ShakeMap is to use GMMs to predict the motion everywhere and then "condition" (constrain) the estimates wherever there are observations (Wald, Quitoriano, Heaton, Kanamori, et al., 1999; Worden et al., 2018). Even in areas with many seismic stations, most ShakeMaps still have large areas without observations, so the predictive element is important for nearly all ShakeMaps. A common geospatial problem is combining measurements of interest with prior model predictions to best estimate values (and their uncertainties) at unobserved locations of interest. Statistical interpolation methods include kriging (kriging-with-a-trend when prior prediction trends are employed), tessellation, and the multivariate normal. We have applied each as needed, depending on the nature of the constraints.

Site Amplification
For ShakeMap, the first challenge prior to interpolating ground shaking measurements was determining how to apply site amplification to the GMM shaking estimates. We chose Vs30 as the proxy due to its wide use and applicability in GMM development. The standard strategy used by others in the field and industry was to base Vs30 models on surficial geologic maps, with each spatial unit assigned an average Vs30 value based on the Vs30 measurements in that unit. Such models use geology as a proxy for Vs30 by determining the median Vs30 value within each geologic unit. Park and Elrick (1998) based their early map on three general geologic units, but later Wills and Clahan (2006) used more abundant Vs30 measurements and refined surficial geology maps, focusing on the crucial Quaternary units that host the lowest Vs30 values, and thus the largest amplifications.
However, because we aimed for nationwide, and eventually worldwide, ShakeMap production, we needed to develop a Vs30 site-condition map for the whole planet. How could we do that? Global geology maps at the time were coarse and inconsistent in quality and availability. Thus, we needed to resort to a proxy to map Vs30 globally. It turns out that a valuable proxy for Vs30 is topographic slope: topography allows for a simple correlation of mountainous (mostly rock) areas with comparatively low shaking and basins (soil) with amplified shaking (Wald & Allen, 2007). As an aside, topography also correlates remarkably well with population (Allen & Wald, 2009) because people tend to congregate in mountain-confined, flat basins that store underground water and are easy to build on. Wald and Allen (2007) first proposed this as an approximate solution to developing regional and global Vs30 maps because slope is well known everywhere on the planet (and geological maps are both of limited resolution and of limited applicability to surficial material properties). Our strategy has received both acclaim and criticism, but it is widely used wherever detailed geotechnical studies are lacking, which is still most of the planet! Thompson et al. (2014) later showed how one could apply kriging-with-a-trend to use the slope-proxy Vs30 map along with geological units and utilize the residuals of the Vs30 observations to condition the map at all Vs30 data points (see Figure 2). So, for global ShakeMap use, we have now developed a hybrid global Vs30 map that defaults everywhere to the Allen and Wald (2009) topographic-slope proxy but mosaics in better-constrained maps, such as Thompson et al.'s map for California, where available (Heath et al., 2020). Thus, the global mosaic can be slowly enhanced as better regional or national Vs30 data become publicly available.
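The flavor of the slope proxy can be sketched as a simple lookup, shown below. The slope breakpoints and Vs30 values here are hypothetical stand-ins for the published Wald and Allen (2007) tables, which also distinguish active from stable tectonic regimes.

```python
# Minimal sketch of a slope-based Vs30 proxy in the spirit of Wald & Allen (2007).
# Breakpoints and representative Vs30 values are illustrative placeholders only.
SLOPE_BINS = [            # (max slope in m/m, representative Vs30 in m/s)
    (1e-4, 180.0),        # flat basins: soft soil
    (2e-3, 270.0),        # gentle slopes: stiff soil
    (2e-2, 360.0),        # moderate slopes
    (0.1,  490.0),        # steep slopes: soft rock
]
VS30_ROCK = 760.0         # steeper than all bins: rock

def vs30_from_slope(slope: float) -> float:
    """Map a topographic slope to a proxy Vs30 value."""
    for max_slope, vs30 in SLOPE_BINS:
        if slope <= max_slope:
            return vs30
    return VS30_ROCK

print(vs30_from_slope(5e-5))   # flat valley floor -> 180.0 (amplified shaking)
print(vs30_from_slope(0.05))   # mountain flank    -> 490.0 (little amplification)
```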

Interpolation Schemes
ShakeMap interpolation algorithms have evolved considerably. Recall that the goal was to merge ground motion predictions, based on the Vs30 value at each site (grid point) of interest, with both seismic station shaking parameters and observed macroseismic intensities. Wald, Quitoriano, Heaton, Kanamori, et al. (1999) initially used a brute-force interpolation, downweighting shaking estimates and observations based on their uncertainties and accommodating the frequency- and distance-dependence of shaking inferred from all observations. We moved from initially using Generic Mapping Tools' (GMT; Wessel & Smith, 1995) "surface" routine (basically a spline), with little appreciation of our uncertainties, to an uncertainty-weighted interpolation that took account of the spatial correlation of the observations (Worden et al., 2010).
Unbeknownst to me, several valuable geospatial processing algorithms were finding their way into more widespread use in the geophysical literature and had become computationally quite tractable. So, Wald et al.'s (1999b) strategy of combining uncertain shaking estimates and observations was more elegantly solved by Worden et al. (2018), using the multivariate normal (MVN) to account for these uncertainties and conditioning the final estimates by using cross-correlation of intensity measures (from observations as well as from the model) in both frequency and space. Worden et al.'s MVN strategy resulted in more accurate sampling and uncertainty calculations. It also allowed for a more refined "bias correction" (removal of a trend between the data and GMMs) akin to the standard now used in multistage GMM regression models (for details, refer to Worden et al., 2018).
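A toy sketch of the MVN conditioning step follows: prior (GMM) log-motion estimates at grid points are updated using residuals at observation sites, with covariances from a spatial correlation function. The exponential correlation model, its 10-km length, and the single sigma are my simplifying assumptions; the actual Worden et al. (2018) treatment separates within- and between-event terms and cross-correlates intensity measures across frequency.

```python
import numpy as np

def exp_corr(d_km: np.ndarray, corr_len_km: float = 10.0) -> np.ndarray:
    """Exponential spatial correlation decaying with separation distance."""
    return np.exp(-d_km / corr_len_km)

def condition_on_obs(mu_grid, mu_obs, y_obs, d_oo, d_go, sigma=0.6):
    """Posterior mean of log ground motion at grid points, given observations.

    mu_grid/mu_obs: prior (GMM) means; y_obs: observed values;
    d_oo/d_go: obs-obs and grid-obs distance matrices (km).
    """
    cov_oo = sigma**2 * exp_corr(d_oo)            # Sigma_oo
    cov_go = sigma**2 * exp_corr(d_go)            # Sigma_go
    w = np.linalg.solve(cov_oo, y_obs - mu_obs)   # Sigma_oo^{-1} (y - mu_o)
    return mu_grid + cov_go @ w                   # mu_g + Sigma_go Sigma_oo^{-1} (y - mu_o)

# Toy example: two stations, three grid points.
mu_grid = np.zeros(3)                             # GMM log-amplitude predictions at grid
mu_obs = np.zeros(2)                              # GMM predictions at the stations
y_obs = np.array([0.5, -0.2])                     # observed log amplitudes
d_oo = np.array([[0.0, 20.0], [20.0, 0.0]])
d_go = np.array([[2.0, 22.0], [10.0, 12.0], [30.0, 15.0]])
print(condition_on_obs(mu_grid, mu_obs, y_obs, d_oo, d_go))
```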

GMM Selection
Another source of concern in using ShakeMap operationally was the challenge of GMM selection, particularly for earthquakes within subduction zones. Overall shaking levels produced by interface, intraslab, and crustal earthquakes are considerably different for events in each tectonic environment, so one must select the likely source regime early on; this choice affects all resulting calculations. Generally, shaking for similar magnitudes can vary depending on the stress regime, fault orientations, depth ranges, and degree of fault maturity (total geological slip). Given the uncertainties of rapidly determined hypocentral depth and source mechanism, it is sometimes tricky to confidently infer the nature of the source. As a result, we sometimes switched GMMs as our source parameters were revised, resulting in substantial changes in predicted ground motions and thus losses. We first developed a strategy for determining which tectonic regime the source was in and then ascertained the most likely source type based on its depth and mechanism (Garcia, Wald, et al., 2012). We later augmented this strategy to include multiple, weighted GMMs, forming a Ground Motion Characterization Model, or GMCM, and then weighted each GMCM based on the likelihood of each subduction zone earthquake type being the source. ShakeMap currently (2023) uses a system wherein small changes in earthquake location, magnitude, and depth result in only smoothly varying changes in the applied GMCM and the resulting output ground motions.
Our need to better define the three-dimensional (3D) subduction zone interface geometry, to know whether the event of interest was above, at the interface of, or within the subducting slab, motivated a side project, Slab1.0 (Hayes et al., 2018), that delimited the 3D slab geometries around the globe. Hayes et al. (2018) continue to refine the slab models, which have become a standard tool for inferring earthquake sources and geometry and for a wide range of other tectonic and geodynamic applications.

Fault Dimensions
A critical component for ground motion predictions that use GMMs is establishing the extent of fault rupture, or rather its geometry, at least for moderate-to-large events (∼M6.0 and larger). Referred to as fault finiteness, fault geometry often controls the location and amplitude of the strongest shaking, particularly when directivity effects become dominant. Source directivity, or the focusing of energy in the direction of rupture, can profoundly affect shaking and thus structural response. Years earlier, I had spent considerable time determining large earthquakes' spatial and temporal slip distributions, recognizing the importance of source characteristics, including finiteness and directivity, in producing strong motions (Wald, 1993). Finite-fault analyses illuminate the contributions of slip rise-time, slip velocity, rupture velocity, and source directivity to the resulting ground motions. Once established, we could feed the fault dimensions into the GMMs used in ShakeMap to improve shaking estimates considerably.
In ShakeMap, conditioning the GMMs on the fault location was then simply a matter of using the proper distance measure for each of the many GMMs deployed. The lack of a finite fault model in the immediate hours after a large earthquake motivated two important developments. First, as an approximation when the faulting geometry was unknown, we used mapping schemes that apply magnitude-scaled corrections to hypocentral distance. Following a study by the Electric Power Research Institute (EPRI, 2003), we assumed that an unknown fault of appropriate size could have any orientation, and we used EPRI's equations to compute the distance that produced the median ground motions of all possible fault orientations passing through the hypocenter. Thus, for each point at which we wanted ground motion estimates, we computed this distance and used it as input to the GMM. We also adjusted the shaking uncertainty of the estimates to account for the lack of knowledge of the fault geometry. Later, Thompson and Worden (2018) developed much more comprehensive distance measures that approximate finiteness when only the hypocenter is known, and we currently use these in ShakeMap (Worden et al., 2020).
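The underlying idea can be sketched with a small Monte Carlo experiment, below: given only a hypocenter and magnitude, assume a fault of magnitude-appropriate length with random orientation and position through the hypocenter, and take the median site-to-rupture distance over many trials as the GMM distance input. This is my illustration of the concept, not the closed-form EPRI (2003) equations; the length scaling is a Wells and Coppersmith (1994)-style relation.

```python
import numpy as np

rng = np.random.default_rng(42)

def median_rupture_distance(epi_dist_km: float, mag: float, n: int = 2_000) -> float:
    # Illustrative subsurface rupture length (km), Wells & Coppersmith (1994)-style.
    length = 10 ** (-2.44 + 0.59 * mag)
    strikes = rng.uniform(0.0, np.pi, n)          # random fault orientations
    t0 = rng.uniform(-length, 0.0, n)             # random along-strike hypocenter position
    dists = np.empty(n)
    for i in range(n):
        # Fault is a line segment through the hypocenter (origin) at a random
        # strike; the site sits at (epi_dist_km, 0). Take the closest point.
        t = np.linspace(t0[i], t0[i] + length, 50)
        fx, fy = t * np.cos(strikes[i]), t * np.sin(strikes[i])
        dists[i] = np.hypot(fx - epi_dist_km, fy).min()
    return float(np.median(dists))

print(median_rupture_distance(epi_dist_km=50.0, mag=7.5))  # less than 50 km
```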
An even more ambitious and rigorous strategy was to produce finite-fault models of the rupture geometry more rapidly, as well as the spatial and temporal slip distribution (rise time and rupture velocity) of the fault. At the time of my Ph.D. work (in the late 1980s), each new earthquake required roughly 6 months of diligent work to adequately model a finite fault because data were primarily analog, requiring digitization, and most software tools used were one-off and often idiosyncratically customized. Applying these tools in near-real-time remains a work in progress, but tremendous gains have been made in the intervening three decades. Without detailing the rapid evolution of novel approaches to efficiently determine fault finiteness, suffice it to say that for some earthquakes, fault modeling is a critical constraint for ShakeMap and thus for all downstream products (e.g., Wald et al., 2021). The plethora of finite-fault models generated for earthquakes worldwide led me to work with another Caltech SURF intern, Matt Bachmann, to help develop what I dubbed a "Finite Fault Repository" in 2000. For each finite-fault model we could access, we provided and represented the seismic traces in map view, depicting the relationship between the fault orientation and the resulting directivity pulses. Martin Mai (Mai & Thingbaijam, 2014) took over this effort, and newly published fault models are deposited and archived in their SRCMOD database in a standardized format.
At Caltech in about 2001, graduate student Chen Ji led an effort, along with Caltech's Don Helmberger and me, to improve the resolution of finite fault parameters more systematically and quickly (Ji et al., 2002a, 2002b). Then, in late 2002, my wife Lisa and I were offered the opportunity to transfer to Golden, Colorado, to continue our USGS work at the National Earthquake Information Center (NEIC). Recognizing the need for fast finite fault models at NEIC, we worked with Chen, then at U.C. Santa Barbara, to help us adapt his teleseismic inversion codes for operational use at NEIC (e.g., Hayes et al., 2011). We later also worked with Caltech's Hiroo Kanamori to help implement his mantle-wave, moment magnitude (Mw) code (Kanamori & Rivera, 2008) at the NEIC (Hayes et al., 2009). With the former, NEIC could rapidly and remotely (within a few hours) resolve fault finiteness for M > 7.0 earthquakes around the globe with teleseismic data alone; with the latter, we could determine Mw for the greatest earthquakes the planet could produce. Efforts are ongoing at the NEIC to augment our teleseismic finite-fault inversion strategy with strong-motion and geodetic data (Goldberg et al., 2022). Inverting global and local seismic data jointly with geodetic observations can help constrain not only the slip distribution but also increase the resolution of both rupture and slip timing, as shown by Wald and Heaton (1994) and numerous studies since.

Scenario ShakeMaps
In planning and coordinating emergency response, utilities, local government, and other organizations are best served by conducting training exercises based on realistic earthquake situations like those they are most likely to face. ShakeMap scenario earthquakes can help fill this role. A scenario represents one realization of a potential future earthquake by assuming a particular magnitude, location, and fault-rupture geometry and estimating shaking. Whether computed with empirical GMMs or by 3D ground motion simulations (for example, Graves et al., 2011; Frankel et al., 2018), the deliverables, formatted with standard ShakeMap layers and served via ShakeMap webpages, allow users to consider mitigation actions and run earthquake drills with the same products and formats they will ultimately use after any significant real earthquake. In addition to historical and near-real-time applications, ShakeMap has thus become widely used for earthquake mitigation and planning exercises through earthquake scenarios (e.g., Worden et al., 2020). The Federal Emergency Management Agency (FEMA) was an early adopter and frequent scenario requestor, with the first instance in 2001 for an M7.0 on the Rose Canyon Fault, examining such a scenario's impact on the San Diego, California, region. To facilitate widespread use of ShakeMap with FEMA's Hazus loss-estimation software (NIBS-FEMA, 2006), we began to generate ShakeMap output formatted specifically for input into Hazus. Through collaboration with FEMA, the USGS has provided numerous such scenarios for use in National Level Exercises (NLEs) widely used by federal, state, and local agencies in many seismically active states. For example, Figure 3 shows selected ShakeMap scenario intensity maps for certain regions of the country atop a basemap of the USGS National Seismic Hazard Model. By selecting faults of particular interest, we can offer the usual ShakeMap products for planning exercises and mitigation analyses.
All that is required is to assume that a particular fault or fault segment will (or did) rupture over a certain length and with a chosen magnitude, and then to generate one file describing the fault geometry and another describing the magnitude and hypocenter of the ostensible earthquake. ShakeMap can then estimate the ground shaking at all locations over a chosen area surrounding the fault and produce a full suite of data products, just as if the event were an actual earthquake. We have developed tools to make generating a ShakeMap earthquake scenario relatively easy. However, we make several specific configuration changes for scenario events compared with actual events triggered by the seismic network. We certainly do not want to automatically deliver scenario alerts to customers anticipating actual events. Nevertheless, to be useful, the ShakeMap scenario products and maps are identical to those made for actual earthquakes, with one exception: we label them with the word "SCENARIO" prominently displayed to avoid potential confusion.
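As a rough illustration of those two files, the sketch below writes a minimal event description and fault-trace geometry. The field names loosely follow ShakeMap v4 conventions (Worden et al., 2020), but treat the exact attribute names and values here as hypothetical; consult the ShakeMap Manual for the authoritative formats.

```python
import json
import pathlib

# Hypothetical scenario inputs: one file for the ostensible event's magnitude
# and hypocenter, one for the fault geometry. Attribute names are illustrative.
event_xml = """<earthquake id="rosecanyon_m7p0_se" netid="us" lat="32.72"
  lon="-117.16" depth="8.0" mag="7.0" time="2001-01-01T12:00:00Z"
  locstring="Scenario: M7.0 Rose Canyon Fault" event_type="SCENARIO"/>"""

rupture_geojson = {            # simplified fault trace as GeoJSON
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {"reference": "hypothetical scenario rupture"},
        "geometry": {"type": "MultiLineString",
                     "coordinates": [[[-117.25, 32.60], [-117.10, 32.95]]]},
    }],
}

pathlib.Path("event.xml").write_text(event_xml)
pathlib.Path("rupture.json").write_text(json.dumps(rupture_geojson, indent=2))
```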
The scenario business is quite robust. ShakeMap scenarios, as well as the downstream products, are in demand by a variety of use sectors. Our strategy includes taking specific requests for one-off scenarios, typically from critical users or local, state, and federal governments and agencies; generating collections of scenarios for regions that often perform exercises or drills (California, Hawaii); and building standard suites of off-the-shelf collections and posting them online (refer to the BSSC Catalogue). More detailed information on ShakeMap scenarios is available in the ShakeMap Manual (Worden et al., 2020).

Historical ShakeMaps
As mentioned previously, an important side benefit of utilizing macroseismic data in ShakeMap, although not as well appreciated as the real-time and scenario maps, is that these data can be assigned to historical earthquakes for which eyewitness accounts and damage data were preserved. For example, macroseismic effects for events such as the great 1906 M7.9 San Francisco earthquake were extremely well documented (e.g., Boatwright & Bundock, 2008; Lawson, 1908) and led to a well-constrained ShakeMap (Figure 4). In Japan, a comparable number of seismic stations recorded the great 2011 M9.1 Tohoku earthquake, allowing a direct comparison of these two monumental events separated by over a century. The 1906 ShakeMap example is the modern equivalent of the intensity mapping of Lawson (1908), one of the first comprehensive studies of such a major urban earthquake.
In aggregate, using historical intensities as well as published finite-fault models and ground motion recordings (where available), we have generated over 14,000 ShakeMaps for earthquakes around the globe from 1900 through 2020. This "Atlas" of ShakeMaps (Garcia, Mah, et al., 2012; Version 4, updated online: https://earthquake.usgs.gov/data/shakemap/atlas/) is critical for depicting and understanding the shaking and effects of past earthquakes. It is essential for earthquake forensics, for calibrating models of earthquake losses and ground failure estimates, and simply for depicting shaking for events of historical significance.

Spatial Variability in ShakeMap
ShakeMaps, when used operationally, or for historical earthquakes that are well constrained by data, explicitly include a component of the true correlated spatial variability contained in the observations. By default, scenario ShakeMaps (or those with few observational constraints) provide only the median peak ground motion estimates (Worden et al., 2020) and do not capture the random variation among the intensity measures (IMs) and between observations. That is, above and beyond the relatively smooth shaking predictions made using GMMs, the greater spatial variability of ground shaking produced naturally by fault finiteness, directivity, wave propagation, and site amplification is not realized. However, it is widely understood (e.g., Bazzurro & Luco, 2005) that to capture potentially correlated losses, proper treatment of the IM correlations requires a thorough statistical sampling of the GMMs' inter- and intra-event correlations. In fact, capturing these effects, particularly at the higher range (right tail) of the loss distribution that is of most concern to reinsurers, requires computing losses for hundreds of (real or scenario) earthquake realizations. Some samples will undoubtedly have strong shaking concentrated in areas of dense buildings or portfolios, and those cases will have the heaviest losses. For scenarios, or for events with no data, hundreds of randomized samples of such distributions are not computationally overwhelming (Silva & Horspool, 2017). However, when many stations contribute, generating numerous ShakeMap realizations can be prohibitive (Verros et al., 2017). Verros et al. provided an initial strategy for efficiently computing many versions, and the ShakeMap team later refined this process (Bailey et al., 2022). Because storing and delivering hundreds of realizations is impractical, rather than computing and hosting them, we chose to provide post-processing tools that allow users to render multiple realizations for any ShakeMap.
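A compact sketch of generating such correlated realizations is shown below: draw spatially correlated residual fields via a Cholesky factor of the covariance matrix and add them to the median log-motion field. The exponential correlation model and 10-km correlation length are my stand-ins for published spatial-correlation models from the literature, and the 1-D grid is a toy.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 40                                         # grid points along a line (toy 1-D "map")
x_km = np.linspace(0.0, 100.0, n)
dist = np.abs(x_km[:, None] - x_km[None, :])   # pairwise separations (km)
cov = (0.6 ** 2) * np.exp(-dist / 10.0)        # sigma = 0.6 ln units (illustrative)

# Cholesky factor turns independent normals into a correlated residual field.
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
median_ln_pgv = np.full(n, np.log(10.0))       # flat median field: 10 cm/s

realizations = median_ln_pgv + (L @ rng.standard_normal((n, 200))).T
print(realizations.shape)                      # (200, 40): 200 correlated shaking fields
```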

Developing Near-Real-Time Global Earthquake Loss Models
Above and beyond constraints on shaking (i.e., a ShakeMap), making shaking-induced loss estimates requires an inventory of exposed buildings or other assets (e.g., bridges, pipelines, insured portfolios), including the seismic fragilities or vulnerabilities of each asset. [There are varying definitions, but a fragility function relates shaking to a damage state (minor, moderate, major). In contrast, a vulnerability function directly relates shaking to an outcome (e.g., fatalities or dollar losses)]. In addition to building such inventories, given the estimated damage state for each structure, a mechanistic (or "physics-based") approach to modeling casualties also requires demographic data about building occupancy (as a function of time of day) and an estimate of the casualty rate given each building type's damage condition, particularly for collapse (So & Spence, 2013). For financial loss estimates, additional estimates of each building's value and repair/rebuilding cost are needed (Jaiswal & Wald, 2008, 2012).
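In the standard lognormal form used throughout the field, these two notions can be written compactly; here Φ is the standard normal CDF, θ_ds and β_ds are the median capacity and dispersion for damage state ds, and r_ds is a mean loss ratio per damage state:

```latex
\[
P(\mathrm{DS} \ge ds \mid S = s) \;=\;
\Phi\!\left(\frac{\ln(s/\theta_{ds})}{\beta_{ds}}\right)
\qquad \text{(fragility)},
\]
\[
\mathbb{E}[\mathrm{loss} \mid S = s] \;=\;
\sum_{ds} r_{ds}\, P(\mathrm{DS} = ds \mid S = s)
\qquad \text{(vulnerability)}.
\]
```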
Despite improved datasets of building footprints (e.g., Google Maps), national population statistics, and housing surveys, at the global scale the unknowns in the data mentioned above vastly outnumber the knowns. Hence, the assumptions necessary for rapid loss estimates beget empirical approaches. Alternative near-real-time loss estimation strategies with a purely physics-based approach are currently beyond reach, due both to the lack of data constraints needed for the models and to computational challenges.

Empirical Efforts Versus Mechanistic Model Strategies
Most mechanistic models ultimately rely on matching (empirical) data to gain some confidence in the physics used. Several seismological examples come to mind: probabilistic seismic hazard analyses still predominantly use empirical GMMs rather than physics-based wave-propagation models; site amplification is routinely based on Vs30 (or empirical site terms) in GMMs rather than on the measured impedance contrasts in the soil column; and regional earthquake loss models often use macroseismic intensity-based vulnerability curves rather than ground motions propagated through to structural response calculations. To this day, empirical models drive building codes and insurance rates and direct the priorities for mitigation resources.
However, the value of mechanistic models in informing empirical models cannot be overstated. The form of regression-based empirical GMMs, for example, benefits from heuristics learned from physical models of wave propagation. Moreover, empirical models are less data-rich than physical models. However, therein lies a more critical issue: empirical models, by nature, can be easily calibrated; mechanistic models usually have too many poorly constrained physical parameters to separate and calibrate them all (and even when they can be calibrated, they become almost impossible to apply except under ideal circumstances). These realities and realizations led to the development of PAGER's operational global empirical loss models for fatalities and economic impacts, calibrated for individual (or groups of) countries.

Developing Empirical Loss Models for PAGER
Developing an empirical model capable of reliably producing shaking-based loss estimates worldwide presented significant challenges and required new datasets and modeling approaches. Around 2008, our strategy for deriving empirical loss (vulnerability) functions for PAGER began with developing an extensive catalog of historical ShakeMaps (ShakeMap Atlas, Version 1.0; Garcia, Mah, et al., 2012). For each earthquake, we then computed the population exposed to each shaking intensity level (Expo-Cat; Allen, Wald, et al., 2009) and the associated losses due to shaking (PAGER-Cat; Allen, Marano, et al., 2009). At this point, we had the tools to at least compare the population exposed to each shaking intensity level with past event losses in near-real-time.
We referred to this system and series of products as expoPAGER. Ultimately, after analyzing nearly 40 years of significant earthquakes, these three ingredients (historical ShakeMaps, the population exposed to each intensity at the time of each earthquake, and each event's impacts) allowed us to derive country-specific vulnerability functions (Figure 5). These functions are the basis of the PAGER empirical model, which we use to estimate losses for events worldwide (Figure 6, after Jaiswal & Wald, 2010). We referred to this system and series of products as lossPAGER and, at that point, began working on depicting and communicating uncertain losses. In parallel with the operational empirical fatality and economic loss models, we developed semi-empirical models (Jaiswal & Wald, 2011) and analytical models (Jaiswal & Wald, 2012) for loss estimation. The motivation for choosing these three modeling approaches was that they are known to be applicable under different conditions. The empirical model depends primarily on having numerous instances of earthquakes with both ShakeMaps and available reported fatality and economic losses with which to calibrate vulnerability curves. In contrast, the semi-empirical and analytical models rely on building-specific fragility curves to estimate structural collapse rates, which are then assigned associated fatality rates that allow loss calculations. Thus, both mechanistic models require inventories of buildings and their fragilities. The difference between the semi-empirical and analytical models is principally that the former uses empirical models of fragility as a function of intensity, whereas the latter is based on analytical models of damage as a function of shaking.
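The empirical model's core is small enough to sketch: a country-specific fatality rate that is a lognormal function of shaking intensity, summed over the exposed population at each intensity level. This follows the published functional form (Jaiswal & Wald, 2010), but the theta and beta values below are placeholders, not published country coefficients.

```python
import math
from statistics import NormalDist

PHI = NormalDist().cdf  # standard normal CDF

def fatality_rate(mmi: float, theta: float, beta: float) -> float:
    """Lognormal fatality rate as a function of intensity (Jaiswal & Wald, 2010 form)."""
    return PHI(math.log(mmi / theta) / beta)

def expected_fatalities(exposure_by_mmi: dict[float, float],
                        theta: float, beta: float) -> float:
    """Sum (rate x exposed population) over intensity levels."""
    return sum(pop * fatality_rate(mmi, theta, beta)
               for mmi, pop in exposure_by_mmi.items())

# Hypothetical event: population exposed per intensity level (ShakeMap + LandScan).
exposure = {6.0: 500_000, 7.0: 200_000, 8.0: 50_000, 9.0: 5_000}
print(f"{expected_fatalities(exposure, theta=14.0, beta=0.2):,.0f}")  # placeholder coefficients
```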
Because building inventories and their structural responses, as well as building occupancies, are widely known and available in highly developed countries, particularly those with substantive building code implementation (e.g., the Hazus methodology in NIBS-FEMA, 2006), the mechanistic models are expected to be suitable for countries with advanced anti-seismic designs. [Note that Hazus uses mapping schemes to estimate building type and occupancy class. Therefore, it generally does not have specific building inventories]. What's more, the success of those building codes has led to fewer fatal earthquakes, making empirical models more challenging to calibrate from past events. Conversely, empirical models are suitable where losses are frequent and building inventories and fragilities are difficult to obtain. For regions that have experienced numerous earthquakes with known fatalities (typically developing countries with dense populations living in vulnerable structures), sufficient data exist to calibrate fatalities from the historical earthquake record alone (Jaiswal, Wald, Earle, et al., 2009; Jaiswal, Wald, & Hearne, 2009). Meanwhile, in such regions, building inventories are typically lacking, as are systematic analyses of their vulnerabilities; hence, analytical tools are inadequate for loss estimation. The two mechanistic models are better constrained in nations where adequate building inventories, costs, and fragility curves are available.
Our initial intuitive expectation was that we would develop an operational system that combined the best features of the empirical, semi-empirical, and analytical PAGER models, each weighted according to its country-specific accuracy and uncertainty. We decided that analytical models would receive more weight in those countries where inventories and loss models had matured to the point of validation (e.g., Hazus in the United States). We expected empirical models to serve best primarily where such constraints were lacking and, conversely, where loss data were more robust.
For domestic (U.S.) earthquakes, FEMA's Hazus (FEMA/NIBS, 2006), which is effectively an analytical model, provides more precise (though not necessarily more accurate) loss estimates than even the current PAGER models, with additional loss details at the census-tract level, including casualties, debris, building tagging, and others (Wald et al., 2019). [Hazus fragilities are analytically derived, but despite this mechanistic nature of the model, Hazus still requires heuristic and empirical assumptions, such as the fatality rate (assumed to be 5% or 10%, depending on structure type)]. However, Wald et al. (2019) determined that Hazus' economic loss models were not particularly accurate for events with lower damage thresholds and tended to overpredict fatalities compared to PAGER's calibrated model.
A critical realization was that although we could calibrate the empirical model, with uncertainties determined from its performance in hindcasting, mechanistic models, consisting of more numerous, complex dependent variables, are restricted to forward modeling (past building inventories do not exist). Moreover, evaluating the latter models' performance was only possible for the very few earthquakes for which extensive data on inventory and losses per structure type were available. For these reasons, it was challenging not only to evaluate in which countries the semi-empirical model is applicable, but also to constrain which components of the model would need improvement, without much more extensive and comprehensive (geolocated) loss data (Wald, Earle, et al., 2008).
A purely empirical model entails several assumptions and other limitations; such models break down where data are insufficient for calibration. Thus, for countries with few earthquakes since 1970 (before which population data are poor), Jaiswal and Wald (2010) aggregated countries into groups with similar vulnerabilities based on economics, building practices, and climatic considerations (which drive building practices, like wall material and thickness). Here again, heuristics and specialized mechanistic models of specific building types helped inform the empirical model's development.
However, given the limited data for each country or group of countries, it was difficult to subdivide these earthquakes further to calibrate additional parameters expected to be necessary. For instance, building occupancy typically shifts between day, night, and commute times. But this could not be readily incorporated into our empirical model, because dividing a small data set into three periods resulted in too few data points for robust vulnerability-curve calibration. Likewise, although we can approximately correct population exposure with time, building inventories and construction practices can change (albeit typically slowly), modifying the overall national vulnerability compared to the average over four decades. Much of this change happens regionally, and urban centers are often altered more than rural areas. Empirical models cannot easily account for such changes.
Whereas empirical models, by nature, can be calibrated and recalibrated, mechanistic models afford more opportunities for testing causal pathways and offer the potential for refined geospatial impact assessments, with additional loss metrics that can be modeled. The two mechanistic models have inventories, fragilities, and demographic (occupancy) data sets that generate many content-rich output products. For instance, we could estimate the number of building collapses for each building type and show the geographic distribution of estimated losses. Thus, even lacking complete confidence in the overall losses, the semi-empirical model can infer which structure types will dominate, for example, Urban Search and Rescue needs, potentially helping to guide post-event decision making.
Our global empirical loss-model calibration revealed some important differences in national population vulnerabilities as they were explored and quantified for the first time (Figure 6). For example, we noted that fatality rates were nearly four orders of magnitude higher in Iran than in California (Jaiswal & Wald, 2010). Based on our models, at intensity IX the fatality rate in Iran was near 20%; that is, one in five people exposed to intensity IX shaking in the past perished. In California, the rate was only 0.003%. These vast differences in fatality rates can be attributed primarily to the effects of lax building code enforcement and heavy, predominantly adobe residences in Iran, compared with strict building codes and predominantly wood-frame residences in California (e.g., Spence & So, 2021). These are the vulnerabilities at the extremes; other nations fit somewhere in between, again primarily according to the collective vulnerability of the buildings in the region.
Despite calibration, the inherent uncertainties in PAGER loss estimates leave them constrained to only about half an order of magnitude. A critical recognition was that communicating highly uncertain earthquake losses, particularly fatalities, could be fraught unless they were carefully presented. After myriad iterations and informal user feedback sessions, we chose to portray uncertain fatality estimates with an Earthquake Impact Scale (Wald et al., 2011), shown with the histograms atop Figure 7. These histograms are based on two complementary criteria. One, the estimated cost of damage, is most suitable for domestic (U.S.) events; the other, the estimated fatality ranges, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from the systematic analysis of past earthquake impacts and associated response levels, are effective in communicating the predicted impact and the response needed. Alerts are green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). PAGER's simple "traffic light" alerting levels and fatality and economic loss histograms led to its signature product, the onePAGER (Figure 7). Also included in that product is the primary, event-specific content the PAGER system generates: population exposed to each intensity level, intensity and population in nearby cities, maps of population and intensity, descriptions of vulnerable structure types, and the potential for additional losses due to ground failure hazards, all detailed in the USGS PAGER Fact Sheet (Figure 8; for details refer to Wald et al., 2010).
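The threshold logic behind the alert levels can be sketched as below. The fatality and economic-loss boundaries shown match my reading of the published Earthquake Impact Scale (Wald et al., 2011), but confirm against that paper before reuse; the operational system also weighs the full loss probability distribution rather than a single point estimate.

```python
# Sketch of the Earthquake Impact Scale threshold logic. Assumed boundaries:
# fatalities <1 green, 1-99 yellow, 100-999 orange, 1,000+ red;
# dollars <$1M green, $1M-$100M yellow, $100M-$1B orange, >=$1B red.
FATALITY_BINS = [(1, "green"), (100, "yellow"), (1_000, "orange")]
ECON_BINS_USD = [(1e6, "green"), (1e8, "yellow"), (1e9, "orange")]

def alert(value: float, bins: list[tuple[float, str]]) -> str:
    """Return the alert color for an estimated loss value."""
    for upper_bound, color in bins:
        if value < upper_bound:
            return color
    return "red"

print(alert(30, FATALITY_BINS))       # yellow: regional impact and response
print(alert(2.5e9, ECON_BINS_USD))    # red: international-scale response
```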
The rationale for a dual approach to earthquake alerting stems from recognizing that relatively high fatalities, injuries, and loss of housing predominate in countries where local building practices lend themselves to high collapse and casualty rates. These impacts help with prioritization for an international response. In contrast, financial and overall societal impacts (rather than casualties) often determine the level of response in regions or countries where prevalent earthquake-resistant construction practices significantly reduce building collapse and the resulting fatalities. For example, FEMA's response, whether a single-region, multi-region, or national-level response, was considered in setting the color-coded levels of PAGER's estimated financial impacts (Wald et al., 2011) because the agency is responsible for considering levels of response and recovery funding.
Working in conjunction with colleagues at FEMA, we developed a hybrid product, the twoPAGER (Figure 9), with a combination of PAGER and Hazus loss models for domestic earthquakes (Wald et al., 2019). The twoPAGER provides content about losses beyond that of PAGER, including county-specific losses and estimates of the number of buildings expected to be green-, yellow-, and red-tagged during subsequent building inspections. For post-event analyses, the comparison of PAGER and Hazus economic losses (arrow in Figure 9) can inform confidence in the estimates, which is essential because these evaluations are often used for state or federal disaster declarations (Wald et al., 2019).

Data Collection Advancements
Support for our extensive data collection efforts, coding, and product development and delivery was provided initially by a presidential disaster relief supplement for Indonesia after the great (M9.1) 2004 Sumatra earthquake and tsunami, replenishing expenses incurred by the U.S. Agency for International Development's (USAID) Office of Foreign Disaster Assistance. Then, with research and development support from the USGS, we put an initial vision for PAGER into place with additional team members. The USGS team we assembled consisted of seismologists Paul Earle, Lynda Lastowka, Trevor Allen, and Daniel Garcia, loss modeler Kishor Jaiswal, geophysicist Kristin Marano, and scientific programmer Mike Hearne. Early collaboration on the PAGER project was beneficial, including with civil engineer Keith Porter and, later, scientists at the Global Earthquake Model (GEM) consortium.
Then-emerging data sets on hazard, impact, and exposure components were key drivers of our success. We were able to tap into the evolving LandScan population database (e.g., Bright et al., 2011), which Oak Ridge National Laboratory was developing and constantly improving. The alternative to a population grid was to use city and settlement databases, which are notoriously hard to define for large cities and often miss small cities and rural settlements; shaking would have to be sampled at an assumed point location for each city, although the population is in fact distributed. In contrast, sampling with a grid over the region counts the population more accurately. Notably, the LandScan data provided temporal snapshots of the global population. With resampling based on census growth curves, we estimated the population exposed at each intensity level for thousands of historical earthquakes (Jaiswal & Wald, 2010). Globally, the LandScan 1-km-gridded population model served as the foundation for estimating population exposure at each intensity level for past earthquakes (by back-correcting for population growth) as well as for present ones.
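The grid-based exposure calculation is conceptually simple. The following sketch shows the kind of per-intensity population tally that underlies such exposure estimates; the co-registered intensity and population arrays, their names, and the toy values are assumptions for illustration, not the PAGER implementation.

```python
# A minimal sketch, assuming co-registered 2-D arrays: `mmi` (ShakeMap-style
# intensity) and `pop` (LandScan-style gridded population counts).
import numpy as np

def exposure_by_intensity(mmi: np.ndarray, pop: np.ndarray) -> dict:
    """Sum population within each whole intensity level (MMI V-X)."""
    exposure = {}
    for level in range(5, 11):
        mask = (np.round(mmi) == level)   # bin by nearest whole intensity
        exposure[level] = int(pop[mask].sum())
    return exposure

# Toy example: 3x3 grids with 1,000 people per cell.
mmi = np.array([[4.2, 5.6, 6.1], [5.1, 7.4, 6.8], [4.9, 5.0, 8.2]])
pop = np.full((3, 3), 1000)
print(exposure_by_intensity(mmi, pop))
# -> {5: 3000, 6: 2000, 7: 2000, 8: 1000, 9: 0, 10: 0}
```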
Even though the latest ShakeMap Atlas (Version 4, online) includes important earlier events that we might use for calibration, the population uncertainty for those events would be too large for them to be helpful. So, we settled on 1970 as the earliest date for which we were comfortable extrapolating population distributions for loss-model calibration. However, rapidly growing populations, urbanization, and mass migration, along with changes to building codes (and thus seismic behavior), cannot be captured in an empirical loss model like ours. These changes are, in some cases, a significant source of model uncertainty. A mechanistic model could handle such changes via updates to exposure and vulnerability databases; however, the temporal and spatial variables needed to track such changes introduce similar challenges, given census, housing, and other demographic uncertainties.

The Diffusion of Innovation and Information: Communicating Uncertain Shaking Patterns and Their Impacts
Innovations entail both logistical constraints and careful consideration of communication strategies, the latter of which wasn't really covered in earthquake school. As such, we followed the tenets of the Diffusion of Innovation (Rogers, 2003) to bridge the gap between our work and the public. The strategies we considered for facilitating the use of these shaking and impact assessment systems included avoiding jargon, adopting memorable acronyms, using catchy terminology, and developing recognizable signature products (e.g., Rogers, 2003). Effective marketing and enlisting early adopters to help spread the word on the utility of such products were also consistent with lessons learned from Rogers' insights. Another key consideration, and a common pitfall in conveying science to non-scientists, was communicating uncertainties in a way that gave users reasonable expectations and avoided undermining trust in these products.

Memorable Acronyms & Catchy Terminology
DYFI started as the Community Internet Intensity Map but was popularly coined "Did You Feel It?" based on the web interface's initial query (Wald et al., 2010). Initially, I was concerned that this colloquial phrasing would not be appropriate following a destructive earthquake, but the term held. We have yet to have a sufficiently devastating earthquake in the United States to test this concern.
Our earliest acronym for PAGER was "RAGE," Rapid Assessment of Global Earthquakes. While it was descriptive, as USGS' Woody Savage said, "Don't you think that sounds a bit angry?" We appreciated this early public relations lesson and instead went ahead with "PAGER," Prompt Assessment of Global Earthquakes for Response. Of course, at the time we came up with the PAGER acronym, messaging really did require a pager (a small telecommunication device that received radio signals from a paging network). Texting via smartphones was not yet part of the scene. Fortunately, "ShakeMap" has no such legacy issues! The USGS chose to trademark "ShakeMap," "ShakeCast," and "Did You Feel It?" to avoid the potential commercialization of these products. Because it is an acronym widely used with other phrases and words, we did not attempt to trademark "PAGER."

Signature Products
Whether in marketing, education, or the communication of science, it is vital to inculcate your audience with repetition until products are immediately identifiable and understood. In marketing, a signature product (i.e., a brand) is a calling card for a product or company. Typically, signature products should be visual, exciting, and memorable. Appreciating this reality led us to develop one-page signature products for each hazard and loss system. The advantage is instant product recognition; the disadvantage is the need to limit revisions that might impede that recognition. Once the product is popularized, essential elements are difficult to revise.
With ShakeMap, the primary consideration was providing an easily understandable metric for shaking, and the chosen strategy was adopting intensity as the default intensity measure (IM), as mentioned earlier. Introducing intensity did require some inculcation through repeated examples to a wide range of users. At the same time, we were also introducing intensity to the public in California via the DYFI system. ShakeMap's signature product is a portrait-mode, color-coded (contoured or continuous-field) intensity map overlaid on shaded topography, depicting seismic stations as triangles colored according to their shaking amplitudes. Mountains and population centers provide geographic landmarks. Because ShakeMaps were initially made only in southern California, where the ubiquitous location reference is the freeway, that layer was added and instituted. We also marked the location of Disneyland. Our TV-specific map had simplified labels and a legend, but a common media request was to "make it move." Despite the considerable gain in information a ShakeMap provided, our media friends in Hollywood were interested in animating this new product: they often continued portraying magnitude and epicenter with vibrating and expanding concentric circles. Alas.
To provide some context on the technological evolution during the development of these products: ShakeMap, now color-coded by intensity values, was originally an ASCII map (Figure 10). Readers might be entertained by the reminder that the nature of email and text messaging at the time of ShakeMap's 1998 introduction limited communications to very-low-bandwidth notifications. Early ShakeMap webpages were generated at this time, yet email alerts contained only this low-bandwidth ASCII version. Once we moved to images, the priority became color-coding the intensity maps for more intuitive interpretation. We chose a color-mapping scheme to align with the typical national weather map, which used (and still uses) the rainbow palette.
Current scientific standards for color-mapping, and the need for palettes more accessible to those with color-vision deficiencies (e.g., Zeller & Rogers, 2020), have led to new color palettes. However, they may be significantly less intuitive for the novice (not well versed in contour maps). We have decided to stick with the original rainbow palette for consistency with the now widely accepted ShakeMap signature products. We reserve the option of modifying that palette based on continued user feedback.
As mentioned earlier, PAGER's dual alerting levels and its fatality and economic loss histograms led to the development of its signature product, the onePAGER (Figure 7). However, it is PAGER's one-word alert level that gets most widely reported and that constitutes the users' recognition of the significance of the potential impact.

Effective Marketing & Early Adopters
DYFI took off organically as a popular web-based citizen-science tool, with the number of contributors continuously expanding. Without any real marketing, its adoption was facilitated only by a few of us at the USGS Pasadena field office, mostly by contacting online media outlets after earthquakes to add web links to DYFI and by distributing USGS Fact Sheets and other printed materials. DYFI benefitted from the natural tendency of those who felt an earthquake to want to share their experience.
ShakeMap was also widely adopted organically. It was simply a better way to depict the severity and distribution of shaking intensity and potential impact than earlier attempts. In part, this was due to adopting the latest mapping methods, but in addition, the contours and color coding allowed an intuitive understanding of where shaking was potentially damaging.
During the first 5 years of systematic ShakeMap and DYFI deployment, the USGS held numerous user-centered workshops separately for the public, the media, and a wide range of technical users. During this time, the Caltech/USGS-led Earthquake Research Affiliates (ERA) held quarterly briefings on recent earthquakes and emerging technologies. This forum was vital for informing potential product users of the underlying science and the advantages of adopting new tools. The ERA meetings were attended by utility managers, earthquake engineers, and often members of the local news media; the latter were interested in reporting on earthquakes, getting access to the latest information, and developing relationships with the local earthquake experts. These interactions led us to develop media-focused workshops, resulting in a better understanding of the products and their widespread use in post-earthquake TV and radio reporting. We even made changes in the color-mapping and formatting of ShakeMap to be more conducive to broadcast television, including a separate NTSC-safe TV map (issues now obsolete, superseded by the widespread adoption of interactive geospatial interfaces within the broadcast media).
The ubiquitous presence of ShakeMap front and center on the USGS webpages after any significant earthquake led to widespread access and use, particularly in California, where it was initially rolled out. In contrast, ShakeCast and PAGER, with more targeted, technical users, warranted the use of early adopters to help spread the word and explain the technology. For ShakeCast, among the key early adopters we met through the ERA meetings was Caltrans bridge engineer Loren Turner. Loren was recognized by the USGS with the prestigious Powell Honor Award in 2008, which honors someone not employed by the U.S. federal government for contributions to the mission of the USGS. Loren ultimately led a long-term USGS-Caltrans collaboration, with Caltrans financially supporting the development of ShakeCast for two decades (e.g., Turner et al., 2009). Caltrans' early implementation of the ShakeCast system, and Turner's internal and external promotion of its value, were vital in getting others to adopt ShakeCast for their utilities, buildings, businesses, insured properties, schools, and other critical facilities (Lin et al., 2020).
Here, too, we followed the tenets of the Diffusion of Innovation (Rogers, 2003), which recommend using familiar jargon, or in our case, familiar formats. Although ShakeCast warrants and uses modern databases, we recognized that most users (typically aided by consultation with civil engineering colleagues, who provide structural fragilities for their facilities) used Microsoft Excel as their primary tool for portfolio inventories. Rather than require users to provide their ShakeCast data in database form, we adopted the Excel spreadsheet as the default platform and developed a sophisticated spreadsheet for users to populate and configure their ShakeCast instance with their facilities, fragilities, user contact information, and system configurations. The ShakeCast Workbook thus became the primary interface for ShakeCast users to either configure and run their own system or to provide the same information to the USGS, facilitating our efforts to run remote ShakeCast instances for key critical users (Lin et al., 2020).
From the beginning, our primary external partner and early adopter for PAGER was USAID. They funded PAGER's initial and subsequent development and became a primary user and backer of PAGER's utility for post-earthquake decision-making worldwide. We send all PAGER alerts to the USGS earthquake event webpages and deliver notifications to critical users; of the roughly 600 alert recipients, over 200 are USAID in-country coordinators around the globe. Many recipients are at watch offices (the White House and FEMA, for instance) that redistribute alerts internally; others are aid, response, government, corporate, finance, insurance, and other decision-making agents. USAID's widespread early adoption of PAGER as a primary notification for potential response and Urban Search and Rescue needs facilitated discussions with many other early adopters.

Communicating Uncertainties
All the products mentioned are models, with inherent levels of uncertainty. For example, DYFI intensities are based on individual answers to a macroseismic questionnaire, averaged over a chosen area, assigned numerically, and weighted to correlate with past expert assignments (Wald et al., 2010). Similarly, ShakeMap is a model and depiction of the shaking field (Wald et al., 2021). With dense seismic and macroseismic observations, we can reduce uncertainties; however, the amplitude and frequency content of the seismic wavefield vary rapidly in space, and these variations remain spatially aliased to some degree, so we are not fully representing the true shaking field.
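For readers curious about the aggregation step just described, here is a schematic of the idea. The logarithmic mapping from a community weighted sum (CWS) of questionnaire indices to a decimal intensity follows the form published in Wald et al. (1999); the specific weights and index values below are simplified placeholders, not the operational DYFI parameters.

```python
# A schematic of DYFI-style aggregation: questionnaire answers become indices,
# the indices are summed with weights into a Community Weighted Sum (CWS), and
# CWS is mapped to a decimal intensity. Coefficients follow the log-linear
# relation of Wald et al. (1999); weights here are illustrative placeholders.
from math import log

def community_decimal_intensity(indices: dict, weights: dict) -> float:
    cws = sum(weights[k] * v for k, v in indices.items())
    return max(1.0, 3.40 * log(cws) - 4.38)  # natural log; floored at MMI I

weights = {"felt": 5, "shelf": 5, "damage": 5, "stand": 2, "picture": 2, "motion": 1}
indices = {"felt": 1.0, "shelf": 0.4, "damage": 0.0, "stand": 0.5, "picture": 0.3, "motion": 0.8}
print(round(community_decimal_intensity(indices, weights), 1))  # -> 3.2
```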
It is also well known that communicating model uncertainties to non-scientists is challenging (e.g., Pollack, 2005). Describing the uncertainties and the nature of near-real-time hazard and loss models, particularly with evolving updates, required some patience from developers and users alike (e.g., Wald et al., 2021). Importantly, for PAGER, we provide uncertainty measures in the form of a histogram of likely losses binned at order-of-magnitude (common logarithm) thresholds, from which our technical users can gauge the likelihood that the alert is over- or underestimated (Wald et al., 2011). For fatality and economic loss estimates, we use the median value to determine which range of losses constitutes the alert level; the median loss value falls within the chosen alert loss range, but we do not report the median value itself.
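The histogram itself can be thought of as a lognormal loss distribution sliced at order-of-magnitude boundaries. The sketch below illustrates that idea; the uncertainty parameter (zeta, the standard deviation of log10 losses) and the bin edges are illustrative stand-ins, not PAGER's calibrated, country-specific values.

```python
# A minimal sketch of the loss-histogram concept: given a median fatality
# estimate and a lognormal uncertainty, compute the probability mass in each
# order-of-magnitude bin. Parameter values are illustrative assumptions.
from math import log10
from scipy.stats import norm

def bin_probabilities(median, zeta=0.5, edges=(1, 10, 100, 1000, 10000, 100000)):
    """Probability that losses fall in each histogram bin (log10 space)."""
    mu = log10(median)
    cdf = lambda x: norm.cdf((log10(x) - mu) / zeta) if x > 0 else 0.0
    bounds = (0,) + edges + (float("inf"),)
    probs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        hi_cdf = 1.0 if hi == float("inf") else cdf(hi)
        probs.append(hi_cdf - cdf(lo))
    return probs

# Example: a median of 300 fatalities puts most probability mass in the
# 100-1,000 bin, which would drive an orange alert.
labels = ["0", "1-10", "10-100", "100-1k", "1k-10k", "10k-100k", ">100k"]
for rng, p in zip(labels, bin_probabilities(300)):
    print(f"{rng:>9}: {p:.2f}")
```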

Putting It All Together: An Earthquake Information System
It was never apparent in our original scheming that our work would ultimately lead to rapid assessment of global earthquake impact. Nevertheless, the natural course of evolution of these real-time earthquake information products led to precisely that. Actionable pre- and post-earthquake information is most useful in a form that can be easily understood and that is informative enough for users to make critical decisions about activities and the use of resources. Just as most of the public, and even decision-makers, cannot simply tie an earthquake's magnitude and location to a complete perspective on the significance of the event, a ShakeMap only depicts how the intensity of shaking was distributed. DYFI adds a critical element: whether the event was widely experienced. However, determining the exposed population (rather than just the reporting population) requires additional computations to answer the next natural question: How many people experienced each shaking intensity level? This was the goal of the prototype PAGER system. In addition, PAGER computed the country-specific human and economic losses based on the exposure at each intensity level.
In aggregate, the systems and products described form an Earthquake Information System, as depicted in Figure 11. The dependency of ShakeMap on magnitude and hypocenter, for example, and the need for ShakeMap input to PAGER, ShakeCast, and Ground Failure, make it obvious that end-to-end development and internal communications are both fundamental for smooth operations. Such dependencies require interoperability in that changes to one system must be accommodated by downstream products. Note, however, that many external entities make direct use of our product links, feeds, or Application Programming Interfaces (APIs) to access a variety of the product flavors and layers in near-real time, so we must take care in changing products. Not breaking things for others typically requires that we keep our products and formats backwards-compatible, archaic as they may seem, for some time as standards and technologies change.
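As an illustration of why such interoperability matters, the dependencies in Figure 11 behave like a directed graph: when an upstream product (say, the origin) is revised, every downstream product must be re-run in dependency order. The sketch below is a simplification of Figure 11, not the operational orchestration code.

```python
# An illustrative dependency graph for a subset of the products in Figure 11.
# Mapping is product -> required inputs; DYFI is an optional ShakeMap input.
from graphlib import TopologicalSorter

deps = {
    "ShakeMap":      {"origin", "DYFI"},   # magnitude/hypocenter + macroseismic data
    "PAGER":         {"ShakeMap"},
    "ShakeCast":     {"ShakeMap"},
    "GroundFailure": {"ShakeMap"},
}

# Re-run order after an upstream revision (ties may appear in any order):
print(list(TopologicalSorter(deps).static_order()))
# e.g., ['origin', 'DYFI', 'ShakeMap', 'PAGER', 'ShakeCast', 'GroundFailure']
```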

DYFI
DYFI was immediately informative and popular; in fact, its popularity required rapid scaling of our web presence and still sets the bar for the capacity of USGS servers and third-party content-caching services. In that initial phase of DYFI, we received an inordinate number of odd reports from the ZIP code 90210, well known to many worldwide due to the American teen television drama "Beverly Hills, 90210." Peak rates in the first decade of the system reached 50 entries per second. DYFI's collection of personally identifiable information also required Paperwork Reduction Act approval. However, based on the previous macroseismic surveys done by the USGS with nearly identical postal questionnaires, we got approval. (Unfortunately, my scheme to ask each contributor if they would like to order a pizza delivered to their home seemed inconsistent with our role as a federal agency.) DYFI has been operating for over two decades in the United States and nearly 18 years globally. The survey has collected over 6 million individual DYFI intensity reports during that period. DYFI allows for macroseismic data collection at rates and quantities never imagined. High-quality MMI maps can be made almost immediately, with complete coverage at a higher resolution than in the past. DYFI also allows for valuable positive interactions between laypeople and a U.S. science agency. Widespread adoption of DYFI and ShakeMap has facilitated the general acceptance of macroseismic intensity, fundamentally improving the USGS's ability to communicate both hazard and risk to the population. DYFI effectively confirms the importance of reporting and of inculcating the public's understanding of intensity and magnitude for a proper perspective on earthquake risk-related decision-making (Celsi et al., 2005). Furthermore, the vast amount of DYFI data allows for rich analyses in otherwise intractable seismological, sociological, and earthquake impact studies, including quantifying the shaking due to induced earthquakes, human response and risk perception, relating recorded shaking metrics to macroseismic effects, and the attenuation of intensity as a function of magnitude and distance.

ShakeMap and ShakeCast
The report by the National Research Council's ad hoc Committee on the Economic Benefits of Improved Seismic Monitoring (National Research Council, 2006) addressed the value of seismic monitoring, including ShakeMap and ShakeCast. For example, in Chapter 7, "Benefits for Emergency Response and Recovery," the committee refers to the 1999 M7.1 Hector Mine, California, earthquake: The very rapid availability of earthquake source data, including magnitude, location, depth, and fault geometry, provides basic orienting information for emergency responders, essential information for the news media and the public, and input data for other applications and response-relevant products. Maps of ground shaking intensity (ShakeMap) have many important applications in emergency management. Because ShakeMap is available via the internet, all emergency responders at all levels of government and the private sector have access to the same rapidly available information. With this information, responders can quickly assess the scope of the emergency and mobilize resources accordingly. Early reconnaissance efforts can target areas known to have been shaken most severely, and key emergency services, including search and rescue, emergency medical response, safety assessment of critical facilities, and shelter and mass care, can be expedited based on a more rapid identification of incident location. Monitored information is also useful for rapidly assessing situations in which a large, widely felt earthquake occurs but causes little damage (such as the Hector Mine earthquake of October 16, 1999). Clearly, there are significant economic benefits in scaling a response to the consequences of an event, including no response for an earthquake that requires none.
An initially unanticipated yet ultimately widespread use of our post-earthquake information products came from the financial sector. Post-earthquake financial decision-making has evolved considerably in the past three decades. Today, insurers and reinsurers, private companies, governments, and aid organizations utilize near-real-time earthquake information for loss estimation, financial adjudication, and situational awareness. These financial analyses can significantly benefit stakeholders by facilitating risk-transfer operations, fostering sensible management of risk portfolios, and assisting disaster responders. Ultimately, these improvements can translate to benefits for the public and those at risk (e.g., Franco, 2015; Wald et al., 2021).
Catastrophe bonds and contingency loans can now be triggered via parametric analyses, which depend on earthquake source parameters or shaking estimates and their uncertainties (Wald & Franco, 2017). Such loans are available in six Latin American and Caribbean countries (Collich et al., 2020). Other financial products rely on ShakeMap and PAGER to deliver relief funds within 72 hr of a disaster. There are also direct-to-consumer insurance products that now rely on ShakeMap metrics. For example, one insurance company makes parametric, trigger-based earthquake insurance available to individuals in California, with triggers based on the regions that experience over 30 cm/s of peak ground velocity as reported by ShakeMap.
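A parametric trigger of this kind is deliberately simple: its entire logic is a threshold test on a ShakeMap value sampled at the insured location. The sketch below illustrates the concept; the grid arrays, lookup function, and toy values are assumed stand-ins for parsing an actual ShakeMap grid product.

```python
# A minimal sketch of a parametric insurance trigger, assuming a ShakeMap-style
# lat/lon grid of peak ground velocity (PGV). Names and values are illustrative.
import numpy as np

PGV_TRIGGER_CM_S = 30.0  # threshold cited in the text (cm/s)

def pgv_at(lon, lat, lons, lats, pgv_grid):
    """Nearest-neighbor lookup of PGV (cm/s) at a site in the grid."""
    i = int(np.argmin(np.abs(lats - lat)))   # nearest grid row
    j = int(np.argmin(np.abs(lons - lon)))   # nearest grid column
    return float(pgv_grid[i, j])

# Toy axes and a 2x3 PGV grid; real use would parse the ShakeMap grid product.
lons = np.array([-118.4, -118.3, -118.2])
lats = np.array([34.0, 34.1])
pgv = np.array([[12.0, 28.0, 41.0],
                [9.0, 22.0, 35.0]])

site_pgv = pgv_at(-118.2, 34.0, lons, lats, pgv)
print(site_pgv, site_pgv >= PGV_TRIGGER_CM_S)   # -> 41.0 True (payout triggered)
```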
In the earlier years of catastrophe modeling for risk reduction, risk analysts were mainly concerned with risk-reduction options through engineering strategies, and relatively little attention was given to financial and economic strategies (Shah et al., 2018). However, creative, rapidly triggered microinsurance products could yet be developed that utilize ShakeMap- and PAGER-related parameters to level the playing field for renters and homeowners who cannot otherwise afford standard insurance products (e.g., Shah et al., 2018).

PAGER
Figure 12 maps the PAGER summary alerts (the maximum of the estimated fatality and economic loss alert levels) from September 2010 through the end of 2020, and Figure 13 evaluates the alert levels determined by the PAGER system against the alert levels for the reported losses for each event (Wald, Marano, et al., 2022). The vast majority of events evaluated, including some large-magnitude (M ≥ 7.0) events, result in green alerts, and these assessments provide users with the immediate ability to "stand down." Conversely, any alert in the orange or red category results in a rapid (30-60 min after origin time) "heads up" that action may be necessary. Wald, Quitoriano, Goded, et al. (2022) showed that loss estimates are often improved with the updated alerts, more closely matching actual impacts and resulting in more accurate alert levels. Overall, alerts from our initial, automatic results correlated well with the reported losses and are thus deemed highly useful. Few events are off by even one alert level, and only 2 of 6,259 events (0.03%) missed the mark by two levels.

Ground Failure Estimates
As of this writing, our near-real-time estimates of earthquake-triggered landsliding and liquefaction have been in operation for only a few years. However, in that time, the Ground Failure (GF) product has been produced for 320 events, with 25 resulting in elevated hazard or exposure alerts for landslides and 47 for liquefaction (Allstadt et al., 2022). In a qualitative comparison between the GF product alerts and ground-failure occurrence information, we found that the product succeeds at assigning appropriate alert levels in most cases (Allstadt et al., 2022). It has often been used to inform reconnaissance (e.g., Thompson et al., 2020; Zimmaro et al., 2020) and is depicted and distributed widely in scientific circles, such as the Landslide Blog (e.g., Petley, 2020).

Open-Source Development, Data, and Models
The Office of Management and Budget (OMB) and Office of Science & Technology Policy (OSTP) requirements that federal science remain openly available and in the public domain turn out to be hugely beneficial rather than problematic. Likewise, the fact that government employees cannot directly benefit from their deliverables is also advantageous in several ways. Each hazard and loss model component needed for these real-time earthquake information systems has been made publicly available. National and global risk models are often proprietary, so the open-source data and tools developed during our product development were widely cited and utilized. The online versions of Slab1.0 (Hayes et al., 2012) and now Slab2.0 (Hayes et al., 2018) are particularly well used, as is the topographic slope-based global Vs 30 grid. The Global Earthquake Model (GEM) Consortium's OpenQuake risk model uses the PAGER-developed inventory as a default for regions without better constraints. Moreover, the ShakeMap Atlas has found users around the globe, eyeing historical ShakeMaps and using them for loss calibration and earthquake forensics of many flavors. Their wide use is facilitated by the open distribution of these products and their underlying models and datasets. The ShakeMap software itself, freely available on GitLab, has been adopted by many seismic networks around the world. The operators of those networks look to us not only for key research and development but also for ongoing operational support.

Challenges
The wide-ranging uses of our products, including by financial, response, and aid decision-makers, and their wide exposure via the media place additional responsibility on us, the producers of these information systems. To a large extent, the advancement of post-earthquake financial instruments has been facilitated by the availability of rapid and accurate earthquake parameters and more quantitative geospatial hazard information. However, commensurately, USGS products like ShakeMap and PAGER have evolved to further accommodate specific financial-sector requirements. We described the need for proper metadata, documentation, versioning (of software, events, and products), and archiving as essential ingredients for definitive catastrophe bond triggers (Wald & Franco, 2017), requiring us to document ShakeMap policies and procedures that affect such large financial instruments (Wald et al., 2021).
There is a fundamental tradeoff between speed and accuracy in rapid shaking and loss estimation. This tradeoff holds for magnitude and hypocentral location, as well as for every product downstream of those derived parameters. Like magnitude and location, all earthquake parameters and products, including ShakeMap, PAGER, and ground-failure probabilities, are not raw data; they are models, and all models are imperfect. As such, they are uncertain, and there may be alternative (sometimes better) solutions. In general, earlier models are more uncertain; our strategy has been to update quickly and frequently to accommodate new data and converge on best estimates. We adopted one key protocol in the early years of PAGER alerting: we make ShakeMap shaking and PAGER loss estimates rapidly and automatically, independent of human review in most cases. Color-coded PAGER alerts are automatically delivered, except for initial orange or red alerts. For those, we provide "pending" alerts until the PAGER team can verify that the earthquake source parameters (primarily the magnitude and depth) have stabilized.
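The release protocol just described can be summarized in a few lines of logic. This is a schematic of the policy, with illustrative function and field names, not the operational PAGER code.

```python
# A minimal sketch of the PAGER release protocol described above: green and
# yellow alerts go out automatically; initial orange and red alerts are held
# as "pending" until the source parameters are verified.

AUTO_RELEASE = {"green", "yellow"}

def release_decision(alert_level: str, is_first_alert: bool,
                     source_reviewed: bool) -> str:
    if alert_level in AUTO_RELEASE:
        return "release"   # lower-impact alerts go out automatically
    if is_first_alert and not source_reviewed:
        return "pending"   # hold initial orange/red until magnitude/depth verified
    return "release"       # reviewed (or subsequently updated) high alerts go out

print(release_decision("orange", is_first_alert=True, source_reviewed=False))  # -> pending
print(release_decision("red", is_first_alert=True, source_reviewed=True))      # -> release
```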
The significant improvements in alerting accuracy resulting from evolving source parameters and data constraints imply that many of the initial uncertainties stem from the hazard components of the model rather than from the loss estimates. More detailed analyses of individual and country-specific event collections indicate loss-model deficiencies that would benefit from reexamination. In particular, the data from recent years, which are not yet in our models, are key because the ShakeMaps are more robust, the losses are better quantified and reported, and the population datasets are more accurate than for earlier events. A caveat is that reported losses, particularly those at the highest ranges in countries overwhelmed by response, are also uncertain. And, at the lower end of the impact range, losses are sometimes not readily or openly reported, so a lack of loss data does not necessarily mean that there was no damage.
The performance goal for PAGER is that median loss estimates have half-an-order-of-magnitude accuracy, which means occasionally being off by that much (although alerts themselves are less frequently "off"). Wald, Marano, et al. (2022) provided a 10-year retrospective on the accuracy of PAGER alerts. Though we do not update the loss models PAGER employs in near-real time, source and ShakeMap input updates can significantly affect PAGER loss estimates. Further, operational challenges and mistakes can contribute additional embarrassments when improper earthquake magnitudes or depths are released, only to be recalled or updated. Such errors are not part of the formal PAGER uncertainties, since we calibrate against final earthquake catalog parameters. Nonetheless, Wald, Marano, et al. (2022) show reasonable ranges of alerts compared to actual losses for 2010-2020, with final alerts converging on actual losses better than initial alerts.
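To make the half-order-of-magnitude target concrete: in log10 space it corresponds to a factor of 10^0.5, so a median estimate is considered on target when the reported loss L satisfies

```latex
% Half an order of magnitude in log10 space is a factor of 10^{0.5} \approx 3.16:
\left| \log_{10} L - \log_{10}\hat{L} \right| \le 0.5
\quad\Longleftrightarrow\quad
\hat{L} / 10^{0.5} \;\le\; L \;\le\; 10^{0.5}\,\hat{L}.
```

For example, a median estimate of 1,000 fatalities is, under this criterion, consistent with reported tolls between roughly 320 and 3,160.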
It is challenging to accurately predict fatalities for borderline damaging events with no or few fatalities. For events with few fatalities, casualties are nearly random, depending on the collapse of a wall or perhaps one or two buildings among the many shaken. However, even one fatality likely indicates many more injuries, so response is essential. For this reason, many users select alerting based on the yellow alert level, indicating the likelihood of either one or more fatalities or over $1M in losses.
The accuracy of PAGER's alert levels is not the only way to evaluate PAGER's usefulness. The basic contextual information provided on the onePAGER summary (Figure 7) adds valuable situational awareness, and the combination of shaking intensity and population, along with their spatial variations, provides essential context for scientific and engineering analyses, as well as for response and financial decision-making. Nonetheless, it is fundamental that we reevaluate and improve the accessibility, delivery, and presentation of all near-real-time earthquake information materials. Likewise, every time a new generation of the Atlas ShakeMaps is generated due to algorithmic updates, or when several additional years of ShakeMaps are vetted, the PAGER and Ground Failure models need to be recalibrated. Thus, the updating process is ongoing.
Mother Nature tends to provide reasonably long gaps between damaging earthquakes. Despite this, responding to global events has been an arduous task for my colleagues and me over the last few decades. Being dedicated to the success of these systems and the accuracy of their shaking and impact estimates is challenging because, statistically, they are guaranteed to be wrong sometimes, which is frustrating and potentially embarrassing. But initially inaccurate does not necessarily mean unhelpful. Rapid updates of all these systems mean that uncertainties or mistakes can be corrected and reposted early on. Major earthquakes seem to occur in the middle of the night, on weekends, or during vacations more often than fairness and statistics dictate. Nevertheless, the rewards of providing these valuable services, and the feedback provided by users over the years, make it all worthwhile.

Looking Forward
Advancements in hazard and consequence modeling form the core of the USGS's strategy to deliver rapid earthquake shaking and loss estimates. One primary goal is to improve the operational capabilities of the USGS National Earthquake Information Center for responding to earthquakes around the globe. We continue to compile, develop, and refine key openly available models and datasets that contribute to calibrating these systems and the information they produce. In addition to enabling near-real-time products, the science, software, and datasets behind these systems continue to advance studies of earthquake shaking and its impact by the seismological, engineering, financial, and risk-modeling communities. In return, feedback from product consumers has been the primary driver of our scientific innovations and product-development efforts. Many of the innovations described herein resulted from realigning scientific priorities to accommodate users' desires for new types of content in more informative formats.
Another critical aspect of product integration and development is leveraging earthquake-hazard and loss-modeling science done internally (within the USGS) and by external researchers and collaborators. In parallel with the developments described, we benefitted from and often collaborated with partners and colleagues around the globe. Many of the tools we used depended fundamentally on datasets and algorithms developed by others, such as building damage data, fragilities, and GMMs, often funded by the USGS, the National Science Foundation, and other federal agencies. The large efforts continuing in parallel to ours are too numerous to summarize. Of note, however, Erdik et al. (2014) outlined rapid earthquake loss assessment efforts by colleagues at GEM, as well as real-time earthquake information produced in Japan, New Zealand, Italy, and Taiwan.
We are continuing efforts to focus on a few critical gaps not quite tractable with our current tools and technologies. For example, advances in remote sensing, rapid in situ impact reporting, and machine learning, combined with new datasets such as global building footprints, may allow for innovative data-fusion strategies. These approaches can integrate with existing models and could significantly improve the accuracy and spatial resolution of our shaking and loss estimates. Our efforts build on earlier geospatial strategies that use satellite imagery to improve the post-earthquake damage quantification needed for Post-Disaster Needs Assessments (PDNA; Loos et al., 2020). I do not address the rapidly expanding realm of Earthquake Early Warning (EEW), both in the United States and worldwide, because those systems are not in my purview. However, the natural course of evolution of the USGS Earthquake Information System could include the integration of EEW with continuously updating earthquake information products, starting at the time of the earthquake and following with accurate shaking depictions, rapid impact assessments, and detailed loss information that can be used for recovery (Wald, 2021).
We recently described two strategies underway aimed at updating uncertain ground-failure and loss models (Wald, Xu, et al., 2022). The first uses reported fatalities to update PAGER fatality estimates (Noh et al., 2020). The second utilizes satellite imagery (NASA's damage proxy maps) within a model-updating strategy that uses a Bayesian causal graph to determine where, and which specific earthquake processes (shaking, landsliding, or liquefaction), contributed to post-earthquake image changes (Xu et al., 2022). Our initial results using both approaches indicate (a) that updating the PAGER fatality model can prevent cases where PAGER losses are initially significantly off (e.g., at the wrong alert level) by quickly allowing updates, and (b) that the imagery, while slower than ground-truth observations, provides more spatially accurate impact assessments, well beyond the capabilities of the generalized loss and ground-failure models.
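To convey the flavor of the first strategy, here is a conceptual sketch, not the actual method of Noh et al. (2020): if the log10 of the final toll is treated as approximately Gaussian, a model prior and a report-derived estimate can be fused by precision weighting, pulling a badly low initial estimate toward the accumulating evidence. All parameter values below are illustrative.

```python
# A conceptual sketch of report-based updating of a lognormal fatality
# estimate: fuse the model prior with a report-derived estimate in log10
# space via precision weighting. Parameters and names are illustrative.
from math import log10

def update_estimate(prior_median, prior_sigma, reported, report_sigma):
    """Gaussian fusion in log10 space; returns (posterior_median, posterior_sigma)."""
    mu_p, mu_r = log10(prior_median), log10(reported)
    w_p, w_r = 1 / prior_sigma**2, 1 / report_sigma**2      # precisions
    mu = (w_p * mu_p + w_r * mu_r) / (w_p + w_r)            # weighted mean
    sigma = (w_p + w_r) ** -0.5                             # reduced spread
    return 10 ** mu, sigma

# Prior badly low (median 8 fatalities); early reports already indicate ~200.
print(update_estimate(8, 0.7, 200, 0.3))   # -> posterior median ~121, sigma ~0.28
```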
These types of updating strategies benefit from ongoing research and development, additional case histories, and working out communication and operational considerations. A significant gap in the use of imagery for building-loss assessment remains, in that our PAGER loss models are computed and provided only in aggregate form (total losses). The most practical use of the causal-graph strategy with satellite imagery requires an a priori model of the locations of impacted buildings, even if imperfect. We may accomplish this with a revised implementation of the PAGER semi-empirical model, which includes estimates of losses for different building types. The facility with which we can do this depends on the country of interest: some national building-loss models are of better quality than others. Ultimately, it would be highly beneficial and worthwhile to improve and incorporate these capabilities into the operational earthquake response toolkit at the National Earthquake Information Center. Optimistically, satellite imagery, particularly Interferometric Synthetic Aperture Radar, is expected to become more readily available with the launch of the NASA-ISRO Synthetic Aperture Radar mission in 2023 and with parallel efforts in the commercial sector (The Economist, 2022), making these data more suitable for rapid post-event analyses. Some of our current challenges may be alleviated over time. For instance, more ubiquitous crowd-sourced macroseismic assignments and denser seismic instrumentation (both free-field and within structures) could gradually reduce uncertainties in shaking hazard and, thus, loss estimates. Recalibration of our loss models is ongoing as more events occur. We are planning to revamp the PAGER loss functions based on the new ShakeMap Atlas (Version 4, online), covering the latest decade of ShakeMap and loss data, which have better constraints than the historical events we used previously. Explicitly including ground-failure loss estimates in PAGER loss calculations could bring loss estimates better in line with actual impacts where these secondary hazards contribute significantly above and beyond shaking-induced impacts.

Improving Access and Equity
Better recognition of the many challenges within disaster equity (e.g., Douglass & Miller, 2018) requires more consideration of the accessibility of information products and of how we and our users might better use them for more equitable response, recovery, and planning (e.g., Loos, 2021). We are analyzing what we might do to alleviate barriers to entry for USGS earthquake information products and how we might better inform critical users in their efforts to bring equitable response and recovery to those affected worldwide. These concerns are now front and center as we move toward addressing our role in reducing disaster inequities (Loos, 2021).
California has more ShakeMap-suitable strong-motion seismic stations than other parts of the United States. Their quality and density assure well-constrained, near-real-time shaking estimates and better forensics after damaging events, from which engineering and scientific lessons can be learned, as well as offering the potential for earthquake early warning. California is home to fully three-quarters of the annualized economic seismic risk in the United States (Jaiswal et al., 2017). Naturally, the risk in California warrants better station coverage and mitigation efforts than elsewhere in the Nation. However, many portions of the Nation with significant (albeit lesser) risks are rather poorly covered by seismic instrumentation; thus, ShakeMaps in those areas rely on DYFI data and shaking inferences (Worden et al., 2020). Various cost-benefit considerations can be used to weigh these needs against other urban social, education, healthcare, and disaster-mitigation priorities.
Worldwide, such sociotechnical disparities are even more apparent. Several nations have dense, real-time seismic networks capable of informing post-earthquake response, including Japan, Italy, New Zealand, Taiwan, and, more recently, China. Several large metropolises are also well instrumented, such as Istanbul and Mexico City. One indication of the limited number of regions with sufficient station coverage and openly available data is the set of areas capable of supporting second-generation, earthquake-parametric triggered catastrophe bonds (Franco, 2015). Dense station coverage is needed to secure such bonds, and only a few locales worldwide (Tokyo, Istanbul, Mexico City, Los Angeles, and San Francisco) have such station density. Nevertheless, many countries with the most significant seismic risk (Silva et al., 2020), such as Haiti, have minimal seismic instrumentation (Calais et al., 2022). More ubiquitous use of inexpensive micro-electromechanical systems (MEMS) accelerometers provides an option for filling in instrumentation gaps, as standard seismometers and running a seismic network are costly (e.g., Calais et al., 2022).
Further, countries with limited station coverage often have low participation in DYFI and other citizen-science felt-report collection efforts (e.g., Bossu et al., 2018), potentially due to limits of internet access, education, language, and economic factors (Hough & Martin, 2021). Such countries are effectively blind spots for ShakeMap, which is limited there to predicted ground shaking. While predicted shaking is informative (Worden et al., 2020), shaking estimates are no substitute for station ground-truth constraints and rapid macroseismic reporting. Naturally, the unequal availability of technologies impacts populations worldwide, affecting access not only to such citizen-science participation but also to banking, healthcare, employment opportunities, and many social services. Although we are wed to existing technologies, with their inherent accessibility limitations, there are opportunities for improvement. For instance, we are currently performing demographic and internet analytics to examine why DYFI response rates, when corrected for observers' intensities and population density, vary so greatly worldwide. Based on those results, we aim to determine what correlative factors are limiting response and to propose outreach and technical improvements for increasing DYFI access and equity in its use worldwide (Wald, Quitoriano, Goltz, et al., 2022). We have been working to increase the accessibility of DYFI, primarily by supporting more languages and through outreach via the media and social media after events of note worldwide.
Of course, disparities among seismic networks contribute to inequity, but shaking risk is primarily dominated by the more intractable vulnerabilities of local building stocks. Differing response efforts (in terms of both local response and international aid) can significantly compound those losses and risks. Further, disparities in financial backstops, such as savings, insurance, access to aid, and communities' coping capacities, often compound inequities.
In the short term, continuing to improve science communication to the populations most at risk is important for better describing risk and the strategies that could reduce it. The use and understanding of macroseismic intensity provide an alternative to the concept of earthquake magnitude, which can be confusing to the public (e.g., Celsi et al., 2005). Disproportionate attention spent explaining the meaning of an earthquake's magnitude takes away from more intuitive descriptions of shaking intensity. An ongoing effort to harmonize the Modified Mercalli Intensity scale used in the United States with the more modern European Macroseismic Scale (EMS-98) is a vital step toward developing an International Macroseismic Scale (IMS; Wald, Quitoriano, Goded, et al., 2022; Wald et al., 2023). Developing and instituting an IMS would greatly facilitate the standardization of post-earthquake damage data collection and intensity assignments and, importantly, allow for a more uniform presentation of earthquake information the world over.

Final Words
I have described the primary ingredients and innovations that led to more robust, timely, and informative post-earthquake information systems. Strategically, many products suitable for real-time applications have substantive added benefits, contributing to earthquake mitigation efforts through scenario planning exercises and a more quantitative general understanding of the impacts of both past and potential future events. The process of scientific and product development described here also provides lessons about the successes and challenges associated with diffusing these innovations among a wide variety of users and uses, spanning earthquake response and management, financial decision-making and aid, and related humanitarian efforts.
Along the way, we took a "working backward" approach to iteratively developing user-centric designs for earthquake information products. We made a solid attempt to use familiar jargon and to introduce new technologies first to early adopters, who then became change agents. We also built solutions to problems through multidisciplinary collaboration, leveraging the foundational work and reputations of others while recognizing and filling the scientific and practical gaps on the way to success.
A pivotal early lesson for me was the importance of not striking out alone, or in a vacuum. Caltech's Seismo-Lab coffee hour was a way to throw out an idea and find out if it (a) was already solved decades ago, (b) was unsolvable, or (c) could provide insight or a solution to a problem that was either interesting or important. Before I dig deep into a problem for a year or two, it is nice to figure out whether it is a practical problem to tackle. Picking my battles wisely was a key lesson, and one I'm still learning. [My publication record is exceeded substantially by my never-to-be-published collection of draft manuscripts from abandoned projects.]
I have also recognized over the years that having a good idea is only about 5% of the challenge: implementation is the other 95%. Each product I have worked to develop has proved this to be the case. Vince Quitoriano helped develop the initial versions of both ShakeMap and DYFI. However, the long, hard work of further development, maintenance, and operation of these systems required considerably more staffing and expertise. Nevertheless, the early successes of ShakeMap and DYFI paved the way for resources to continue those efforts, as well as expanded institutional flexibility to explore other creative solutions, ultimately resulting in multiple downstream tools, including ShakeCast, PAGER, and, recently, the Ground Failure system.
Recalling and describing our successes and challenges along the way, and the many interactions and collaborations, has been a wonderful, nostalgic tour. For me, these collaborations and developments constitute the most rewarding period of my scientific career. Well, so far.

Figure 2. Vs 30 map for California with regression kriging, including Vs 30 measurements. From Thompson et al. (2014).

Figure 3. Scenario ShakeMap intensities for potential earthquakes in regions around the United States. The intensity color-coding is shown in Figure 1. The underlying basemap is the U.S. National Seismic Hazard Model of Petersen et al. (2020).

Figure 4. Historical-earthquake ShakeMap intensities for the Great 1906 M7.8 San Francisco earthquake (left; circles are macroseismic observations) and the 2011 M9.1 Tohoku, Japan, earthquake (right; triangles are seismic stations). Maps are not at the same scale.

Figure 5. Ingredients of PAGER national empirical fatality-model development: ShakeMaps for historical earthquakes, plus the population and losses for each event. The lower left image is from Jaiswal and Wald (2010).

Figure 6. PAGER fatality rates as a function of Modified Mercalli Intensity for selected countries. The Canada & U.S. reference is for the western United States, excluding California. Modified from Jaiswal and Wald (2010).

Figure 7. PAGER final alert example for the 30 October 2020 M7.0 earthquake affecting Greece and Turkey. The summary alert (top, orange) is the higher of the fatality and economic alerts (bold outlines), which contain the median estimate. Reported losses were 119 fatalities and $400M, mainly in Turkey (Akinci et al., 2021).

Figure 10. ASCII version of ShakeMap in 2003. The epicenter is depicted with an asterisk, and the numerical values indicate the intensity at each location.

Figure 11. Earthquake Information System depicting a subset of USGS earthquake information products and their dependencies. Solid white lines show required inputs; dashed lines indicate optional inputs.

Figure 12. Final PAGER summary alerts from September 2010 through the end of 2020. Color-coded alerts are the maximum of the estimated fatality and economic loss alert levels shown in Figure 1. Symbols are scaled in size by alert level, and higher alerts are plotted on top; not all event alerts are visible due to the large number of events.

Figure 13. Comparison of reported versus median estimated losses from September 2010 through the end of 2020. Color-coded boxes represent the PAGER alert ranges as indicated in Figure 7 for the final-version PAGER results: reported versus median estimated fatalities (left) and economic impact (right). After Wald, Quitoriano, et al. (2022).