Detailed data is welcome, but with a pinch of salt: Accuracy, precision, and uncertainty in flood inundation modeling



[1] New survey techniques provide a large amount of high-resolution data, which can be extremely valuable for flood inundation modeling. Such data availability raises the question of how to exploit their information content to effectively improve flood risk mapping and predictions. In this paper, we discuss a number of important issues that should be taken into account in flood modeling studies. These include the large number of uncertainty sources in model structure and available data; the difficulty of evaluating model results, given the scarcity of observed data; computational efficiency; and the false confidence that can be given by high-resolution outputs, as accuracy is not necessarily increased by higher precision. Finally, we briefly review and discuss a number of existing approaches, such as subgrid parameterization and roughness upscaling methods, which can be used to incorporate highly detailed data into flood inundation models, balancing efficiency and reliability.

1. Introduction: New Databases for Modeling Physical Processes

“…essentially, all models are wrong, but some are useful.”

(George E. P. Box)

[2] The development of new terrestrial and remote survey techniques provides a huge amount of high-resolution data for a wide range of climate, meteorological, and hydrological applications. Together with the constant advances in computational resources, these data sets are radically changing the use of computer models, increasing their complexity and range of applications [Beven, 2007]. For instance, meteorological and climate models can now be applied at kilometric grid scales across most of the globe [Daly, 2006], allowing a better representation of the effects of spatial heterogeneity on modeled dynamics [Wood et al., 2011]. Global flow routing models are also taking advantage of satellite observations of the storage and movement of surface water at the global scale and are now able to simulate processes such as floodplain inundation dynamics [Yamazaki et al., 2011].

[3] Flood inundation models are a good example of this general trend. In recent years, the traditional limitations posed by the scarcity of detailed and accurate topographic data and by the high demand for computational resources have been overcome [Bates, 2004; Di Baldassarre et al., 2011], and there has been a proliferation of scientific studies in this field (see Hunter et al. [2007] for a detailed review). Applications in the urban environment can especially benefit from new topographic data sources, such as laser altimetry (e.g., light detection and ranging (LiDAR)), aerial photogrammetry, and synthetic aperture radar (SAR) interferometry. These data sets can provide digital elevation models (DEMs) of 1 m resolution, or finer, with high horizontal and vertical accuracy (currently, vertical errors are approximately 10–20 cm [Fewtrell et al., 2011; Neal et al., 2011a; Chen et al., 2012]).

[4] The availability of high-resolution data raises a key issue: How can we make the most of their information content while balancing model complexity, efficiency, and reliability? On this question, some authors have warned against inflated expectations of realism and accuracy related to the use of high-resolution data and models. Daly [2006] cautioned against the tendency to equate resolution with realism in climate modeling, as climate-forcing factors assumed to be unimportant at coarser resolutions may become significant when the scale is refined. Guentchev et al. [2010] pointed out that developers of spatial climate data sets should always communicate the strengths and limitations of their products. Other papers document the propagation of data errors through spatially distributed hydrologic models, showing that finer spatial resolutions can magnify errors rather than smooth them out, since physical processes neglected at larger scales may play a dominant role at smaller scales [e.g., Beven, 2000; Bloeschl, 2006].

[5] Similar considerations can be made in the field of flood inundation modeling, which is the focus of this paper. In our opinion, the extreme precision of highly resolved models and data sets may lead nonexpert users to become overconfident in the model results, disregarding a number of issues that are of major importance for reliable flood analyses. We briefly discuss these issues here and review the existing approaches that can be used to incorporate high-resolution data into flood inundation models.

2. Accuracy and Precision: Uncertainty in Model Structure

[6] A review of the recent literature addressing flood inundation modeling shows some established trends. Especially when dealing with urban flooding, a common approach consists of refining the grid resolution according to the available topographic data. Different motivations can be found for such an approach. It is generally accepted that model resolution should be sufficiently refined to minimize errors in the schematization of the physical processes, such as the effect of topographic forcing on flow processes [Schubert et al., 2008]. Also, coarse meshes can induce errors due to numerical diffusion, smoothing out inertial effects [Hunter et al., 2007], which can be relevant especially in urban areas or in high-energy flow conditions [Soares-Frazão et al., 2008; Gallegos et al., 2009].

[7] In practice, especially for applications in urban environments, the optimal grid scale is still under discussion and different opinions on this issue have been expressed. Several authors [Schubert et al., 2008; Fewtrell et al., 2011; Schubert and Sanders, 2012] have agreed that mesh size should be related to the average dimension of buildings and roads in the test site. Fewtrell et al. [2008] and Gallegos et al. [2009] indicated a resolution of 5 m as the upper threshold, while according to subsequent studies, further mesh refinement (2 m or less) is necessary to represent small-scale features (e.g., narrow streets, road cambers, and curbs) and their influence on water depth and velocity distribution [Neal et al., 2011a; Fewtrell et al., 2011; Schubert and Sanders, 2012]. Considering current literature standards, in this paper we define as "low" or "coarse" a mesh resolution of 20 m or coarser (considering average cell size in the case of unstructured grids); as "high" or "fine" a mesh resolution between 5 and 2 m; and as "very fine" a mesh resolution below 2 m.

[8] In our opinion, the message that seems to emerge from many current research works is that "more information (in terms of mesh resolution and topographic detail) will result in better model performance." We believe that this approach, which can be termed "reductionist," may sometimes be misleading and generate confusion between the concepts of accuracy and precision. In the field of hydraulic modeling, model precision can be related to the resolution of the computational grid, where the variables of interest are computed, and to the detail of the governing equations. Model accuracy, on the other hand, is the ability of the model to correctly reproduce the variables of interest, for instance, an observed flood extent map.

[9] The two definitions only partially overlap. A certain level of precision is of course necessary for model reliability, but beyond some limit (depending on the case) an increase in precision does not necessarily imply greater accuracy. Different research works on flood models have shown that there is a minimum resolution below which further mesh refinement does not significantly improve the results [Gallegos et al., 2009; Fewtrell et al., 2011; Neal et al., 2011a]. In fact, resolution should be chosen in relation to model structure and complexity, which always have limitations.

[10] For instance, models that do not include inertial terms in the governing equations have a low sensitivity to small-scale features, meaning that near-field flow processes are smoothed out even when high-resolution meshes are used [Hunter et al., 2007; Neelz and Pender, 2010; Aricò et al., 2011; Dottori and Todini, 2012]. With these models, refined meshes may be useful to capture topographic forcing, but the improvement in accuracy can be lower than expected. On the other hand, the accuracy of zero-inertia models is comparable to that of models based on the fully dynamic shallow water equations when the flood event is characterized by gradually varied, subcritical flow (for a detailed discussion see Aricò et al. [2011], Neal et al. [2011b], and Prestininzi et al. [2011]).
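To recall which terms are at stake, the one-dimensional momentum balance of the shallow water (Saint-Venant) equations can be written schematically as

$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + g\,\frac{\partial h}{\partial x} + g\,(S_f - S_0) = 0,$$

where the first two terms are the local and convective inertia, h is the water depth, u the velocity, S_0 the bed slope, and S_f the friction slope. Zero-inertia (diffusive) models drop the first two terms, which is why near-field, inertia-dominated processes cannot be resolved however fine the mesh; this is the standard textbook decomposition rather than a formulation specific to the works cited here.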

[11] As a matter of fact, experiments on the interaction of high-energy flows with obstacles have shown that 2-D models (even those based on fully dynamic and shock-capturing schemes) cannot reproduce the details of flow processes, which are inherently 3-D, although the general flow processes are well reproduced [Soares-Frazão et al., 2008; Guinot, 2012]. Also, a number of works have pointed out that flow in road junctions would require 3-D modeling to be well represented [Mignot et al., 2006].

[12] Finally, it is important to note that finer mesh resolution requires a more complex parameterization of the model, in particular of flow resistance. For instance, in urban areas wall roughness [Chen et al., 2012] and a detailed parameterization of localized head losses [Soares-Frazão et al., 2008; Sanders et al., 2008; Guinot, 2012] should be considered. The choice of resistance parameters is a major source of uncertainty [Schubert et al., 2008; Schubert and Sanders, 2012], as it can affect flood extent, the timing of flood waves [Gallegos et al., 2009], and localized processes such as the location of hydraulic jumps [Mignot et al., 2006]. Moreover, resistance parameters are influenced by model structure, as they account for turbulence effects and dispersion processes not explicitly represented [Romanowicz and Beven, 2003; Di Baldassarre et al., 2011; Dottori and Todini, 2012; Aricò et al., 2011].

3. Data Uncertainty: Can We Model Everything?

[13] Apart from uncertainties and limitations in model structure, further uncertainty sources can affect the data sets used for model building and influence the simulation of flow dynamics [Romanowicz and Beven, 2003]. We summarize a number of these here, including those typical of urban environments, where high-resolution data are mostly used. While some of these uncertainty sources are generally addressed in research works (such as topographic errors and, less often, boundary conditions), other issues have so far received far less attention.

[14] 1. Despite the great advances in survey techniques mentioned in section 'Introduction: New Databases for Modeling Physical Processes', topographic data may still be a relevant source of uncertainty when data from ordinary survey techniques are used as geometrical input. On this point, Aricò et al. [2011] observed that sensitivity to parameter errors depends on model complexity, with zero-inertia models being less sensitive.

[15] 2. Boundary conditions, in particular inflows, are a well-known source of uncertainty [e.g., Brandimarte and Di Baldassarre, 2012]. In flood events due to dyke or river bank overtopping, the reconstruction and location of flooding hydrographs are difficult, and the related uncertainty may severely affect all the resulting simulations [Romanowicz and Beven, 2003; Bates, 2004; Mignot et al., 2006; Masoero et al., 2012]. Also, river bank overtopping may be increased by debris buildups at bridges [Neal et al., 2009].

[16] 3. In urban flood events, reproducing the complex interactions between the subsurface drainage network and surface flow is another difficult task. Different authors have investigated this issue using models that couple the sewer system with the surface flow [Hsu et al., 2000; Smith, 2006; Gallegos et al., 2009; Neelz and Pender, 2010]. However, during flood events drainage systems rarely work under optimal conditions and may be subject to unpredictable local failures such as the obstruction of manholes and pipes (we believe it difficult, if not impossible, to make any sensible exact prediction of the time lag between the surcharging of the sewer system and the occurrence of major flooding). As a result, urban flooding may occur through the combined effect of sewer surcharging and surface flooding, adding further uncertainty to the reconstruction of flooding mechanisms, as mentioned in point 2 [Neal et al., 2009].

[17] 4. During flood events, water flows typically interact with different small-scale features, both fixed and moving. These include drainage ditches, small embankments [Wright et al., 2008; Bates et al., 2006; Hailemariam et al., 2013], and walls [Yu and Lane, 2006a] in rural areas; cars, fences [Mignot et al., 2006], road cambers, and curbs [Fewtrell et al., 2011] in urban landscapes. Some features related to microtopography may be included in the model grid when high-resolution data are available [Yu and Lane, 2006b; Fewtrell et al., 2011], while the effect of vegetation can be represented through resistance parameters based on terrain heights [Schubert et al., 2008]. By contrast, minor fixed obstacles, cars, and other vehicles are much more difficult to reproduce and are generally not considered in the model grid, as they would be impossible to characterize. However, the problem of their influence on local flow conditions should be considered, especially when very fine mesh resolutions are used (i.e., 2 m or less). For instance, cars may partially obstruct narrow streets and contribute to forming debris buildups that can affect the overall flow processes [Mignot et al., 2006]. This is especially dangerous in urban flood events characterized by high-energy flow, as demonstrated by the recent catastrophic flood event of November 2011 in Genoa, Italy [Cavallo et al., 2012] (and, again, we believe it difficult, if not impossible, to predict where cars will be parked at the time of flooding).

[18] 5. In urban areas, the interaction of buildings with flow processes is complex. Building walls act as impervious obstacles, modifying and deflecting flow paths [Chen et al., 2012; Schubert and Sanders, 2012]. On the other hand, as flooding progresses buildings also behave as porous media: water normally enters buildings and fills them, producing levels that tend to be similar to those outside [Mignot et al., 2006; Schubert et al., 2008; Dottori and Todini, 2012]. Therefore, their representation in the model grid is not straightforward, as both these processes should be considered.

[19] 6. Especially in high-energy flow conditions, transport and erosion processes, such as debris buildup, scour, and the damage and collapse of buildings, can modify the configuration of the study area and affect flow dynamics [Mignot et al., 2006; Gallegos et al., 2009].

[20] In our opinion, all the possible, case-specific sources of uncertainty should be at least mentioned and, when necessary, discussed and quantified using appropriate strategies. Some of them, such as DTM errors, can be quantified in many cases. Alternatively, the analysis of multiple scenarios can be a good solution. For instance, Mignot et al. [2006] analyzed several flood scenarios in an urban environment, taking into account the variability of inflow hydrographs, downstream boundary conditions, water storage inside large buildings and yards, the influence of roughness, and the effect of debris buildups. Schubert et al. [2008] evaluated a method for assessing flow resistance based on terrain heights, and assessed inflow variability. We think such analyses should be applied more widely in flood inundation modeling, complementing already established methods for assessing model uncertainty, such as sensitivity analysis.
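In its simplest form, such a multiple-scenario analysis can be organized as a Monte Carlo loop over the uncertain inputs. The sketch below is a minimal illustration, not a prescription: run_flood_model is a hypothetical wrapper around any 2-D inundation model, and the sampling ranges are purely indicative.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_scenarios = 100
results = []

for _ in range(n_scenarios):
    # Sample uncertain inputs within plausible ranges (values are illustrative)
    manning_n = rng.uniform(0.02, 0.10)    # floodplain Manning's n [s m^-1/3]
    inflow_scale = rng.normal(1.0, 0.15)   # multiplier on the inflow hydrograph
    dem_shift = rng.normal(0.0, 0.15)      # vertical DEM error [m] (~15 cm std)

    # Hypothetical model wrapper: assumed to return a 2-D array of
    # maximum water depths for the simulated event.
    depth = run_flood_model(manning=manning_n,
                            inflow_factor=inflow_scale,
                            dem_offset=dem_shift)
    results.append(depth)

# Summarize the ensemble as a probability of inundation per cell
# (wet if deeper than 10 cm), rather than a single deterministic map.
stack = np.stack(results)
inundation_probability = (stack > 0.1).mean(axis=0)
```

The output of such an analysis is a probabilistic flood map, which communicates uncertainty directly instead of hiding it behind a single high-precision result.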

4. The Problem of Model Evaluation

[21] The evaluation of model results is currently one of the most crucial issues in flood inundation modeling. In the scientific community, there is still no broad consensus on the reliability and accuracy criteria for hydraulic models [Hunter et al., 2007]. However, it is recognized that the calibration and validation of any 2-D hydraulic model require data of different types [Apel et al., 2009], with adequate spatial and temporal coverage to: (i) enable the investigation of all the variables of interest (e.g., flood extent, water depth, velocity field, time of occurrence, duration of the event) and (ii) discriminate between different model structures [Hunter et al., 2007]. The problem is that we are still unable to quantify the size a data set of observations should have to achieve these objectives. In fact, while topographic data have reached high levels of detail and accuracy and can easily be found in many areas, the current availability of observed flood data is much more limited [Apel et al., 2009]. In addition, observed data sets are generally affected by relevant uncertainty, especially in flood events involving urban areas. For instance, SAR images, which are currently used to derive flood extent maps, are difficult to process in urban areas [Neal et al., 2009], although progress has recently been made [Giustarini et al., 2012; Mason et al., 2012]. Aerial images are generally more reliable, but less often available, as they must be obtained by means of a site-specific survey [Schumann et al., 2011].

[22] Measurements of high water marks and wrack lines can offer a valuable alternative, but the spatial distribution of this type of ground data and the related uncertainty (these data typically have a vertical accuracy of around 50 cm [Neal et al., 2009; Horritt et al., 2010]) still leave a significant degree of uncertainty in model results even in data-rich events. As a matter of fact, highly accurate flood maps, either from ground surveys [Gallegos et al., 2009] or from high-resolution SAR images [Giustarini et al., 2012], are still rare, and the resolution of evaluation data is often much lower than the model resolution [Di Baldassarre et al., 2011]. Finally, observed data generally allow for evaluating only some of the variables of interest, such as water depth and flood extent; the velocity field, for instance, is difficult to estimate, and point measurements are rarely available [Schubert and Sanders, 2012].

[23] In view of this framework, care is required when high-resolution outputs are compared with low-resolution and/or scarce evaluation data, as these data sets often allow for only a partial evaluation of model performance. A few exceptions to this general framework can be found in the literature. In a number of data-rich sites, researchers were able to use a series of flood images through time [Bates et al., 2006; Horritt et al., 2007; Wright et al., 2008; Neal et al., 2011a; Schumann et al., 2011], multiple observations of maximum water levels [Neal et al., 2009], or a combination of these data with flood extent maps and/or streamflow data [Hunter et al., 2005; Werner et al., 2005; Apel et al., 2009; Gallegos et al., 2009]. Although the findings of these works were not presented as general rules of thumb, in these cases the amount of data was sufficient to identify the optimal grid resolution [Gallegos et al., 2009; Neal et al., 2011a]; to determine appropriate model structure and complexity [Werner et al., 2005; Apel et al., 2009; Prestininzi et al., 2011]; to assess observation errors [Neal et al., 2009]; and to provide information on flood dynamics [Bates et al., 2006; Schumann et al., 2011].
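To make such comparisons concrete, a fit measure commonly used in this literature scores a binary (wet/dry) simulated extent against an observed one as F = A/(A + B + C), where A is the correctly predicted flooded area, B the overprediction, and C the underprediction. The sketch below computes it for two rasters assumed to be co-registered on the same grid; the array names in the usage comment are hypothetical.

```python
import numpy as np

def flood_fit(observed_wet: np.ndarray, modeled_wet: np.ndarray) -> float:
    """Fit statistic F = A / (A + B + C) for binary flood extent maps.

    A: cells wet in both maps (correct prediction)
    B: cells wet in the model only (overprediction)
    C: cells wet in the observation only (underprediction)
    Both inputs are boolean arrays on the same grid.
    """
    a = np.sum(observed_wet & modeled_wet)
    b = np.sum(~observed_wet & modeled_wet)
    c = np.sum(observed_wet & ~modeled_wet)
    return float(a) / float(a + b + c)

# Hypothetical usage: threshold simulated depths at 10 cm and compare with a
# SAR-derived extent map resampled to the model grid.
# f_score = flood_fit(sar_extent > 0, simulated_depth > 0.1)
```

F equals 1 for a perfect match and penalizes both over- and underprediction; note that resampling the coarser data set onto the finer grid (or vice versa) is itself a modeling choice that affects the score.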

5. Do We Really Need 1 m Resolution in Flood Mapping? Would We Learn Something?

[24] Even when theoretical research works are carried out, modelers should always bear in mind the practical use of their results. Hydraulic models can produce flood inundation maps with extremely high precision, but these outputs need to be aggregated to a coarser scale to obtain readable maps that can be useful for, say, evacuation plans or risk assessment. Indeed, this loss of modeling detail can be advisable, as in our opinion the use of excessively high-resolution outputs can generate false confidence in the obtained results. This can push modelers and end users to propose, and require, a deterministic flood mapping approach, disregarding the underlying sources of uncertainty in models and data sets, instead of using models as communication tools to promote transparent and participative flood management processes [Bloeschl, 2006]. In this sense, we believe modelers should take special care in communicating uncertainty, according to the different needs of end users (e.g., the general public, policymakers and public administrations, stakeholders).

[25] Moreover, although limitations in computational resources are progressively relaxing, a careful selection of mesh size and modeling detail is still crucial to improve computational efficiency. Halving the grid size may increase processing time by about a factor of 10 (R. Price, personal communication, 2012). Schubert et al. [2008] reported an increase in run time of 200× to 400× when going from 5 to 0.8 m resolution, depending on the model formulation; comparable values were reported in other studies [Neelz and Pender, 2010; Neal et al., 2011a]. On the other hand, even the most recent parallelization techniques can reduce run times to between 1/10 and 1/100 of the serial time [Neal et al., 2010]. An appropriate choice of grid size is particularly important in applications where run time is a crucial factor, for instance when integrating 2-D models in real-time flood forecasting systems. More generally, we suggest that available computational resources should be used to run multiple simulations at an appropriate mesh resolution to evaluate uncertainty sources, as discussed in section 'Data Uncertainty: Can We Model Everything?'.
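The roughly tenfold cost of halving the grid size follows from a simple scaling argument, assuming an explicit 2-D scheme whose time step is bound by a CFL-type condition:

$$N_{\mathrm{cells}} \propto \Delta x^{-2}, \qquad \Delta t \propto \Delta x, \qquad \mathrm{cost} \propto \frac{N_{\mathrm{cells}}}{\Delta t} \propto \Delta x^{-3}.$$

Halving \(\Delta x\) thus multiplies the run time by about \(2^3 = 8\), close to the factor of 10 quoted above; refining from 5 to 0.8 m gives \((5/0.8)^3 \approx 244\), consistent with the 200×–400× increase reported by Schubert et al. [2008]. The exponent differs for implicit or adaptive schemes, so this should be read as an order-of-magnitude argument rather than a universal law.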

[26] Finally, it should be considered that the availability of high-resolution topography is still limited to specific areas in developed countries. For the majority of urban areas worldwide, only relatively coarse data can be used for model building. Even where highly detailed topographic surveys are available, their direct use in high-resolution grids may not be feasible for large-scale flood inundation analyses (e.g., events involving both rural and urban areas), both in terms of model preparation and computational burden [Apel et al., 2009; Neal et al., 2009]. In this case, we think that modelers should prefer approaches that can consistently incorporate data of different resolutions into flood modeling.

6. Keep It as Simple as Possible: The Use of High-Resolution Data in Flood Models

[27] Many recent research works have focused on the integration of high-resolution data and models. A comprehensive review is difficult, as several methods have been proposed in the literature (readers should refer to the cited works for detailed descriptions), and it would be beyond the scope of this paper; however, it is possible to identify some general approaches.

[28] A first approach consists of reproducing small-scale features through model mesh refinement, according to the available topographic detail. This grid refining (GR) approach is especially suited to representing complex urban topography in combination with unstructured grids. Buildings can be represented using the building-block method, whereby spatially distributed ground elevation data are raised to the heights of rooftops [Schubert et al., 2008; Fewtrell et al., 2011; Schubert and Sanders, 2012]. Alternatively, the computational mesh can be generated with holes aligned with building walls, which is called the building-hole method [Schubert and Sanders, 2012]. A substantially different approach is based on roughness upscaling (RU) methods; in this case, the influence of microtopography is represented as an additional resistance on flow through a proper upscaling of the resistance parameters [Neelz and Pender, 2007; Yu and Lane, 2006a, 2006b; Soares-Frazão et al., 2008; Gallegos et al., 2009; Schubert and Sanders, 2012]. The increase in roughness may be obtained by analyzing terrain feature heights, such as vegetation [Schubert et al., 2008], or according to existing databases based on land use [Gallegos et al., 2009]. However, with this approach, resistance parameters tend to become conceptual parameters, meaning that model structure and grid resolution should be considered when assigning parameter values, as mentioned in section 'Accuracy and Precision: Uncertainty in Model Structure'.
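As a minimal illustration of the RU idea, an effective roughness for a coarse cell can be built from the fractions of the cell occupied by different land covers. The area-weighted scheme and the Manning coefficients below are purely illustrative assumptions, not the method of any of the works cited above:

```python
# Illustrative Manning's n values per land-cover class [s m^-1/3];
# in practice these come from calibration or literature tables, and
# representing buildings through very high roughness is one option among several.
MANNING_N = {"road": 0.015, "grass": 0.035, "trees": 0.10, "buildings": 0.30}

def composite_manning(cover_fractions: dict[str, float]) -> float:
    """Area-weighted effective Manning's n for one coarse grid cell.

    cover_fractions maps land-cover class -> fraction of the cell area;
    fractions are assumed to sum to 1.
    """
    return sum(MANNING_N[cls] * frac for cls, frac in cover_fractions.items())

# Example: a coarse cell that is 40% road, 30% grass, and 30% buildings.
n_eff = composite_manning({"road": 0.4, "grass": 0.3, "buildings": 0.3})
```

Whatever the weighting, the resulting value is a grid- and model-dependent conceptual parameter, which is precisely why it should be recalibrated when resolution or model structure changes.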

[29] Finally, it is possible to group together methods based on different subgrid (SG) parameterizations. These methods aim to preserve the available topographic detail without reproducing it explicitly in the modeling mesh. Some authors have developed detailed techniques to represent the effect of small-scale features on storage and conveyance, including anisotropy [Sanders et al., 2008; Guinot, 2012; Schubert and Sanders, 2012; Neal et al., 2012], while other authors have proposed the use of an isotropic porosity parameter, in analogy to groundwater flow [Guinot and Soares-Frazão, 2006; Soares-Frazão et al., 2008; Cea and Vázquez-Cendón, 2010]. In all these works, 2-D models based on the fully dynamic shallow water equations (sometimes including shock treatments) were applied, reformulating the governing equations to account for the subgrid parameterization. This included additional head losses generated by the nonresolved obstructions and additional distributed losses generated by longer flow paths. On the other hand, other authors have used a geometrical approach to develop SG treatments within 2-D models based on diffusive equations. High-resolution data are used to develop an explicit parameterization of subgrid-scale topographic variability, modifying storage and conveyance parameters in the model equations [Yu and Lane, 2006b; McMillan and Brasington, 2007; Yu and Lane, 2011; Chen et al., 2012; Dottori and Todini, 2012]. It is important to note that SG techniques can also be applied to reproduce microtopography in rural areas, for instance to resolve drainage networks and embankments.
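In the isotropic porosity approach, for example, a porosity φ(x, y), representing the fraction of the cell plan area available to water, enters the governing equations directly. Schematically (see Guinot and Soares-Frazão [2006] for the full derivation), the 2-D continuity equation becomes

$$\frac{\partial (\phi h)}{\partial t} + \frac{\partial (\phi h u)}{\partial x} + \frac{\partial (\phi h v)}{\partial y} = 0,$$

with analogous porosity factors, plus additional drag terms for the unresolved obstacles, appearing in the momentum equations. Storage and conveyance are thus reduced cell by cell without any mesh refinement.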

[30] Several authors have carried out comprehensive comparisons of these alternative methods, pointing out the advantages and disadvantages of each. The increase in model complexity required by each approach was recently evaluated by Schubert and Sanders [2012]. According to these authors, RU methods are the simplest to apply, as in practice they just require the calibration of resistance parameters; SG approaches require a considerable grid processing phase to compute porosity or subgrid parameters; the explicit reproduction of small-scale features by GR methods lies in between these two extremes. The precision attainable by these methods varies. GR methods allow for a detailed representation of microtopography and local flow features, for instance the velocity field, but they require high mesh resolutions to provide good accuracy, and performance can decrease quickly with grid coarsening [Neal et al., 2011b; Schubert and Sanders, 2012]. Considering this, in our opinion GR methods may be useful in small or medium-scale analyses (up to 5–10 km2) and when high modeling detail is required.

[31] On the other hand, SG methods cannot represent local flow processes in detail, especially when isotropic porosity approaches are used and high-energy flows occur [Guinot and Soares-Frazão, 2006; Sanders et al., 2008]; however, overall flow processes are well reproduced, and when a specific upscaling of resistance parameters is applied, the accuracy is well within the uncertainty of observed data [Sanders et al., 2008; Soares-Frazão et al., 2008; Guinot, 2012]. Similar results were also observed in simulations with RU approaches [Soares-Frazão et al., 2008; Schubert and Sanders, 2012], suggesting that RU and SG methods can be especially useful in combination with zero-inertia models (see section 'Accuracy and Precision: Uncertainty in Model Structure'). Moreover, SG and RU methods are less sensitive to grid resolution, as model precision is partially decoupled from mesh resolution. That is, coarser resolutions than those needed by GR approaches can be used without significant loss of accuracy in model outputs, improving computational efficiency [Soares-Frazão et al., 2008; Sanders et al., 2008; Cea and Vázquez-Cendón, 2010; Yu and Lane, 2011]. It should also be noted that SG methods (adopting the porosity approach) and RU methods can be applied where only coarse data sets, like land use maps or aerial or satellite images, are available to describe urban areas. Therefore, these approaches are also well suited to large-scale applications. Last but not least, SG and RU approaches offer a further advantage, in our opinion: they force modelers to reflect on the physics, to focus on the essential processes of interest, and to balance complexity and reliability.

7. Conclusions

[32] New survey techniques provide a large amount of high-resolution data, and we think that these data should form the basis of hydraulic modeling whenever they are available. However, the issue of how to use these valuable data to extract the right information for modeling purposes requires a more in-depth discussion, which has motivated this opinion paper.

[33] We believe that high expectations of realism regarding these data sets should be tempered, as some important issues have to be considered. Result accuracy is not necessarily increased by higher resolution, as different limitations and uncertainty sources will always affect flood inundation modeling. Possible uncertainty sources should always be identified and evaluated, for instance through multiple scenario analyses. Modelers should bear in mind the practical use of their results, which is to provide reliable flood mapping and inundation extent predictions. To this end, too much detail is often not only unnecessary but also potentially misleading, as it can induce overconfidence in numerical outputs and lead to a "reductionist" approach in flood modeling. In other words, we share the Keynesian view that it is better to be "approximately right, rather than precisely wrong." Moreover, there is a consensus that models should also serve as communication tools in flood risk management processes [Bloeschl, 2006]. Sophisticated high-resolution models might be dangerous from this viewpoint, as the false sense of confidence derived from their spuriously precise results might lead to wrong decisions, which should instead be based mainly on transparent and participative processes.

[34] We think, therefore, that the choice of modeling resolution should be part of a wider analysis aimed at selecting the appropriate model complexity and detail for each specific case study [Hunter et al., 2007; Neelz and Pender, 2010; Neal et al., 2011b]. Within this framework, the method used to incorporate the available topographic data into the model structure is the key to achieving the best compromise between detail (or precision), maximum expected accuracy (which depends more on the available data set than on mesh resolution), and computational efficiency. In our opinion, the use of subgrid treatments, combined with an appropriate upscaling of resistance parameters, would in many instances be a suitable approach to better exploit the available topographic detail, while remaining consistent with the uncertainty related to model structure and to the other input data available for model building and evaluation.


[35] The authors would like to thank Roland Price for his valuable comments and suggestions. The editor Hoshin Gupta, Paul Bates, Hilary McMillan, and three anonymous reviewers are also acknowledged for their valuable criticisms, comments, and suggestions on an early version of this paper. Giuliano Di Baldassarre was partially supported by the EC FP7 Project KULTURisk: "Knowledge-based approach to develop a cULTUre of Risk prevention" (grant agreement 265280) for the development of this research work.