This study began as a meta-analysis of modern wind turbines, focusing on greenhouse gas (GHG) emissions intensity. The intended outcomes of the meta-analysis included applying new impact assessment methods for calculating global warming potential and assessing the effect of turbine size on GHG emissions intensity. However, as we undertook what was hoped to be a comprehensive but relatively straightforward meta-analysis, we found that the lack of transparency in reporting across many of the reviewed studies impeded our analysis. Effective meta-analyses require that system boundaries and analysis parameters be made commensurate across studies prior to comparisons of systems, and that life cycle assessment (LCA) results be presented with enough granularity that they can be reinterpreted when new understanding or knowledge becomes available.
As meta-analyses of LCAs become increasingly important in the context of renewable energy technologies and the policies that influence their adoption, it is necessary to establish clear methods for assessing reporting transparency and conventions for reporting LCAs that support meta-analysis.
In this study we present guidelines for assessing reporting transparency that anticipate what is needed for meta-analyses of LCAs applied to renewable energy technologies. While concerns over transparency in LCA have long been reported, and some methods have been proposed to address these concerns, they have not focused on the requirements for performing meta-analysis.
Summary of the Literature
The focus of this study was originally the use of meta-analysis to identify life cycle impact trends with respect to wind turbine size. We were particularly interested in turbines exceeding 1 megawatt (MW), as trends reported by previous meta-analyses have been inconclusive for turbines larger than 1 MW.1
All wind power LCAs, and discussions of those studies, published since 1998 were included in the initial review. The reviewed studies are listed in the Supporting Information available on the Journal's Web site. Of the 39 studies reviewed, 18 provided original LCA data for wind turbines in the 1 to 5 MW range.
LCAs for wind turbines typically have one of the following goals: comparing two sizes of wind turbine (e.g., Crawford 2009; Lee and Tzeng 2008); comparing wind energy to other renewable energy sources (e.g., Varun et al. 2009); or performing a sensitivity analysis of the effect of a parameter other than turbine size, such as transport distance, on life cycle performance (e.g., Tremeac and Meunier 2009). Two meta-analyses (Kubiszewski et al. 2010; Lenzen and Munksgaard 2002) are also part of the existing wind turbine LCA literature; they are discussed below.
Lenzen and Munksgaard (2002) performed a meta-analysis to explain the variability in LCA results for wind turbines by surveying 72 studies published over the two decades preceding the study, from 1981 to 2001. The turbine sizes ranged from 0.3 kilowatts (kW) to 6.6 MW.2 The study found energy intensity ranged from 0.012 to 1.016 kilowatt-hour input per kilowatt-hour of electricity generated (kWhin/kWhel) before the wind load was normalized, demonstrating significant variability in energy intensity and carbon dioxide (CO2) intensity across the wind turbine LCAs.3
Lenzen and Munksgaard then normalized the data for lifetime electrical output by assuming a 20-year lifetime and a load factor, or the percent of nameplate capacity achieved, of 25%. After this normalization the energy intensity was still found to vary by an order of magnitude from 0.014 to 0.15 kWhin/kWhel. Some of this variation is attributed to economies of scale, and though the results show substantial scatter, their regression analysis indicates that smaller wind turbines require more life cycle energy per unit power.
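The normalization described above can be sketched in a few lines. This is a minimal illustration of the arithmetic (20-year lifetime, 25% load factor), not the authors' actual procedure; the turbine rating and life cycle energy input below are hypothetical, not values from any reviewed study.

```python
# Normalize a reported life cycle energy input to a common lifetime and load
# factor, yielding energy intensity in kWh_in per kWh_el.
# All numeric inputs in the example call are illustrative only.

def lifetime_output_kwh(rated_kw: float, lifetime_years: float = 20.0,
                        load_factor: float = 0.25) -> float:
    """Lifetime electrical output (kWh) at an assumed load factor."""
    hours = lifetime_years * 365 * 24
    return rated_kw * load_factor * hours

def energy_intensity(life_cycle_energy_in_kwh: float, rated_kw: float) -> float:
    """Energy intensity (kWh_in/kWh_el) under the normalized assumptions."""
    return life_cycle_energy_in_kwh / lifetime_output_kwh(rated_kw)

# Example: a hypothetical 1,000-kW turbine requiring 2.5 GWh of life cycle
# energy input yields an intensity of about 0.057 kWh_in/kWh_el.
print(round(energy_intensity(2.5e6, 1000.0), 4))  # → 0.0571
```

An intensity computed this way falls within the 0.014 to 0.15 kWh_in/kWh_el range reported after normalization.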
To account for the scatter, Lenzen and Munksgaard identify procedural differences as well as sources of real differences across studies and turbine designs. The procedural differences among the studies are (1) energy intensity of the materials; (2) the scope and system boundaries of the studies, referred to as “breadth”; and (3) the analysis methodology, referred to as “depth.” The parameters that seem to have a real effect on the energy and CO2 intensity of the turbines, in addition to turbine size economies of scale, are (1) the country of manufacture, (2) the recycling and end of life treatment, (3) tower material composition, and (4) the fuel or electricity grid mix used in production.
Lenzen and Munksgaard's study emphasizes that useful comparisons in terms of energy intensity cannot currently be made between various power generation technologies because the uncertainty in these results exceeds the differences among average results. Their study calls for an improvement in the depth and breadth of studies (as defined above) in order to standardize the way that turbine energy intensity is calculated. They also advocate for the use of hybrid LCA (a combination of economic input-output LCA and process-based LCA) to reduce system boundary differences.
Kubiszewski and colleagues (2010) published a meta-analysis of the net energy return based on a survey of wind turbine literature, including 119 turbines from 50 studies published between 1977 and 2007. The turbine sizes ranged from 0.0003 to 7.2 MW. While Lenzen and Munksgaard reported results in terms of energy intensity, Kubiszewski and colleagues used the inverse metric, energy return on investment (EROI). The EROI is the ratio of energy delivered to energy input over the life cycle. The average EROI for all studies was found to be 25.2 with a standard deviation of 22.3.
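The relationship between the two metrics can be shown directly: EROI is the reciprocal of energy intensity. The sketch below uses hypothetical lifetime totals, not data from the surveyed studies.

```python
# EROI is the ratio of lifetime energy delivered to life cycle energy input,
# i.e., the inverse of the energy intensity metric used by Lenzen and
# Munksgaard. Numbers below are illustrative only.

def eroi(energy_delivered_kwh: float, energy_input_kwh: float) -> float:
    return energy_delivered_kwh / energy_input_kwh

def energy_intensity(energy_input_kwh: float, energy_delivered_kwh: float) -> float:
    return energy_input_kwh / energy_delivered_kwh

delivered, invested = 43_800_000.0, 2_500_000.0  # hypothetical lifetime totals
# The two metrics are reciprocals of one another.
assert abs(eroi(delivered, invested) * energy_intensity(invested, delivered) - 1.0) < 1e-12
print(round(eroi(delivered, invested), 2))  # → 17.52
```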
Kubiszewski and colleagues added 47 studies to those first evaluated by Lenzen and Munksgaard in 2002. In addition, their study distinguished between conceptual and operational studies. This marks a departure from Lenzen and Munksgaard's normalization process and emphasizes studies that report actual wind data. It is worth noting that Lenzen and Munksgaard's normalization intentionally eliminated these actual data, as their study normalized the load factor for all turbines to 25%. One shortcoming of Lenzen and Munksgaard's approach to normalization is that it eliminates the effects of increasing wind speeds at elevated heights, which can be one of the main advantages of taller wind turbines.
When Kubiszewski and colleagues removed conceptual studies from the meta-analysis, leaving only operational studies, the average EROI was 19.18 with a standard deviation of 13.7, based on 60 studies. Kubiszewski and colleagues could only show trends for EROI with respect to power rating for turbines rated under 1 MW. For these turbines, EROI increased with power rating; suggested explanations for this trend were economies of scale and larger rotor diameter.
Kubiszewski and colleagues sought to plot trends for the response of EROI to a turbine's power rating, rotor diameter, and wind speed for all turbine sizes. Unfortunately, it was only possible to plot these trends for turbines of less than 1 MW because the study concluded that LCAs of turbines greater than 1 MW lacked sufficient data or contained unreliable data. Kubiszewski and colleagues eliminated 25% of the 119 studies they reviewed, including 14% of the operational studies and 22% of the studies that they had added to supplement those surveyed by Lenzen and Munksgaard. No explanation was given as to why the data were considered unreliable. The conclusion that studies lacked sufficient data or used unreliable data can be traced, in part, to a lack of transparency in reporting modeling inputs and outputs.
Both Kubiszewski and colleagues and Lenzen and Munksgaard noted considerable scatter for the life cycle performance of wind turbines rated at 1 MW and larger. A meta-analysis would be one way to assess whether this scatter is caused by real differences in performance or is caused by differences in how LCA methods are implemented. However, because study reporting was not transparent enough, meta-analysis of turbines rated at 1 MW or greater could not be implemented, and a key question about wind power—namely whether economies of scale are attainable for energy and environmental performance for large turbines—remains unanswered.
Challenges for Conducting Meta-Analyses
The original goals of this study were (1) to evaluate how the GHG emissions intensity of wind turbines trended with turbine size, holding environmental conditions for wind turbine installations constant; and (2) to reinterpret study results for GHG emissions intensity with recent work on the role of emissions timing in determining the global warming effect of GHG emissions (Kendall and Chang 2009; Levasseur et al. 2010; O’Hare et al. 2009). This work on emissions timing shows that typical emissions intensity reporting (e.g., CO2-equivalents per megajoule) for renewable energy technologies may underestimate global warming potential by 40% to 50% for capital-intensive projects (which concentrate the energy and emissions investment in the manufacturing and construction stage) such as wind power installations. The parameters under investigation were to be turbine size, geographic location, and end of life treatment—all factors that vary more between studies than within studies.
As demonstrated by Farrell and colleagues (2006), developing commensurate system boundaries among studies is critical for performing an effective meta-analysis. For studies that clearly do not consider particular processes or life cycle stages—for example, many wind turbine LCAs do not assess the decommissioning stage—data from studies that include this stage can be used to create similar system boundaries across studies. However, to adjust system boundaries in a meta-analysis, all studies must first clearly report what boundaries were used and describe key assumptions. The most common differences in system boundaries for wind turbines were exclusion of decommissioning, exclusion of construction processes, and incomplete analysis of a life cycle stage. However, in some cases identifying differing system boundaries is not possible due to a lack of detail and transparency in reporting. We label this problem a lack of qualitative input transparency.
Inputs may also lack quantitative transparency, meaning that the actual inputs to the system evaluated are not reported, or are reported with insufficient detail. In this case, a lack of transparency prevents replication or recalculation of a study when new or more relevant data are available, and thus may prevent a meaningful meta-analysis if data sources cannot be made commensurate. We refer to this problem as a lack of quantitative input transparency.
Clearly, transparency in modeling outputs is also important for performing meta-analysis. The granularity of reported outputs, or quantitative output transparency, can either facilitate or obstruct reinterpretation of a study's results. For example, summing GHG emissions across all life cycle stages precludes a meta-analysis that considers emissions timing, which has only recently been recognized as a notable problem in LCA impact assessment methods. Summing across life cycle stages also precludes changes to a particular life cycle stage, such as adjusting wind ratings to develop commensurate environmental conditions for comparing wind turbine technologies.
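The point about granularity can be made concrete with a small sketch: given a stage-resolved inventory, a timing-aware weighting can still be applied after publication, whereas a single summed total cannot be reweighted. The inventory values, stage timings, and linear weighting below are entirely hypothetical; this is not the impact assessment method of Kendall and Chang (2009) or Levasseur and colleagues (2010).

```python
# Why stage-level reporting matters: emissions resolved by life cycle stage
# (and hence by approximate timing) permit recalculation under a
# timing-aware scheme; a summed single number does not.
# All values and the weighting function are hypothetical illustrations.

stage_emissions = {            # kg CO2-eq per functional unit, by stage
    "manufacturing": 800.0,    # year 0
    "construction": 150.0,     # year 0
    "operation": 50.0,         # spread over years 1-20 (treated as year 10)
    "decommissioning": 30.0,   # year 20
}
stage_year = {"manufacturing": 0, "construction": 0,
              "operation": 10, "decommissioning": 20}

def simple_total(emissions: dict) -> float:
    """What a summed, single-number report preserves."""
    return sum(emissions.values())

def time_weighted_total(emissions: dict, years: dict,
                        horizon: int = 100) -> float:
    """Illustrative linear weighting: earlier emissions act over more of
    the analytical horizon, so they receive a larger weight."""
    return sum(e * (horizon - years[s]) / horizon for s, e in emissions.items())

print(simple_total(stage_emissions))                          # → 1030.0
print(round(time_weighted_total(stage_emissions, stage_year), 1))  # → 1019.0
```

The two totals differ because stage timing carries information; once emissions are reported only as a grand total, that information is unrecoverable.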
Table 1 shows a matrix that defines the qualitative and quantitative measures of transparency relevant to meta-analysis of LCAs.
| | Input transparency | Output transparency |
| --- | --- | --- |
| Quantitative | Necessary for replicability of a study and recalculation of the study with new information | Reporting granularity; required for reinterpreting results |
| Qualitative | Used to assess a study's completeness and as a record of modeling parameters and assumptions | |