Wind Power as a Case Study: Improving Life Cycle Assessment Reporting to Better Enable Meta-analyses

Authors

  • Lindsay Price

  • Alissa Kendall


Lindsay Price, Department of Civil and Environmental Engineering, University of California, One Shields Avenue, Davis, CA 95616, USA. Email: lrprice@ucdavis.edu

Summary

Meta-analyses of life cycle assessments (LCAs) have become increasingly important in the context of renewable energy technologies and the decisions and policies that influence their adoption. However, a lack of transparency in reporting modeling assumptions, data, and results precludes normalizing across incommensurate system boundaries or key assumptions. This normalization step is critical for conducting valid meta-analyses.

Thus it is necessary to establish clear methods for assessing transparency and to develop conventions for LCA reporting that promote future comparisons. While concerns over transparency in LCA have long been discussed in the literature, the methods proposed to address these concerns have not focused on the transparency and reporting characteristics required for performing meta-analyses. In this study we identify guidelines for assessing reporting transparency that anticipate the needs of meta-analyses of LCA applied to renewable energy technologies.

These guidelines were developed after an attempt to perform a meta-analysis on wind turbine LCAs of 1 megawatt and larger, with the goal of determining how life cycle performance, as measured by global warming intensity, might trend with turbine size. The objective was to normalize system boundaries and environmental conditions, and reinterpret global warming potential with new impact assessment methods. Previous wind LCAs were reviewed and assessed for reporting transparency. Only a small subset of studies proved to be sufficiently transparent for the normalization of system boundaries and modeling assumptions required for meta-analyses.

Introduction

This study began as a meta-analysis of modern wind turbines, focusing on greenhouse gas (GHG) emissions intensity. The outcome of the meta-analysis would include new impact assessment methods for calculating global warming potential and assessment of the effect of turbine size on GHG emissions intensity. However, as we undertook what was hoped to be a comprehensive but relatively straightforward meta-analysis, we found that the lack of transparency in reporting across many of the reviewed studies impeded our analysis. Effective meta-analyses require that system boundaries and analysis parameters be made commensurate across studies prior to comparisons of systems, and that life cycle assessment (LCA) results be presented with enough granularity that they can be reinterpreted when new understanding or knowledge becomes available.

As meta-analyses of LCAs become increasingly important in the context of renewable energy technologies and the policies that influence their adoption, establishing clear methods for assessing reporting transparency and creating conventions for reporting LCAs that support meta-analysis are necessary.

In this study we present guidelines for assessing reporting transparency that anticipate what is needed for meta-analyses of LCAs applied to renewable energy technologies. While concerns over transparency in LCA have long been reported, and some methods have been proposed to address these concerns, they have not focused on the requirements for performing meta-analysis.

Summary of the Literature

The focus of this study was originally the use of meta-analysis to identify life cycle impact trends with respect to wind turbine size. Particular interest existed for turbines exceeding 1 megawatt (MW), as trends reported by previous meta-analyses have been inconclusive for turbines that are larger than 1 MW.1

All wind power LCAs published since 1998, along with discussions of those studies, were included in the initial review. The reviewed studies are listed in the Supporting Information available on the Journal's Web site. Of the 39 studies reviewed, 18 provided original LCA data for wind turbines in the 1 to 5 MW range.

LCAs for wind turbines typically have one of the following goals: comparing two sizes of wind turbine (e.g., Crawford 2009; Lee and Tzeng 2008); comparing wind energy to other renewable energy sources (e.g., Varun et al. 2009); or analyzing the sensitivity of life cycle performance to a parameter other than turbine size, such as transport distance (e.g., Tremeac and Meunier 2009). Two meta-analyses (Kubiszewski et al. 2010; Lenzen and Munksgaard 2002) are also part of the existing wind turbine LCA literature; they are discussed below.

Lenzen and Munksgaard (2002) performed a meta-analysis to explain the variability in LCA results for wind turbines by surveying 72 studies published over the two decades preceding the study, from 1981 to 2001. The turbine sizes ranged from 0.3 kilowatts (kW) to 6.6 MW.2 The study found energy intensity ranged from 0.012 to 1.016 kilowatt-hours of input per kilowatt-hour of electricity generated (kWh_in/kWh_el) before the wind load was normalized, demonstrating significant variability in energy intensity and carbon dioxide (CO2) intensity across the wind turbine LCAs.3

Lenzen and Munksgaard then normalized the data for lifetime electrical output by assuming a 20-year lifetime and a load factor (the percent of nameplate capacity achieved) of 25%. After this normalization the energy intensity still varied by an order of magnitude, from 0.014 to 0.15 kWh_in/kWh_el. Some of this variation is attributed to economies of scale, and though the results show substantial scatter, their regression analysis indicates that smaller wind turbines require more life cycle energy per unit of power.
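The normalization step above can be sketched as follows. This is a minimal illustration of the described procedure, not the authors' code; the 10 TJ/MW embodied-energy figure in the usage example is hypothetical, chosen only so the result falls within the ranges reported above.

```python
HOURS_PER_YEAR = 8760
KWH_PER_TJ = 1.0e12 / 3.6e6  # 1 TJ expressed in kilowatt-hours

def normalized_energy_intensity(input_energy_kwh, rated_kw,
                                lifetime_years=20, load_factor=0.25):
    """Energy intensity (kWh_in/kWh_el) after normalizing lifetime output
    to a 20-year life and a 25% load factor, as in Lenzen and Munksgaard."""
    lifetime_output_kwh = rated_kw * HOURS_PER_YEAR * lifetime_years * load_factor
    return input_energy_kwh / lifetime_output_kwh

# Hypothetical 1 MW turbine with 10 TJ of life cycle energy input:
intensity = normalized_energy_intensity(10 * KWH_PER_TJ, rated_kw=1000)
```

Under these assumptions the intensity works out to roughly 0.06 kWh_in/kWh_el, inside the 0.014 to 0.15 range Lenzen and Munksgaard report after normalization.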

To account for the scatter, Lenzen and Munksgaard identify procedural differences as well as sources of real differences across studies and turbine designs. The procedural differences among the studies are (1) energy intensity of the materials; (2) the scope and system boundaries of the studies, referred to as “breadth”; and (3) the analysis methodology, referred to as “depth.” The parameters that seem to have a real effect on the energy and CO2 intensity of the turbines, in addition to turbine size economies of scale, are (1) the country of manufacture, (2) the recycling and end of life treatment, (3) tower material composition, and (4) the fuel or electricity grid mix used in production.

Lenzen and Munksgaard's study emphasizes that useful comparisons in terms of energy intensity cannot currently be made between various power generation technologies because the uncertainty in these results exceeds the differences among average results. Their study calls for an improvement in the depth and breadth of studies (as defined above) in order to standardize the way that turbine energy intensity is calculated. They also advocate for the use of hybrid LCA (a combination of economic input-output LCA and process-based LCA) to reduce system boundary differences.

Kubiszewski and colleagues (2010) published a meta-analysis of the net energy return based on a survey of wind turbine literature, including 119 turbines from 50 studies published between 1977 and 2007. The turbine sizes ranged from 0.0003 to 7.2 MW. While Lenzen and Munksgaard reported results in terms of energy intensity, Kubiszewski and colleagues used the inverse metric, energy return on investment (EROI). The EROI is the ratio of energy delivered to energy input over the life cycle. The average EROI for all studies was found to be 25.2 with a standard deviation of 22.3.
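The relationship between the two metrics is a simple inversion, which a minimal sketch makes explicit (the numbers are illustrative, not drawn from either study):

```python
def eroi(energy_delivered_kwh, energy_input_kwh):
    """Energy return on investment: life cycle energy delivered
    divided by life cycle energy input."""
    return energy_delivered_kwh / energy_input_kwh

# EROI is the inverse of the energy intensity (kWh_in/kWh_el) reported by
# Lenzen and Munksgaard: an intensity of 0.05 corresponds to an EROI of 20.
ratio = eroi(1.0, 0.05)
```

This inversion is why results from the two meta-analyses can be compared at all, provided the underlying system boundaries are commensurate.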

Kubiszewski and colleagues added 47 studies to those first evaluated by Lenzen and Munksgaard in 2002. In addition, their study distinguished between conceptual and operational studies. This marks a departure from Lenzen and Munksgaard's normalization process and emphasizes studies that report actual wind data. It is worth noting that Lenzen and Munksgaard's normalization intentionally eliminated these actual data, as their study normalized the load factor for all turbines to 25%. One shortcoming of Lenzen and Munksgaard's approach to normalization is that it eliminates the effects of increasing wind speeds at elevated heights, which can be one of the main advantages of taller wind turbines.

When Kubiszewski and colleagues removed conceptual studies from the meta-analysis, leaving only operational studies, the average EROI was 19.18 with a standard deviation of 13.7, based on 60 studies. Kubiszewski and colleagues could only show trends for EROI with respect to power rating for turbines rated under 1 MW. For these turbines, EROI was found to increase with power rating; suggested explanations for this trend were economies of scale and larger rotor diameters.

Kubiszewski and colleagues sought to plot trends for the response of EROI to a turbine's power rating, rotor diameter, and wind speed for all turbine sizes. Unfortunately, it was only possible to plot these trends for turbines smaller than 1 MW because the study concluded that LCAs of turbines greater than 1 MW lacked sufficient data or contained unreliable data. Kubiszewski and colleagues eliminated 25% of the 119 turbine analyses they reviewed, including 14% of the operational studies and 22% of the studies they had added to supplement those surveyed by Lenzen and Munksgaard. No explanation was given as to why the data were considered unreliable. The conclusion that studies lacked sufficient data or used unreliable data can be traced, in part, to a lack of transparency in reporting modeling inputs and outputs.

Both Kubiszewski and colleagues and Lenzen and Munksgaard noted considerable scatter for the life cycle performance of wind turbines rated at 1 MW and larger. A meta-analysis would be one way to assess whether this scatter is caused by real differences in performance or is caused by differences in how LCA methods are implemented. However, because study reporting was not transparent enough, meta-analysis of turbines rated at 1 MW or greater could not be implemented, and a key question about wind power—namely whether economies of scale are attainable for energy and environmental performance for large turbines—remains unanswered.

Challenges for Conducting Meta-Analyses

The original goals of this study were (1) to evaluate how the GHG emissions intensity of wind turbines trended with turbine size, holding environmental conditions for wind turbine installations constant; and (2) to reinterpret study results for GHG emissions intensity with recent work on the role of emissions timing in determining the global warming effect of GHG emissions (Kendall and Chang 2009; Levasseur et al. 2010; O’Hare et al. 2009). This work on emissions timing shows that typical emissions intensity reporting (e.g., CO2-equivalents per megajoule) for renewable energy technologies may underestimate global warming potential by 40% to 50% for capital-intensive projects (which concentrate the energy and emissions investment in the manufacturing and construction stage) such as wind power installations. The parameters under investigation were to be turbine size, geographic location, and end of life treatment—all factors that vary more between studies than within studies.
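The direction of the emissions-timing effect can be illustrated with a crude ton-year style weighting, in which an emission is counted in proportion to the years it acts within the analytical horizon. This is a simplified stand-in for, not a reimplementation of, the cited impact assessment methods, and the emission quantities are hypothetical:

```python
HORIZON_YEARS = 100  # matches the 100-year horizon of conventional GWP

def timing_weighted_emissions(emissions_by_year):
    """Weight an emission in year t by the fraction of the analytical
    horizon over which it remains active. A crude ton-year proxy for
    the timing-sensitive methods cited above."""
    return sum(e * (HORIZON_YEARS - t) / HORIZON_YEARS
               for t, e in emissions_by_year.items())

# 1,000 t CO2 emitted up front (the capital-intensive pattern of wind
# power) outweighs the same total amortized over a 20-year operating life.
upfront = timing_weighted_emissions({0: 1000.0})
amortized = timing_weighted_emissions({t: 50.0 for t in range(20)})
```

Even this toy weighting shows front-loaded emissions carrying a larger warming burden than the same total spread over the project life, which is the effect the cited methods quantify rigorously.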

As demonstrated by Farrell and colleagues (2006), developing commensurate system boundaries among studies is critical for performing an effective meta-analysis. For studies that clearly do not consider particular processes or life cycle stages—for example, many wind turbine LCAs do not assess the decommissioning stage—data from studies that include this stage can be used to create similar system boundaries across studies. However, to adjust system boundaries in a meta-analysis, all studies must first clearly report what boundaries were used and describe key assumptions. The most common differences in system boundaries for wind turbines were exclusion of decommissioning, exclusion of construction processes, and incomplete analysis of a life cycle stage. However, in some cases identifying differing system boundaries is not possible due to a lack of detail and transparency in reporting. We label this problem a lack of qualitative input transparency.

Inputs may also lack quantitative transparency, meaning that the actual inputs to the system evaluated are not reported, or are reported with insufficient detail. In this case, a lack of transparency prevents replication or recalculation of a study when new or more relevant data are available, and thus may prevent a meaningful meta-analysis if data sources cannot be made commensurate. We refer to this problem as a lack of quantitative input transparency.

Clearly, transparency in modeling outputs is also important for performing meta-analysis. The granularity in reporting outputs, or quantitative output transparency, can either facilitate or obstruct reinterpretation of a study's results. For example, summing GHG emissions across all life cycle stages precludes a meta-analysis that considers emissions timing, which has only recently arisen as a notable problem in LCA impact assessment methods. Summing across life cycle stages also precludes changes to a particular life cycle stage, such as adjusting wind ratings to develop commensurate environmental conditions for comparing wind turbine technologies.
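The value of granularity can be shown with stage-resolved results; the stage names and emission values below are hypothetical, chosen only to illustrate the contrast with a single summed total:

```python
# Hypothetical stage-resolved GHG results for one turbine, in t CO2-eq.
stage_ghg = {
    "materials": 450.0,
    "manufacturing": 120.0,
    "transport": 30.0,
    "construction": 60.0,
    "operation": 15.0,
    "decommissioning": 25.0,
}

# Reporting only the total collapses the stages and forecloses reanalysis.
total = sum(stage_ghg.values())

# With stage-level reporting, a meta-analysis can harmonize system
# boundaries, e.g. excluding decommissioning to match studies that omit it.
common_boundary_total = sum(v for k, v in stage_ghg.items()
                            if k != "decommissioning")
```

A study that publishes only `total` cannot be adjusted in this way, which is precisely the obstacle described above.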

Table 1 shows a matrix that defines the qualitative and quantitative measures of transparency relevant to meta-analysis of LCAs.

Table 1.  Classification of reporting transparency

|              | Input transparency | Output transparency |
| Quantitative | Necessary for replicability of the study and recalculation of the study with new information | Reporting granularity; required for reinterpreting results |
| Qualitative  | Used for assessment of a study's completeness and as a record of modeling parameters and assumptions | |

Critical Review of Wind Power Life Cycle Assessments

Calls for greater reporting transparency in LCA are numerous (e.g., Frischknecht 2004; Molina-Murillo and Smith 2009) and are particularly noted in LCA comparison studies (e.g., Basset-Mens 2008). Here we address these topics with particular reference to the problems encountered in attempting a meta-analysis of wind energy installations.

Reporting guidelines were developed according to the diagram shown in figure 1. This framework was developed to document the level of reporting transparency that would be most useful for conducting the proposed meta-analysis.

Figure 1.

Reporting transparency guidelines. This framework documents the level of reporting transparency that would be most useful for conducting the proposed meta-analysis.

As shown in figure 1, completeness is scored in terms of the number of life cycle stages included and the level of detail with which they are modeled. Input transparency is scored in terms of the availability of enough data for the reader to recalculate or adjust the results. Output transparency is scored with special attention to the detailed outputs that aid in a meta-analysis, which we refer to as granularity.

The reporting guidelines were used to generate scores for a critical review of the original wind LCAs included in this literature review. As such, the critical review serves as a demonstration of the reporting guidelines and provides a case study for how they might aid meta-analysis.

Of the 39 studies included in this literature review (see table S1 in the supporting information on the Web), only 18 contained original LCA data. These studies were made part of the critical review.

Critical Review Process

To assess the completeness and transparency of earlier studies, the 18 LCAs listed in table 2 were evaluated using the proposed reporting guidelines. This was done by creating a scoring rubric for study compliance with each of the questions detailed in figure 1. To increase replicability and decrease subjectivity of scoring, all answers were binary, with a “yes” scored as a one, and a “no” scored as a zero.

Table 2.  Study scoring results

| Primary author | Year | Region | Energy payback time (months) | Delivered energy intensity (MJ/kWh) | Energy intensity by rating (TJ/MW) | GHG intensity (g CO2/kWh) | Completeness | Input transparency | Output granularity |
| Ardente | 2008 | Italy | 3–6.5 | 0.144–0.252 | 6–8.8 | 8.8–18.5 | 9 | 0 | 5 |
| Chataignere | 2003 | Europe | NA | NA | NA | 7.5–12.2 | 9 | 10 | 10 |
| Crawford | 2009 | Australia | 32–48 | 0.2 | 35–39 | 18.5–16.8 | 4 | 0 | 3 |
| Elsam | 2004 | Denmark | 3.1–9 | NA | 6.5–11.0 | 6.8–7.6 | 9 | 0 | 5 |
| Gürzenich | 1999 | India/Germany | 11.3–18.6 | 1.2 | 11.3 | 46.4 | 4 | 0 | 8 |
| Jungbluth | 2005 | Switzerland | NA | NA | NA | 11–13 | 4 | 0 | 0 |
| Lee | 2008 | Taiwan | 1.3 | 0.05 | 4.9–13 | 3.6 | 3 | 0 | 6 |
| Lenzen | 2004 | Brazil/Germany | 6–49 | 0.09–0.77 | 10.5–23.0 | 2–81 | 3 | 0 | 6 |
| Martinez | 2009 | Northern Spain | 6.96 | 0.0004 | 21.16 | 6.58 | 7 | 0 | 4 |
| McCulloch | 2000 | Canada | NA | NA | NA | 13 | 5 | 0 | 0 |
| Nalukowe | 2006 | Denmark | 11.8 | 0.18 | 9.35 | NA | 4 | 4 | 5 |
| Pehnt | 2006 | Germany | NA | 0.11–0.12 | NA | 10.2–8.9 | 2 | 0 | 2 |
| Rule | 2009 | New Zealand | NA | 0.072 | 3.9 | 3 | 6 | 2 | 4 |
| Schleisner | 2000 | Denmark | 4.68–3.12 | 1.6–2.4 | 8.7–9.4 | 9.7–16.5 | 6 | 2 | 4 |
| Tremeac | 2009 | Southern France | 20.4 | 0.3–1.2 | 10–19 | 16–46.4 | 9 | 2 | 6 |
| Vestas | 2005 | Denmark | 6.8 | 0.09–0.1 | 5–10 | 5.2–4.6 | 8 | 0 | 4 |
| Weinzettel | 2009 | NA | 13 | NA | NA | NA | 7 | 6 | 1 |
| White | 1998 | Midwest United States | 13.3 | 0.2 | 8.5–11.5 | 8.9–20.2 | 8 | 10 | 10 |

Note: Completeness refers to the number of life cycle stages included and the level of modeling detail. Input transparency refers to the availability of data for recalculation or adjustment of results. Output transparency pays special attention to the detailed outputs (or granularity) that aid in meta-analysis.

MJ/kWh = megajoules per kilowatt-hour; TJ/MW = terajoules per megawatt; g CO2/kWh = grams carbon dioxide per kilowatt-hour.
The questions composing each of the three reporting metrics (output granularity, completeness, and input transparency) were evenly weighted such that the highest possible metric score summed to ten. As a result, each metric uses a ten-point integer scale. Measures were taken to avoid double-counting negative responses; if a question received a score of zero, the dependent questions were no longer included as part of the scoring process.
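One plausible reading of this scoring procedure can be sketched as follows; the question identifiers and dependency structure are hypothetical, since the full rubric appears in figure 1 rather than in the text:

```python
def score_metric(answers, parent_of=None):
    """Score one reporting metric from binary yes/no answers.

    answers: dict mapping question id -> bool ("yes" = 1, "no" = 0).
    parent_of: dict mapping a dependent question to the question it
    depends on; if the parent scored zero, the dependent question is
    dropped so its "no" is not double counted.

    Questions are evenly weighted so a perfect score sums to ten.
    """
    parent_of = parent_of or {}
    weight = 10.0 / len(answers)
    active = {q: yes for q, yes in answers.items()
              if parent_of.get(q) is None or answers[parent_of[q]]}
    return sum(weight for yes in active.values() if yes)

# Five evenly weighted questions (2 points each); q3 depends on q2,
# so when q2 scores zero, q3 is excluded from scoring.
answers = {"q1": True, "q2": False, "q3": True, "q4": True, "q5": True}
score = score_metric(answers, parent_of={"q3": "q2"})
```

Binary answers and even weighting keep the rubric replicable across reviewers, at the cost of treating all questions within a metric as equally important.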

Results from Critical Review of Wind Turbine Life Cycle Assessments

The results of applying the reporting guideline scoring rubric to the studies are shown in table 2, along with summarized results of the original LCAs.

Only the studies with a high completeness score can be considered for inclusion in a meta-analysis without substantial changes. However, in most cases system boundaries must still be made commensurate, and this is only possible for studies with a high output granularity. Of the studies reviewed here, only Chataignere and Le Boulch (2003) and White and Kulcinski (1998) provide data with this level of reporting transparency.

Studies with midrange completeness rankings but high output granularity may still be useful in meta-analyses, however. In these cases, it may be possible to substitute appropriately scaled life cycle stage data from other studies in order to permit inclusion in a meta-analysis. Gürzenich and colleagues (1999), Lee and Tzeng (2008), Lenzen and Waschmann (2004), and Tremeac and Meunier (2009) all received reporting metric scores that suggest these types of substitutions merit further investigation.

Conclusion

The scoring methodology presented remains flexible and should be tailored to the individual study or the appropriate LCA application. The key concept proposed in the reporting framework is to transparently report quantitative inputs and provide high granularity in quantitative outputs whenever possible. Where this is not possible, as in the case of confidential processes or technologies, a detailed qualitative description of the assumptions made in modeling each life cycle stage is recommended in order to promote the future usefulness of the study.

When reviewing this study's scoring results (see table 2), it is important to note that the reporting metric scores exist apart from any evaluation of the validity of results or data quality of the evaluated studies. However, in order to promote the future usefulness of their study results, authors should aim to provide sufficient reporting for meta-analysis as described by the guidelines in this study. Alternatively, the combination of these metrics with data quality indicators (DQIs), such as those suggested by Weidema and Wesnæs (1996) or Rousseaux and colleagues (2001), could provide a workaround for determining trends in life cycle output data with respect to turbine size when reporting limitations (such as confidentiality or reporting length restrictions) do not allow for sufficient transparency.

Acknowledgements

The authors would like to thank the Hellman family's generous support of junior faculty in the University of California system through the Hellman Fellows Fund, which supported this research (Alissa Kendall, Hellman Fellow 2009–2010). The authors would also like to thank the reviewers of this manuscript for their valuable and insightful feedback.

Notes

  • 1

    One megawatt (MW) = 10⁶ watts (W, SI) = 1 megajoule/second (MJ/s) ≈ 56.91 × 10³ British thermal units (BTU)/minute.

  • 2

    One kilowatt (kW) ≈ 56.91 British thermal units (BTU)/minute ≈ 1.341 horsepower (HP).

  • 3

    One kilowatt-hour (kWh) = 3.6 × 10⁶ joules (J, SI) ≈ 3.412 × 10³ British thermal units (BTU).

About the Authors

Lindsay Price is a PhD student and Alissa Kendall an assistant professor in the Department of Civil and Environmental Engineering, University of California, Davis, CA, USA.
