Improved laser ablation U-Pb zircon geochronology through robust downhole fractionation correction



[1] Elemental fractionation effects during analysis are the most significant impediment to obtaining precise and accurate U-Pb ages by laser ablation ICPMS. Several methods have been proposed to minimize the degree of downhole fractionation, typically by rastering or limiting acquisition to relatively short intervals of time, but these compromise minimum target size or the temporal resolution of data. Alternatively, other methods have been developed which attempt to correct for the effects of downhole elemental fractionation. A common feature of all these techniques, however, is that they impose an expected model of elemental fractionation behavior; thus, any variance in actual fractionation response between laboratories, mineral types, or matrix types cannot be easily accommodated. Here we investigate an alternate approach that aims to reverse the problem by first observing the elemental fractionation response and then applying an appropriate (and often unique) model to the data. This approach has the versatility to treat data from any laboratory, regardless of the expression of downhole fractionation under any given set of analytical conditions. We demonstrate that the use of more complex models of elemental fractionation such as exponential curves and smoothed cubic splines can efficiently correct complex fractionation trends, allowing detection of spatial heterogeneities, while simultaneously maintaining data quality. We present a data reduction module for use with the Iolite software package that implements this methodology and which may provide the means for simpler interlaboratory comparisons and, perhaps most importantly, enable the rapid reduction of large quantities of data with maximum feedback to the user at each stage.

1. Introduction

[2] U-(Th)-Pb zircon geochronology by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) has seen a dramatic rise in popularity in the last decade [Kosler and Sylvester, 2003]. While the precision of the method cannot rival that of the benchmark isotope dilution thermal ionization mass spectrometry (ID-TIMS), and its spatial resolution cannot easily compete with less destructive secondary ion mass spectrometry (SIMS) techniques, for many applications these drawbacks are outweighed by the low cost and rapid sample throughput achieved by LA-ICPMS, and many research facilities now have in-house laser ablation capabilities.

[3] A natural consequence of this rapid proliferation, combined with the relative immaturity of the technique, is an immense diversity in both the instrumentation and methodologies employed by individual facilities, producing significant differences in observed downhole elemental fractionation behavior—arguably the single largest influence on the accuracy and precision of the method [e.g., Horn et al., 2000; Jackson et al., 2004; Kosler et al., 2001]. This has resulted in a large number of in-house data reduction methods, with no firm consensus on how the correction for downhole fractionation should best be performed. There is also little transparency in other aspects of data reduction, notably propagation of uncertainties, making interlaboratory comparisons of data sets difficult.

[4] All LA-ICPMS systems inherently produce some degree of elemental- and/or mass-related discrimination during sampling. These so-called “instrumental” biases are generally observed to drift over the course of days or weeks, but may also vary detectably within a single analytical session. In addition to these relatively long term effects, elemental biases occur during laser ablation which vary on a short time scale as the ablation pit deepens. In U-Th-Pb dating, this “downhole fractionation” produces readily resolvable changes in measured Pb/U and Pb/Th ratios, and hence apparent age of the ablation target. Numerous studies have investigated the underlying causes of such behavior [Eggins et al., 1998; Hergenröder, 2006a, 2006b; Kosler et al., 2005], but it remains unclear which of many possible processes dominate, and how they vary with time. What is clear is that the interaction between these is complex, and that variables such as laser wavelength, the aspect ratio of the laser pit, choice of carrier gas, and gas flow create a myriad of potential trends in elemental fractionation.

[5] Although the use of rastering (which involves continuously traversing the laser across the sample surface) can effectively eliminate variations in elemental fractionation [e.g., Horstwood et al., 2003], this approach does limit the minimum possible target surface area (i.e., the 2-D spatial resolution) to several times larger than the laser spot diameter. Similarly, although fractionation can be minimized by employing short ablation times (e.g., 10 s), this approach limits the likelihood of detecting Pb loss, age zoning, etc., while also compromising data quality. For this reason, a number of researchers have proposed methods that attempt to correct for downhole fractionation effects.

[6] The most popular approach is the modeling of elemental fractionation using a linear regression, in which the slope of the fractionation versus time trend is either calculated using an empirical relationship to spot size and hole depth [Horn et al., 2000] or determined by fitting a separate gradient to each sample analysis [Kosler et al., 2002a]. Each of these methods requires assumptions, but the single largest drawback to their applicability is the fact that elemental fractionation is not always linear. As such, any application of the approach to nonlinear fractionation trends will produce inaccurate results. Although a correction method using the average of the standard over an interval identical to that of the sample can be employed [Jackson et al., 2004], this approach sacrifices temporal resolution (i.e., the time-resolved aspect of laser ablation data), making it more difficult to detect either regions of compromised data or true age zonation.

[7] This paper investigates new methods for the correction of laser-induced elemental fractionation, with the aim of establishing protocols that retain as much 2-D (ablated area) and depth (the time-resolved aspect of corrected data) resolution as possible, while still retaining sufficient flexibility to process data from any laboratory, exhibiting any degree of complexity. We also address the propagation of uncertainties during the correction procedure.

2. Analytical Methods

[8] All analyses were conducted at the School of Earth Sciences, University of Melbourne, employing a prototype of the Varian 810 quadrupole ICPMS coupled to a HelEx laser ablation system that utilizes a 193 nm ArF excimer laser. The laser was operated with an output energy of ∼70 mJ per pulse, providing an estimated energy density on the sample of <5 J cm−2. A full list of instrumental parameters is included in Table 1. For full details of the HelEx ablation system we refer the reader to Woodhead et al. [2004] and Eggins et al. [2005], although we would note here the following key characteristics of the system:

Table 1. Key Instrument Parameters and Operating Conditions for Laser Ablation U-(Th)-Pb Analysis

HelEx ablation system
   Laser                        Lambda Physik Compex 110 ArF excimer, 193 nm
   Energy density               <5 J cm−2
   Repetition rate              5 Hz
   Spot diameter                19 to 71 μm
   Helium gas flow rate         0.25 l min−1
   Argon gas flow rate          1.06 l min−1
   Effective cell volume        ∼2 cm3
Varian 810 Prototype ICPMS
   RF power                     1400 W
   Sheath gas flow rate         0.26 l min−1
   Auxiliary gas flow rate      1.85 l min−1
   Cooling gas flow rate        17.5 l min−1
   Points per peak              1

   Isotope   Dwell Time   Attenuation
   206       20 ms        none
   207       30 ms        none
   208       10 ms        none
   232       10 ms        auto
   238       10 ms        auto

   Total duration of one mass cycle: 0.11 s (9 Hz)

[9] 1. Although the laser cell is large in volume, ablation occurs within a nested microcell with a volume of ∼2 cm3, resulting in high temporal resolution with minimal memory effects (see Woodhead et al. [2007, Figure 1] for an illustration).

[10] 2. The laser optics produce well-defined ablation pits with near-vertical walls [Eggins et al., 1998] and an even energy distribution across the spot.

[11] 3. Ablation occurs in a stratified combination of helium beneath argon within the microcell (compared to mixing of argon and helium downstream of the cell in many other systems). The helium minimizes redeposition of ejecta/condensates, while argon provides efficient sample transport to the ICPMS.

[12] All data reduction was conducted off-line using the freely distributed Iolite data reduction package, which runs within the Wavemetrics Igor Pro data analysis software; the reader is referred to Hellstrom et al. [2008] and the Iolite website for further details. Backgrounds were measured prior to each ablation with the laser shutter closed and employing identical settings and gas flows to those used during ablation. Data were acquired in batches approximately 1 h in duration, consisting of multiple groups of 5 to 15 sample unknowns bracketed by pairs or triplets of primary and secondary zircon standards. Background intensities were interpolated using a smoothed cubic spline, as were changes in instrumental bias (modeled using downhole fractionation corrected ratios of the zircon standard analyses). Elapsed time since the beginning of sample ablation was used as a proxy for hole depth, with laser-on events calculated by Iolite using an algorithm based on the rate of change in signal intensity, an approach that we have found to be highly reproducible. For further details of Iolite and the U-Th-Pb data reduction scheme we refer the reader to Hellstrom et al. [2008] and Appendix B, respectively.

[13] For each selected time period (e.g., 50 s of data from a spot analysis) the mean and standard error of the measured ratios were calculated, using no outlier rejection for baselines, and a 2 standard error outlier rejection for all other data. All uncertainties are quoted at the 2 sigma level.
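As an illustrative sketch (Iolite itself runs within Igor Pro, and its exact rejection scheme is not spelled out here), the following Python fragment computes an outlier-rejected mean and its 2 standard error; the threshold interpretation and iteration limit are assumptions.

    import numpy as np

    def mean_2se_with_rejection(values, n_sd=2.0, max_iter=5):
        # Mean and 2 standard errors of the mean after iterative outlier
        # rejection; points beyond n_sd standard deviations of the current
        # mean are excluded (one possible reading of the criterion above).
        v = np.asarray(values, dtype=float)
        for _ in range(max_iter):
            m, sd = v.mean(), v.std(ddof=1)
            keep = np.abs(v - m) <= n_sd * sd
            if keep.all():
                break
            v = v[keep]
        return v.mean(), 2.0 * v.std(ddof=1) / np.sqrt(v.size)

    # Synthetic ratio measurements from one selected time period
    ratios = np.random.default_rng(0).normal(0.067, 0.001, 400)
    mean, two_se = mean_2se_with_rejection(ratios)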

[14] All analyzed zircons were mounted in epoxy resin blocks and polished to a 1 μm finish. Each mount was cleaned ultrasonically in ultrapure water after polishing, then cleaned again prior to analysis using AR grade methanol. Prior to each individual analysis in any batch, regions of interest were preablated using several pulses of the laser (this equates to ∼0.2 μm in depth) to remove potential surface contamination, a method we have found to dramatically reduce common Pb contamination that would otherwise affect the first few seconds of analyses. 235U was calculated from 238U using a 238U/235U ratio of 137.88 [Jaffey et al., 1971].

[15] Analyses were routinely examined for spurious data caused by surface Pb contamination, cracks or fractures containing contaminants, areas of Pb loss, etc. This interrogation was achieved using a combination of baseline-subtracted intensities of individual isotopes, raw ratios and corrected ratios. Useful data could not be obtained for 204Pb due to large isobaric Hg interferences derived from the carrier gases.

3. Observations of Downhole Elemental Fractionation

[16] Downhole elemental fractionation is the change in the measured ratios between different elements, and occurs during laser ablation as the hole created by the laser deepens. In general, the signal intensities of refractory elements decrease more rapidly than volatile elements [Longerich et al., 1996], although changes with time can be complex, and other factors also influence fractionation [Eggins et al., 1998]. It has been observed that downhole fractionation does not correlate well with mass, and is better predicted by chemical characteristics such as whether an element is chalcophile or lithophile [Longerich et al., 1996].

[17] The expressions of the downhole fractionation effect vary with parameters such as laser wavelength [Jackson et al., 2004], spot size (Figure 1) [Horn et al., 2000], cell volume, gas flows, and choice of ablation gas (Figure 1) to name but a few. Nevertheless, it is reasonable to assume that the underlying processes contributing to the phenomenon are common to all systems, and that variables such as those noted above simply affect the degree to which each underlying process influences results, and the time during an analysis at which it is most influential. A large number of studies have examined these underlying causes [e.g., Eggins et al., 1998; Hergenröder, 2006a, 2006b; Kosler et al., 2002b, 2005; Kroslakova and Günther, 2007; Longerich et al., 1996], and we do not intend to reiterate them here. However, we would emphasize that these studies have universally demonstrated that downhole fractionation is the result of complex interactions between multiple processes. As such, any attempt to model or predict fractionation should be undertaken with caution, and the validity of any model employed should be carefully tested.

Figure 1.

Examples of observed variability in 206Pb/238U ratios with time (so-called “downhole fractionation”): (a) Approximately linear fractionation patterns which have been observed to vary systematically with spot diameter (reprinted from Horn et al. [2000], copyright 2000, with permission from Elsevier). (b) The effects of changing carrier gas on fractionation patterns. Note that even using He some nonlinear behavior is apparent (reprinted from Jackson et al. [2004], copyright 2004, with permission from Elsevier). (c) An average of 3 separate (baseline-subtracted) spot analyses of the 91500 zircon standard displaying nonlinear fractionation with time. Data were acquired with a 213 nm New Wave laser coupled to an X-Series quadrupole mass spectrometer using He as the carrier gas (B. Kamber, personal communication, 2008). (d) An average of 14 separate (baseline-subtracted) spot analyses of the 91500 zircon standard, again showing a nonlinear change in 206Pb/238U ratio with time. Data were acquired using a 193 nm New Wave laser coupled to an Agilent 7500 quadrupole mass spectrometer using He as the carrier gas (S. Meffre, personal communication, 2008).

[18] Downhole fractionation typically causes an increase in observed Pb/U ratios with depth [Jackson et al., 2004; Kosler and Sylvester, 2003; Tiepolo, 2003], with minor changes in Pb/Th ratios. Although methods used to correct for these effects in U-(Th)-Pb geochronology typically assume that the changes with time are linear, this is not always the case. For example, Figure 1 illustrates that although 206Pb/238U can vary approximately linearly with time (i.e., depth) in some cases (Figures 1a and 1b), curved fractionation patterns often occur to varying degrees (Figures 1c and 1d). Indeed, such nonlinear trends are characteristic of our own analytical system, prompting this study. Furthermore, the steepness and absolute position of the data arrays can also vary depending on the spot size (e.g., Figure 1a) and the carrier gas employed (Figure 1b). Plots illustrating variations in fractionation pattern with spot size (Figure 2) on our own instrument not only further demonstrate that curved trends occur, but also that the overall pattern can vary significantly, and is strongly influenced by the aspect ratio of the pit produced. All ratios show a rapid early increase, but the degree of subsequent curvature after ∼10 s results in an overall exponential pattern for 25 and 42 μm spot sizes, and actually produces a decrease in the values for the 71 μm spot after ∼40 s. As noted above, the washout/response time of our ablation system is very rapid and the fact that we observe these phenomena so clearly has led us to believe that complex fractionation patterns of this type are the norm but may be masked to some extent in systems with slow ablation cell response times (perhaps producing pseudolinear trends).

Figure 2.

Variations in downhole elemental fractionation with laser spot diameter. Although changes in (baseline subtracted) 206Pb/238U ratio decrease in magnitude with increasing spot size, the pattern becomes increasingly more complex: (a) Fractionation is relatively linear with a 19 μm diameter spot, although the rate of change does decrease detectably with depth. (b) For a 25 μm spot the pattern of fractionation is approximately exponential and is almost flat after 60 s. (c) The pattern of a 42 μm diameter spot begins exponentially but flattens after approximately 30 s. (d) Using a 71 μm diameter spot the curve rapidly flattens and even appears to decrease slightly after 40 s. Note that surficial common Pb contamination, which will cause an increase in 206Pb/238U ratios early in the analysis, does not appear to have affected results. Each ablation lasted 60 s, consisted of 300 pulses, and created a pit approximately 18–20 μm deep.

[19] To understand the fractionation patterns of elemental ratios, it is worth first examining the (baseline subtracted) signal intensities used in ratio calculation. Figure 3 illustrates a combination of data from 6 ablations of the 91500 zircon standard, each of 60 s duration. The original data (pale colors) were combined to produce an average for each time segment (black line). Figures 3a–3c show the baseline-subtracted beam intensities of U, Th, and Pb and indicate that, contrary to what might be expected, signal decay is not exponential (red line). Instead, more complex patterns are observed, and significant differences in response are apparent between elements. For each element, signals after ∼25 s appear to decay exponentially, but prior to this each maintains a steadier intensity than the exponential curve, suggesting the operation of multiple superimposed phenomena (e.g., gas flows, condensation conditions). This effect is most apparent for Pb and Th, which can deviate from the fitted exponential curve by over 10%, whereas the effect is more subtle for U (at least in our system). Correspondingly, the overall decrease in signal intensity is high for U (approximately 55%), in comparison to a ∼45% decrease for Th and Pb.

Figure 3.

Data for six separate (baseline subtracted) ablations of the 91500 zircon standard (pale shades) and the average (black lines). Model exponential curves are shown in red. The baseline-subtracted beam intensities for (a) U, (b) Th, and (c) Pb. (d) The complex changes in intensity observed for Pb and U generate 206Pb/238U ratios that are nonlinear, perhaps exponential although some sinusoidal behavior is apparent. (e) Despite the variations illustrated in Figures 3b and 3c, calculated 208Pb/232Th ratios appear to decrease linearly with time. (f) A repeat of the experiment, with five separate ablations using a spot diameter of 71 μm, illustrates the complex changes in fractionation patterns with this change in spot diameter. The 206Pb/238U ratios produced here decrease after ∼45 s, and the variations are not well modeled by an exponential curve.

[20] Based on these observations, it is not surprising that the resulting 206Pb/238U ratio (Figure 3d) varies substantially, and in this case appears to closely follow an exponential curve (in red). The same test conducted with 5 ablations using a 71 μm spot (Figure 3f), however, shows a more complex response that cannot be modeled by a simple exponential curve. Interestingly, although signal intensities of both Th and Pb (Figures 3b and 3c, respectively) are relatively complex at 42 μm, the resulting 208Pb/232Th ratio (Figure 3e) is relatively stable over 60 s, decreasing steadily with time. This suggests that fractionation effects are well synchronized between these elements in both timing and magnitude, despite the complexity of their individual intensity patterns.

4. Existing Methods for the Correction of Downhole Elemental Fractionation and Their Limitations

[21] Methods for the correction of downhole elemental fractionation can be subdivided according to whether standard sample bracketing is employed in modeling downhole fractionation patterns. Those that use standard sample bracketing assume that standards and unknowns will behave identically during ablation, and that the characteristics of downhole fractionation in the standard can be used to model its effects in unknowns, whereas other methods are independent of this assumption. Each of these categories is considered separately below. Note that this subdivision refers specifically to treatment of downhole fractionation, and not to the correction of long-term instrumental bias, which may also be corrected using standard sample bracketing.

4.1. Methods Without Standard Sample Bracketing

[22] Horn et al. [2000] detailed an empirical method for the correction of fractionation effects, based on the observation that, for a given laser spot diameter and energy density, elemental ratios have a linear relationship to hole depth (Figure 1a). Because these observations were highly reproducible using their system, they generated an empirical formula describing the relationship between pit diameter and the slope of the fractionation trend for a given energy density. Using this formula they individually corrected each time slice of the data for downhole fractionation (Figure 4a), then calculated the mean and standard deviation of the ratio from these corrected data points. The effects of instrumental bias were corrected separately using simultaneous nebulization of a Tl/U tracer solution. Although this approach has the potential to accurately correct downhole fractionation, it relies on the stability of fractionation patterns between analytical sessions, and thus requires highly reproducible operating conditions.
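A minimal sketch of a correction of this form is given below; the slope value is invented for illustration, whereas in the method of Horn et al. [2000] it would come from their empirical spot size calibration.

    import numpy as np

    def correct_linear_downhole(ratios, times, slope_per_s):
        # Divide each time slice by a linear downhole fractionation factor
        # normalized to the start of ablation (t = 0).
        return np.asarray(ratios, float) / (1.0 + slope_per_s * np.asarray(times, float))

    t = np.arange(0.0, 50.0, 0.5)            # s since laser-on
    raw = 0.065 * (1.0 + 0.004 * t)          # synthetic fractionating 206Pb/238U
    flat = correct_linear_downhole(raw, t, slope_per_s=0.004)  # trend removed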

Figure 4.

A schematic illustration of various existing methods of downhole fractionation correction. (a) An empirically derived linear equation is used to correct fractionation in each time slice of the analysis [Horn et al., 2000], producing time-resolved downhole corrected ratios. (b) A least squares fit is applied to the data of a single spot analysis to estimate the rate of downhole fractionation with depth, with the y intercept and uncertainty in the fit providing the average and uncertainty, respectively. This method produces a different gradient for each spot analysis and is thus independent of changes in fractionation caused by spot size or matrix effects between analyses [Kosler et al., 2002a]. (c) Individual time slices of each analysis are corrected using corresponding time slices of the standard(s) [Jackson et al., 2004]. Depending on the duration of each time slice, some or all of the temporal resolution may be sacrificed.

[23] To avoid these constraints, Kosler et al. [2002a] proposed a method that does not require all analyses to have the same fractionation pattern. Instead, the data for each laser pit are treated separately, and are corrected using a least squares linear fit (Figure 4b) of the elemental ratio over time. Using this approach, the derived y intercept and its uncertainty provide the corrected ratio and its precision, respectively. As in the approach of Horn et al. [2000], instrumental bias was corrected separately via simultaneous nebulization of a Tl/U tracer solution, although other studies have demonstrated that instrumental bias can be accounted for by normalization to standard zircons analyzed in the same analytical session [Chang et al., 2006; Gehrels et al., 2008], or by a combination of these two methods [Klotzli et al., 2009]. The method allows for differences in elemental fractionation behavior between analytical sessions, or between individual analyses due, for example, to matrix-related effects. However, this flexibility also means that it may not distinguish between some cases of real sample variability (e.g., gradual transition into a growth zone of different age) and the effects of fractionation. In addition, the lack of time-resolved corrected ratios can make it difficult to detect the effects of fractures, inclusions, etc. on results.
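The essence of this intercept approach can be sketched as follows, assuming simple unweighted least squares for the per-spot fit.

    import numpy as np

    def intercept_ratio(times, ratios):
        # Least squares line through one spot's time-resolved ratio; the
        # y intercept estimates the fractionation-free ratio, with its
        # standard error providing the precision.
        coeffs, cov = np.polyfit(np.asarray(times, float),
                                 np.asarray(ratios, float), deg=1, cov=True)
        return coeffs[1], 2.0 * np.sqrt(cov[1, 1])   # intercept and 2 SE

    # Synthetic single-spot data: 40 s at 8 cycles/s
    t = np.linspace(0.0, 40.0, 320)
    r = 0.066 * (1.0 + 0.003 * t) + np.random.default_rng(2).normal(0, 5e-4, t.size)
    ratio0, ratio0_2se = intercept_ratio(t, r)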

4.2. Methods Employing Standard Sample Bracketing

[24] The most common method of standard sample bracketing is to correct the sample analysis using a corresponding time interval (for example, relative to “laser on”) from a neighboring standard analysis, or the pooled data of multiple standards [e.g., Jackson et al., 2004; Van Achterbergh et al., 2001]. In this way, the effects of downhole fractionation are accounted for (Figure 4c), without any requirement to observe or model fractionation behavior. However, because this method takes the average ratio of a time interval in the standard(s) to correct unknowns, the method often reduces the temporal resolution of the data and thus also reduces feedback regarding the validity of the correction, lowering the probability of detecting heterogeneities such as growth zoning, fractures, or inclusions.

[25] A similar alternative is the “total counts” approach, in which all ions measured during an ablation are treated together [Johnston et al., 2009]. Although this approach, which was developed specifically for small-volume sampling, is capable of modeling nonlinear fractionation patterns it does assume that fractionation is identical in standards and unknowns, and cannot generate time-resolved corrected ratios. As such, it may be difficult to assess the validity of the correction used, or to detect downhole age variability, inclusions, etc.

5. Modeling Complex Downhole Elemental Fractionation

[26] Each of the methods outlined above clearly has strengths and weaknesses, but all generally rely upon linear variations in elemental ratios with time, which, as illustrated above (Figures 2 and 3), may not always be the case. So how can we best retain spatial and temporal resolution, while accommodating more complex fractionation patterns?

[27] Philosophically, we believe this can best be achieved by moving away from the tendency of existing methods to rely on fitting data to a presupposed model of fractionation, and instead to first observe the effects of downhole fractionation, then fit an appropriate model to the data, whatever form it may take. To do this, it is not necessary to understand the causes of elemental fractionation, which are clearly both multiple and complex, but only to model their combined effects on elemental ratios. In taking this approach, we implement a standard sample bracketing methodology, using observations of the group of standard analyses within an analytical session to model the fractionation pattern in unknowns. Although this method avoids the heavy reliance on stable operating conditions inherent in the empirical approach of Horn et al. [2000], it does require careful verification of the assumption that standards and unknowns behave identically. Evidence for the validity of this assumption is provided in Appendix A. We do not employ the simultaneous nebulization of a Tl/U tracer solution, and as such do not discriminate between instrumental bias generated within the mass spectrometer and the laser sampling system. We correct for instrumental bias by normalization to standard zircons analyzed in the same analytical session. Like Gehrels et al. [2008], we use all standard analyses of the session to determine variations in the degree of instrumental bias. We model this variability using a smoothed cubic spline, although a number of other options are available in Iolite (e.g., linear interpolation, average of the session).
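To make the bracketing step concrete, a rough stand-in using SciPy's smoothing spline (in place of the Igor Pro spline used by Iolite) is sketched below; the session times, measured ratios, and reference value are all synthetic.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Session-wide instrumental bias correction by standard sample
    # bracketing: interpolate a smoothed cubic spline through the
    # downhole-corrected ratios of the standard analyses, then normalize
    # each unknown to the spline value at its analysis time.
    std_t = np.array([60., 400., 800., 1600., 2400., 3200., 3550.])     # s into session
    std_r = np.array([0.178, 0.180, 0.179, 0.182, 0.184, 0.183, 0.185]) # synthetic
    reference = 0.1792    # assumed "true" ratio of the primary standard

    drift = UnivariateSpline(std_t, std_r, k=3)  # smoothing left at default; tune as needed

    def correct_drift(ratio, t):
        return ratio * reference / drift(t)

    corrected = correct_drift(0.0672, 1200.0)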

[28] As a starting point for this study, and as a result of our observations of curved fractionation trends under a range of different analytical conditions (Figures 1c, 1d, and 2d), we employ a model fitting an exponential curve (with the equation y = a + b·exp(−cx)) to the changes in elemental ratios with time. To evaluate this model we chose an analytical session of approximately 1 h duration, containing a number of 42 μm diameter spot analyses each of ∼50 s length. Nine analyses of the 91500 zircon standard spaced throughout the run were included for standard sample bracketing purposes. To test the efficacy of an exponential curve fit we combined all standard analyses for the session to produce an average 206Pb/238U ratio versus ablation time plot (Figure 5a). This average is more representative of the effects of fractionation than any single standard and has the additional benefits of reducing scatter in the pattern, and thus allowing the calculation of an uncertainty for each time slice of the average. Because we are only interested in the relative change in the ratio with ablation depth, longer-term instrumental drift does not affect the result, and can be corrected separately after downhole fractionation correction (provided, of course, that no drift in the pattern of downhole fractionation occurs). The exponential equation was fit to this average pattern using Igor Pro's built-in curve fitting function, which incorporates calculated uncertainties on each time slice of the average, and iteratively produces a fit that minimizes chi-square using the Levenberg-Marquardt algorithm. Figure 5b, which illustrates 206Pb/238U ratios of the 91500 zircon after correction using the exponential equation derived in Figure 5a, demonstrates the effectiveness of the model, with corrected ratios showing no observable variability with ablation time.
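The curve fit itself is straightforward to reproduce outside Igor Pro. The following sketch with synthetic data uses SciPy's Levenberg-Marquardt minimization and the same per-time-slice uncertainty weighting described above; the reference level chosen for the corrected ratios is arbitrary, since absolute bias is removed separately by standard bracketing.

    import numpy as np
    from scipy.optimize import curve_fit

    def exp_model(x, a, b, c):
        # Downhole fractionation model: y = a + b * exp(-c * x)
        return a + b * np.exp(-c * x)

    t = np.linspace(0.0, 50.0, 100)                    # s since laser-on
    avg = exp_model(t, 0.068, -0.010, 0.08)            # synthetic session average
    avg += np.random.default_rng(1).normal(0.0, 2e-4, t.size)
    se = np.full(t.size, 2e-4)                         # per-slice uncertainties

    popt, _ = curve_fit(exp_model, t, avg, p0=(0.07, -0.01, 0.1),
                        sigma=se, absolute_sigma=True) # Levenberg-Marquardt
    # Remove the downhole trend; referencing to t = 0 is arbitrary.
    corrected = avg * exp_model(0.0, *popt) / exp_model(t, *popt)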

Figure 5.

Data for nine 60 s ablations of the 91500 zircon standard using a 42 μm diameter spot. Individual baseline-subtracted data (pale colors) are combined to produce an average ratio for each time segment (black line), which were then used to fit model curves (red line). (a) Raw 206Pb/238U variations and an exponential curve fit. (b) Corrected 206Pb/238U ratios show no variability with time, indicating that the fitted exponential curve is a suitable model for downhole elemental fractionation. (c) The 206Pb/238U ratios of three Temora-2 analyses from the same analytical session corrected using the same exponential curve equation. The weighted average of the analyses (calculated using Isoplot [Ludwig, 2001]) of 0.0666 ± 0.0026 is well within error of the accepted value of ∼0.0668 [Black et al., 2004].

[29] In order to objectively test the applicability of the fractionation model, three Temora-2 zircon grains analyzed in the same session were used as secondary standards. Again, 206Pb/238U ratios corrected using the same model exhibit no discernable variation with time (Figure 5c), and the weighted average of the ratios (corrected for instrumental drift using the 91500 zircon analyses) of 0.0666 ± 0.0026 is indistinguishable from the “true” 206Pb/238U ratio of ∼0.0668 [Black et al., 2004]. A detailed comparison of the effectiveness of linear and exponential models, including tabulated data, is provided in Appendix A.

[30] The success of a model incorporating an exponential curve fit is encouraging, but there is no a priori reason for downhole fractionation to follow a “simple” pattern (such as linear, or exponential, changes with ablation time). Indeed, the data in Figure 2d and Figure 3f cannot be satisfactorily modeled using an exponential curve, and are excellent examples of cases that would benefit from a more versatile approach. To this end, we extended the above method by using a smoothed cubic spline fit, which should be capable of reproducing any observed downhole fractionation trend. The approach is essentially the same as that described above, but instead of an exponential curve a smoothed cubic spline was fit to the data using a built-in Igor Pro function called “Interpolate2.”
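A possible analogue of this spline-based model is sketched below; the synthetic trend, and the smoothing parameter standing in for Igor Pro's Interpolate2 settings, are illustrative only.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    t = np.linspace(0.0, 120.0, 240)                           # s since laser-on
    # Synthetic complex trend: rapid early rise, then gradual decline
    avg = 0.068 * (1.0 + 0.3 * np.tanh(t / 25.0) - 0.001 * t)

    spline = UnivariateSpline(t, avg, k=3, s=1e-6)  # smoothed cubic spline
    model = spline(t)
    corrected = avg * model[0] / model   # flat if the spline tracks the trend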

[31] Because drill rates are generally relatively fast, long ablation times are uncommon in routine U-Pb zircon dating by laser ablation; nevertheless, we employed 55 μm diameter spot analyses of 2 min duration in order to exhaustively test the potential of this approach. As with the exponential curve modeling described above, pairs or triplets of the 91500 zircon standard were spaced evenly throughout the experiment, and data from these multiple analyses were combined to produce an average pattern of downhole fractionation (Figure 6a) for the session. Clearly, changes in fractionation pattern with ablation time are complex, and simple models employing a linear or exponential fit would be incapable of effectively modeling these variations. To produce a more appropriate model, a smoothed cubic spline was calculated from the average pattern of the standards. This spline accurately models the fractionation response observed in the standard analyses and, when used to correct fractionation in the standards (Figure 6b), produces corrected ratios that do not vary with ablation time (in contrast, if a simple linear model is employed the average of the standards varies by up to 8% with ablation time).

Figure 6.

Ten 2 min spot analyses (colored lines) of the 91500 zircon standard using a 55 μm diameter spot, combined to produce an average (black line). (a) The 206Pb/238U ratios with a model of the fractionation pattern generated by fitting a smoothed cubic spline to this average wave (red line). (b) Corrected 206Pb/238U ratios illustrating the appropriateness of the chosen model. (c) Six analyses of the Temora-2 zircon analyzed in the same session in order to independently assess the approach. Raw 206Pb/238U ratios show very similar fractionation patterns to the 91500 zircon. (d) Corrected data using the smoothed cubic spline generated from 91500 analyses. (e) When combined, the six Temora-2 analyses yield a concordia age well within 1% of the accepted age of 416.8 Ma [Black et al., 2004]. (f) Repeat of the experiment using a 42 μm diameter spot. A total of eight Temora-2 analyses, bracketed by 12 ablations of the 91500 zircon standard, produces a concordia age of 413.5 ± 2.7 Ma, again well within 1% of the accepted age.

[32] The applicability of the model to other zircons was then tested using analyses of the Temora-2 zircon as a secondary standard. Despite the long (2 min) duration of analyses, the six spot analyses for the session have a fractionation pattern (Figure 6c) very similar to 91500. When corrected using the smoothed cubic spline calculated from the average 91500 pattern (Figure 6a), 206Pb/238U ratios of the Temora-2 analyses do not vary with hole depth (Figure 6d), indicating the validity of the fractionation correction employed. When combined to generate a concordia age, the six Temora-2 analyses yield a concordant result of 415.0 ± 2.7 Ma (Figure 6e), which is statistically indistinguishable from the accepted TIMS age for Temora-2 of 416.8 ± 0.3 Ma [Black et al., 2004].

[33] This experiment was repeated using a spot size of 42 μm, with a total of 8 Temora-2 analyses for the session, bracketed by 12 ablations of the 91500 “calibration” zircon (all analyses were again of 2 min duration). A similar downhole fractionation pattern was observed (illustrated in Figure 7b), although the overall change in elemental ratio was ∼30%, in comparison to a variation of ∼20% in the earlier experiment (Figure 6a). Once again, a smoothed cubic spline, calculated from the average downhole pattern of the 91500 zircon standard, was used to correct all analyses. After correction of downhole fractionation and instrumental drift, the 8 spot analyses of the Temora-2 standard yielded a concordia age of 412.9 ± 2.4 Ma, which is within 1% of the accepted age. We therefore conclude that even the more complex patterns of downhole fractionation remain relatively constant throughout an analytical session (and at a given spot size), and that this behavior can be modeled and then applied to unknowns in the same session providing a robust correction for downhole effects.

Figure 7.

(a) Schematic illustration of a zircon sandwich employing polished slices of the 91500 (blue) and Plesovice (green, ∼10 μm thick) zircon standards. Ages and U concentrations for the Plesovice and 91500 standards are from Slama et al. [2008] and Wiedenbeck et al. [1995], respectively. (b) Uncorrected (but baseline-subtracted) 206Pb/238U ratios plotted against time. A total of 12 spot analyses of the 91500 zircon (pale colored lines) was combined to produce an average pattern of downhole fractionation (black line). This average was used to model downhole fractionation for the session using a smoothed cubic spline (red). (c) Results of a 2 min spot ablation of the zircon sandwich, plotted against time (in seconds). The red trace shows 206Pb/238U ratios corrected for downhole fractionation and instrumental drift, and the blue, orange, and green traces show changes in baseline-subtracted intensities of 238U, 206Pb, and 208Pb (linear scaling). See section 6 for details.

6. Depth Profiling as an Example Application of the Method

[34] One immediate benefit resulting from accurate correction of downhole elemental fractionation is the potential to resolve downhole age variation in complex zircons. To investigate this possibility we simulated the effects of age zoning within a natural zircon grain by bonding together polished wafers of the Plesovice zircon standard (∼337 Ma [Slama et al., 2008]) and the 91500 zircon standard (∼1063 Ma [Wiedenbeck et al., 1995]). We then created a depth profile of this zoned sample by ablating through the ∼10 μm thick Plesovice layer into the underlying 91500 grain (Figure 7a). By employing a spot ablation of 2 min duration the laser “drilled” to a total depth of ∼40 μm, sufficient to sample both of the zircon standards. This experiment can be considered a worst case scenario because the Plesovice zircon has a U concentration more than 5 times greater than the 91500 zircon (465 ppm, compared to 81 ppm), so any contamination of the 91500 portion of the signal by Plesovice (due to memory effects or ablation of the pit walls) would be amplified by the greater U concentration of the latter.

[35] The depth profiling test was conducted within the same analytical session as the second batch of Temora-2 standards described above (see Figure 6f), using a 42 μm laser spot, and a duration of 2 min per ablation (equating to pit depths of ∼40 μm). The smoothed cubic spline fit to the average of 12 analyses of the 91500 zircon (Figure 7b) described above was used to correct all analyses for downhole fractionation. Figure 7c illustrates the 206Pb/238U ratios in the depth profile after correction for downhole fractionation and instrumental drift, together with relevant baseline-subtracted beam intensities. The first 30 s of the analysis samples only the Plesovice wafer, and yields a concordia age of 334.2 ± 8.7 Ma, statistically indistinguishable from the accepted age. The following ∼20 s of the analysis represent ablation through the wafer boundary/epoxy, and coincides with a noticeable increase in 208Pb due to unavoidable common Pb contamination in the boundary layer. By ∼70 s the elemental ratio reaches a plateau, and a concordia age of the following 40 s yields an apparent age of 1052.4 ± 9.1 Ma. We attribute the offset between this apparent age and the accepted 206Pb/238U age for 91500 of 1065 Ma to minor sampling of the Plesovice wafer at the pit walls (note that the rapid wash out of the Helex cell means that memory effects are unlikely to have affected the result). Based on the published U concentrations and 206Pb/238U ratios of each zircon, we calculate this offset to represent a ∼0.3% contamination of the 91500 result by Plesovice. Given that this is well within the normal reported uncertainties of U-Pb zircon dating of 1 to 4% [Jackson et al., 2004; Klotzli et al., 2009; Kosler and Sylvester, 2003], such contamination is unlikely to significantly perturb depth-profiling results for natural samples.
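This offset estimate can be checked with a simple two-component mixing calculation using the published values quoted above and the 238U decay constant of Jaffey et al. [1971]; the sketch below reproduces the ∼0.3% figure.

    import numpy as np

    LAMBDA_238 = 1.55125e-10                  # /yr [Jaffey et al., 1971]

    def pb206_u238(age_ma):
        # Radiogenic 206Pb/238U ratio for a given age
        return np.expm1(LAMBDA_238 * age_ma * 1e6)

    C_ples, C_915 = 465.0, 81.0               # U concentrations, ppm
    R_ples = pb206_u238(337.0)                # Plesovice
    R_915 = pb206_u238(1065.0)                # 91500, accepted
    R_meas = pb206_u238(1052.4)               # observed apparent age

    # Mass fraction f of Plesovice-derived ablatant required to lower the
    # measured ratio of the 91500 interval to the observed value
    f = C_915 * (R_meas - R_915) / (C_ples * (R_ples - R_meas)
                                    - C_915 * (R_915 - R_meas))
    print(f"{100 * f:.2f}% Plesovice contribution")   # ~0.3%, as quoted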

[36] The success of this depth profiling example clearly demonstrates the benefit of employing versatile downhole fractionation models, which produced accurate ages, despite long ablation times and an unusually complex fractionation pattern.

7. Quantifying the Uncertainty of Complex Fractionation Models

[37] Although the methods that we have described here for the modeling and correction of downhole elemental fractionation do not readily lend themselves to a strict arithmetic propagation of uncertainties, we suggest here an alternative and robust methodology employing analyses of the primary standard within the session to estimate the propagated uncertainty of individual analyses. This approach has the advantage of being independent of the downhole fractionation model employed, and inherently propagates most sources of analytical uncertainty for the session. However, because it assumes that the primary standard has identical behavior to unknowns it cannot be used to determine whether differences in zircon matrix, age or U content can affect the ages produced. As such, secondary zircon standards are still required to assess the overall accuracy and reproducibility of analyses.

[38] Our method for the estimation of analytical uncertainties is similar to the approach conventionally used to assess external reproducibility using secondary standards, with two significant differences:

[39] 1. In lieu of a secondary standard, we remove each primary standard ablation in turn from the “pool” of primary standard analyses and recalculate its corrected ratios independently of the primary standard pool. By removing this analysis from the standard data used to normalize results, it is corrected for downhole elemental fractionation and instrumental drift in a manner identical to the treatment of “unknowns” (Figures 8a–8f). After sequentially treating each standard analysis as an unknown in this way, a pool of “pseudosecondary standards” can be generated, and these can be used to assess the analytical uncertainty.

Figure 8.

Illustration of the error propagation procedure. (a–f) The basis of the method, using five fictitious standard analyses as an example. Figures 8a and 8d show calculated values for the five standards prior to treatment, and instrumental drift modeled by cubic splines with and without smoothing. In Figures 8b and 8e, an individual standard analysis is removed from the group of standards, allowing it to be treated as an “unknown” (the second standard analysis in this case), after recalculation of instrumental drift using the remaining standards. In this way, a pool of “pseudosecondary standards” is generated (Figures 8c and 8f), with the scatter in this pool reflecting uncertainties in both drift correction and downhole fractionation correction. In this case, the smoother cubic spline employed in case 2 produces less scatter and is thus a better model of instrumental drift. (g and h) Calculation of excess uncertainty for a session. Figure 8g shows a pool of 15 pseudosecondary standards, with associated internal errors. An MSWD of ∼3.2 for the pool indicates that additional sources of uncertainty exist. To estimate this excess uncertainty, an increasingly larger value (dashed error bar in Figure 8h) is combined (in quadrature) with the internal uncertainty of each analysis, until the population has an MSWD of 1. The combined uncertainties account for all scatter in the population, meaning that all sources of excess uncertainty (represented by the calculated value) have been accounted for. This value is then propagated with the internal uncertainties of all analyses for the session. The 3rd and 4th values illustrate how excess uncertainty has less impact on imprecise results and that total uncertainties vary between individual spots, reflecting both internal and excess uncertainties.

[40] 2. Instead of estimating a global uncertainty for the entire method, which is then assigned uniformly to all analyses, we generate an “excess uncertainty” for each analytical session that is intended to account for all unquantified sources of analytical uncertainty, then combine this with the internal precision derived for each spot analysis. To estimate the magnitude of this excess analytical uncertainty, we calculate the degree of scatter in the pool of “pseudosecondary standards” (Figures 8g and 8h). If the internal uncertainties of the pseudosecondary standards are insufficient to account for the scatter between analyses, the group will have an MSWD (mean square of weighted deviates) of greater than 1, indicating that an additional source of uncertainty exists in the population. This excess error (predominantly associated with downhole fractionation correction and drift correction) can be estimated by calculating the additional uncertainty for each analysis required to produce an MSWD of 1 for the pseudosecondary standards (Figures 8g and 8h). By combining in quadrature this excess uncertainty with the internal error of individual spot analyses, a total error for each analysis is generated. This approach then takes into account differences in internal error between samples, and thus best reflects the actual uncertainty of individual spot analyses.

[41] The uncertainties generated in this manner will reflect the limitations of downhole fractionation correction, as any variation in fractionation between spot ablations will be reflected in the scatter of corrected ratios. Likewise, the use of an inappropriate model for changes in the ratio with hole depth will contribute additional scatter to the “pseudosecondary standard” pool, and will result in a larger calculated excess uncertainty. In a similar manner, any error in the modeling of instrumental drift will be reflected by a larger scatter in corrected ratios of the “pseudosecondary standards,” again increasing the excess uncertainty required to produce an MSWD of 1.
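In code, the core of this procedure reduces to finding the quadrature term that brings the pseudosecondary pool to an MSWD of 1; the pool values below are invented for illustration.

    import numpy as np
    from scipy.optimize import brentq

    def mswd(values, errors):
        # Mean square of weighted deviates about the weighted mean
        w = 1.0 / errors**2
        mean = np.sum(w * values) / np.sum(w)
        return np.sum(w * (values - mean) ** 2) / (values.size - 1)

    def excess_uncertainty(values, errors):
        # Smallest per-analysis term that, added in quadrature to each
        # internal error, reduces the pool's MSWD to 1
        if mswd(values, errors) <= 1.0:
            return 0.0
        f = lambda x: mswd(values, np.sqrt(errors**2 + x**2)) - 1.0
        return brentq(f, 0.0, 100.0 * errors.max())

    r = np.array([0.0667, 0.0671, 0.0662, 0.0675, 0.0665, 0.0670])  # pseudosecondaries
    s = np.array([0.0004, 0.0005, 0.0004, 0.0005, 0.0004, 0.0005])  # internal errors
    x = excess_uncertainty(r, s)
    total = np.sqrt(s**2 + x**2)    # per-spot total uncertainties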

[42] Although this method has a significant processing burden, the capabilities of current computers are more than sufficient for the automatic calculation of uncertainties in this fashion, and errors for all corrected elemental and isotopic ratios in Iolite's U-(Th)-Pb dating module are treated in this way. The uncertainties of all corrected ratios reported here were calculated using this method, including the Temora-2 concordia ages of Figures 6e and 6f.

[43] Despite the capacity of this method to produce accurate estimates of analytical uncertainty, however, we note the following caveats:

[44] 1. Any differences in U concentration and/or age between the primary standard and sample unknowns may alter the relative impact of uncertainties in the subtraction of background noise (i.e., signal-to-noise ratio), and counting statistics.

[45] 2. Analyses of the reference standard should be evenly spaced throughout the run to best reflect the effects of instrumental drift correction on unknowns.

[46] 3. To reliably quantify the “excess uncertainty” for a session, a suitable number of analyses of the primary standard is required. In Iolite's U-Pb package, if too few standard analyses are available, the potential for underestimation of the excess uncertainty is addressed by gradually inflating the calculated value, from no increase for 15 or more standards to a factor of two for 6 standards. If fewer than 6 standards are used, the software will still allow the user to produce results, but a conservatively large excess uncertainty will be applied to each corrected ratio.

[47] 4. The user is of course always encouraged to employ a number of secondary standards during analytical routines, as this is the only way to assess the external reproducibility of results. The method described here is simply intended to estimate realistic uncertainties for individual spot analyses in a robust, reproducible, and objective way.

7.1. A Note on Internal Versus Systematic Uncertainties

[48] In considering uncertainties and their propagation it is important to distinguish between “random” and “systematic” sources of uncertainty. A systematic uncertainty is one which generates a bias in a data set; an obvious example for U-Th-Pb geochronology is the uncertainty in the known age of a reference standard. Such systematic uncertainties must be treated differently when generating a weighted average from a group of individual analyses (e.g., a group of unknown zircons from a single igneous rock sample). If the systematic component is propagated into each individual uncertainty prior to the generation of the weighted average, the result will have an unrealistically small estimated uncertainty. This is because the systematic uncertainty has been treated as though it were random, and will have been reduced in the weighted average process (by the square root of the number of analyses in the weighted average). Instead, such systematic errors must be kept separate from the weighted average calculation, then propagated into the uncertainty afterward. Differences in the U concentration or age of zircon standards and unknowns will also contribute a systematic uncertainty to analyses.

[49] The “excess uncertainty” generated using the pseudosecondary standard approach described above is unable to detect systematic uncertainties, which would bias the group of analyses without introducing any additional scatter to the population. Because the pseudosecondary standard approach only considers data scatter, and not accuracy, such biases will not affect the MSWD, and thus will not be incorporated into excess uncertainty calculations. As such, any excess uncertainty generated using the method must be random, and can therefore be propagated with other uncertainties (e.g., internal precision) prior to any weighted average calculation.
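The following sketch illustrates the distinction, forming a weighted mean from the concordia ages listed in Table A1a and adding an assumed systematic term (here a nominal standard-age uncertainty) only afterward.

    import numpy as np

    def weighted_mean(values, two_se):
        # Weighted mean and its 2 SE; only random errors belong here
        w = 1.0 / (np.asarray(two_se) / 2.0) ** 2
        mean = np.sum(w * values) / np.sum(w)
        return mean, 2.0 / np.sqrt(np.sum(w))

    ages = np.array([416.7, 411.8, 423.4, 415.6, 419.3,
                     416.4, 419.6, 418.8, 417.3])            # concordia ages, Ma
    rand = np.array([2.7, 2.8, 3.0, 3.1, 2.3,
                     3.1, 2.7, 2.2, 2.5])                    # random 2 SE, Ma
    sys_2se = 2.0                        # assumed systematic term, Ma

    mean, mean_2se = weighted_mean(ages, rand)
    total_2se = np.sqrt(mean_2se**2 + sys_2se**2)  # systematic added afterward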

7.2. Correlation in Uncertainties

[50] In addition to the magnitude of uncertainties, the accurate estimation of “error correlation” is also of importance. Although equations exist for the calculation of error correlations based on the uncertainties of bulk analyses, which are often used to great effect in single zircon TIMS dating [see, e.g., Schmitz and Schoene, 2007], the availability of large numbers of individual integrations (“scans”) in laser ablation methods offers the opportunity to employ calculations based upon the raw data populations themselves. To calculate error correlations from the data, all individual time slices for the chosen interval of the sample are combined (for example, if 40 s of data were chosen for a spot analysis, with a data acquisition rate of 8 cycles per second, this would represent a population of 320 individual data points). When the two ratios of interest (e.g., corrected 206Pb/238U versus corrected 207Pb/235U) are plotted on an X-Y diagram, the degree of correlation in the data is identical to the error correlation between the ratios. When calculated in this way, error correlations between individual analyses of a single sample can vary significantly (e.g., error correlations of spot analyses in Figure 6e vary from 0.13 to 0.31, and from 0.09 to 0.29 in Figure 6f), and error correlations themselves vary dramatically with changes in U concentration and age.
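A minimal version of this calculation is shown below; the two ratio series are synthetic, with a shared noise component included to induce correlation.

    import numpy as np

    # Population-based error correlation: the corrected 206Pb/238U and
    # 207Pb/235U time slices of one spot are treated as paired
    # observations, and their Pearson correlation coefficient is taken
    # as the error correlation (rho) for the concordia plot.
    rng = np.random.default_rng(1)
    n = 320                                  # e.g., 40 s at 8 cycles/s
    common = rng.normal(0.0, 1.0, n)         # shared (correlated) noise
    r206_238 = 0.0672 + 4e-4 * common + 1e-3 * rng.normal(size=n)
    r207_235 = 0.5100 + 3e-3 * common + 8e-3 * rng.normal(size=n)

    rho = np.corrcoef(r206_238, r207_235)[0, 1]
    print(f"error correlation rho = {rho:.2f}")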

8. Conclusions

[51] U-(Th)-Pb zircon geochronology by laser ablation ICPMS is both rapid and relatively inexpensive, and has already become an extremely popular and widespread method. At present, the single largest constraint on the accuracy and precision of the technique is the correction of downhole elemental fractionation. Patterns of fractionation with hole depth can vary dramatically both between laboratories, and with changes in operating conditions, and there is no a priori reason for them to be linear. In fact, nonlinear patterns of varying complexity have been observed in multiple laboratories.

[52] We suggest that as a general approach, users should attempt to develop an appropriate model of downhole fractionation based upon observations of their own data during each analytical session, instead of attempting to fit the data to a preconceived fractionation model. Employing this strategy, we have demonstrated that models of nonlinear fractionation, such as an exponential curve, or smoothed cubic splines, can be used to efficiently correct for the effects of complex downhole fractionation. These models are capable of producing high-quality ages, accurate to within 1% of accepted values, and can be used for demanding applications, such as depth profiling, without compromising data quality.

[53] Careful attention should be given to the propagation of analytical uncertainties, particularly in relation to downhole fractionation and instrumental drift correction. We provide a method that allows the estimation of analytical uncertainties, using only a single reference standard, in a manner that best reflects the actual uncertainties of individual spot analyses.

[54] The methodology employed here, including uncertainty propagation and error correlation, is incorporated into the U-(Th)-Pb dating module of the Iolite software package. Further information on Iolite, a freeware program designed for the reduction of time-resolved data, is available from the Iolite website.

Appendix A: A Comparison of Linear and Exponential Models of Downhole Fractionation

[55] In Appendix A, we attempt to address in greater detail the potential age effects of using different models of downhole elemental fractionation on zircon data generated by LA-ICPMS.

[56] In theory, if identical sampling conditions are used for matrix-matched standards and unknowns, the use of a standard bracketing approach should always produce accurate ages, inherently corrected for all sources of bias, including downhole fractionation. There are several reasons why this is not the case in practice, but the most significant is that the user is often unable to sample exactly the same period of data for each analysis. Thus, any variations in corrected elemental ratios with ablation time (as a proxy for hole depth) have an impact on the final calculated age. Common reasons for wishing to use only a portion of the entire analysis include (1) contamination of the grain surface by common Pb; (2) insufficient grain thickness, meaning that the laser “drills” through the back of the zircon grain and begins sampling epoxy/other minerals in a thin section; and (3) penetrating through one zone of a grain into a region of different age.

[57] Here we compare linear and exponential fractionation models on a group of 10 Temora-2 grains, analyzed as a secondary standard during a normal analytical session. The downhole fractionation was modeled using 13 analyses of the 91500 zircon, grouped in pairs or triplets throughout the session which also included unknowns in addition to the Temora secondary standard. The normal data reduction methods of Iolite's U-Th-Pb DRS (described in Appendix B) were used, but in order to assess variability between analyses the propagation of excess uncertainties was avoided. As such, all uncertainties quoted are likely to be approximately half of their propagated values.

[58] When fitted to the average of the 91500 analyses, there are clear differences between the linear and exponential models of fractionation (Figures A1 and A2). This is well illustrated by the residuals to the fit, which are a proxy for the effect of the fractionation model on corrected ratios. The linear model produces strong biases in the residuals that vary with ablation time (i.e., hole depth), undercorrecting the early and late portions of each ablation, and overcorrecting the period between ∼10 and 40 s (an overcorrection will result in higher corrected ratios and older apparent ages, and vice versa). The bias in the correction also results in a nonnormal distribution in the scatter of data points within each analysis, as illustrated by the histogram in Figure A1, which is visibly skewed. In contrast, the residuals of the exponential fit (Figure A2) do not vary with ablation time and exhibit no visible bias, resulting in a normal “bell curve” distribution of the corrected data (histogram plot in Figure A2). The standard error of the exponential model fit (2.9) is also significantly lower than that of the linear fit (4.7), again suggesting that the exponential curve is a better model in this case.

Figure A1.

Linear fractionation model (screen capture from Iolite's U-Th-Pb DRS). The model was generated using a least squares linear fit through the average (black) values of the 91500 zircon. The resulting fit (red line) is not an accurate model of changes in the ratio with ablation time (x axis) and generates significant bias in the resulting corrected ratios, illustrated here using the residuals of the fit to the average. The first and last 10 s of analyses are undercorrected by up to 15%, while the interval between 10 and 40 s after shutter opening is overcorrected by ∼5%. When the residuals to the fit are plotted as a histogram, it can also be seen that the model results in a skewed data distribution that may affect statistical methods that rely on normally distributed data (e.g., mean and standard deviation).

Figure A2.

Exponential fractionation model (screen capture from Iolite's U-Th-Pb DRS). The model was generated by fitting an exponential curve (equation in blue) through the average (black) values of the 91500 zircon. The resulting fit (red line) effectively models changes in the ratio with ablation time (x axis), producing corrected ratios with no detectable bias with ablation time and a normal “bell curve” distribution when plotted as a histogram.

[59] To test the effects of each of these models on the calculated ages of samples, the 10 Temora-2 analyses were reduced using each method. For each spot analysis, three separate sections of the data were processed: (1) the entire spot analysis, (2) the analysis minus the first 10 s (to simulate surface contamination), and (3) the analysis minus the last 10 s (to simulate drilling through the entire grain, or into a region of different age).

[60] The results of entire spot analyses corrected using each model (Tables A1a and A1b) differ slightly in average age (418.1 Ma for the exponential model versus 419.0 Ma for the linear model); although this small bias may be real (e.g., due to a skewed data distribution), it may also reflect slight differences in drift correction between the models. In contrast, the effect of excluding portions of each analysis is distinctly different for the two models, and cannot be related to drift correction. The average age of analyses reduced using the exponential model is not significantly affected by excluding portions of each analysis, suggesting that no bias exists in the model. However, ages generated from different portions of each analysis using the linear model exhibit significant differences: analyses with the first 10 s of data excluded are on average 1.8 Ma (0.43%) older than entire analyses, and analyses with the last 10 s excluded are on average 3.3 Ma (0.79%) older than entire analyses. These results are in agreement with observations based on Figure A1, and illustrate that, at least in this case, a linear model has the potential to introduce significant bias in calculated ages. Individual internal uncertainties in calculated ratios are also marginally (5 to 10%) larger for analyses reduced using the linear model, but this effect is less critical than the apparent impact on the accuracy of results.

Table A1a. Results of Spot Analyses Corrected Using the Exponential Model^a

| | Duration (s) | Final 207/235 ± 2 SE | Final 206/238 ± 2 SE | Rho | Concordia Age ± 2 SE (Ma) | Weighted Average Age ± 2 SE (Ma) | Average Age (Ma) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Entire ablation | 55.1 | 0.513591 ± 0.009398 | 0.066663 ± 0.000471 | 0.129 | 416.7 ± 2.7 | | |
| | 52.6 | 0.519286 ± 0.008620 | 0.065604 ± 0.000480 | 0.185 | 411.8 ± 2.8 | | |
| | 54.4 | 0.507642 ± 0.008727 | 0.068065 ± 0.000509 | 0.267 | 423.4 ± 3.0 | | |
| | 53.8 | 0.504325 ± 0.008282 | 0.066641 ± 0.000561 | 0.203 | 415.6 ± 3.1 | | |
| | 54.0 | 0.513888 ± 0.006671 | 0.067134 ± 0.000407 | 0.179 | 419.3 ± 2.3 | | |
| | 54.9 | 0.513290 ± 0.008886 | 0.066550 ± 0.000550 | 0.163 | 416.4 ± 3.1 | | |
| | 53.6 | 0.512468 ± 0.006811 | 0.067232 ± 0.000510 | 0.164 | 419.6 ± 2.7 | | |
| | 51.0 | 0.507560 ± 0.004692 | 0.067265 ± 0.000399 | 0.365 | 418.8 ± 2.2 | | |
| | 51.0 | 0.506765 ± 0.006547 | 0.066916 ± 0.000432 | 0.246 | 417.3 ± 2.5 | | |
| | 52.6 | 0.510582 ± 0.006967 | 0.067696 ± 0.000512 | 0.242 | 421.4 ± 2.8 | 418.1 ± 2.2 | 418.0 |
| Missing first 10 s | 45.0 | 0.511669 ± 0.010734 | 0.066542 ± 0.000511 | 0.146 | 415.8 ± 3.0 | | |
| | 45.6 | 0.514807 ± 0.009330 | 0.065609 ± 0.000513 | 0.179 | 411.4 ± 3.0 | | |
| | 44.6 | 0.511851 ± 0.007646 | 0.067068 ± 0.000412 | 0.110 | 418.7 ± 2.3 | | |
| | 44.5 | 0.514009 ± 0.010054 | 0.066705 ± 0.000579 | 0.162 | 417.1 ± 3.3 | | |
| | 44.9 | 0.509900 ± 0.007867 | 0.067160 ± 0.000512 | 0.066 | 418.9 ± 2.7 | | |
| | 44.9 | 0.507842 ± 0.009703 | 0.068311 ± 0.000550 | 0.263 | 424.8 ± 3.2 | | |
| | 44.6 | 0.503275 ± 0.009325 | 0.066785 ± 0.000589 | 0.186 | 416.2 ± 3.3 | | |
| | 45.0 | 0.507076 ± 0.007624 | 0.067724 ± 0.000538 | 0.216 | 421.0 ± 3.0 | | |
| | 45.2 | 0.506491 ± 0.006971 | 0.066719 ± 0.000448 | 0.206 | 416.3 ± 2.5 | | |
| | 45.0 | 0.506636 ± 0.004970 | 0.067047 ± 0.000402 | 0.324 | 417.7 ± 2.2 | 417.7 ± 2.4 | 417.8 |
| Missing last 10 s | 45.1 | 0.502767 ± 0.008693 | 0.066052 ± 0.000617 | 0.207 | 412.6 ± 3.4 | | |
| | 45.2 | 0.507414 ± 0.009209 | 0.067583 ± 0.000542 | 0.196 | 420.8 ± 3.1 | | |
| | 42.6 | 0.511756 ± 0.007248 | 0.067225 ± 0.000533 | 0.170 | 419.5 ± 2.9 | | |
| | 45.0 | 0.511299 ± 0.009609 | 0.066357 ± 0.000608 | 0.177 | 415.2 ± 3.4 | | |
| | 45.1 | 0.514664 ± 0.006741 | 0.067346 ± 0.000444 | 0.177 | 420.5 ± 2.5 | | |
| | 45.0 | 0.521583 ± 0.009074 | 0.066222 ± 0.000535 | 0.209 | 415.4 ± 3.1 | | |
| | 45.0 | 0.512666 ± 0.010084 | 0.066559 ± 0.000519 | 0.113 | 416.1 ± 3.0 | | |
| | 41.9 | 0.514156 ± 0.007521 | 0.067859 ± 0.000574 | 0.269 | 422.7 ± 3.2 | | |
| | 40.9 | 0.506302 ± 0.007125 | 0.066878 ± 0.000478 | 0.267 | 417.1 ± 2.7 | | |
| | 41.4 | 0.510783 ± 0.004860 | 0.067281 ± 0.000452 | 0.404 | 419.5 ± 2.5 | 418.2 ± 2.1 | 417.9 |

^a Concordia ages were calculated using the “Concordia” function of Isoplot. Associated uncertainties are extrapolated from the 206/238 uncertainty and are typically ∼10% larger than those calculated by Isoplot. Final 207/235 and final 206/238 ratios have been corrected for downhole fractionation using the relevant model, then corrected for instrumental drift using a smoothed cubic spline interpolated through the 91500 analyses. Weighted averages were calculated using Isoplot's “weighted average” routine, using the individual concordia ages and uncertainties displayed. Normal average values, calculated without uncertainties, are also provided for comparison. Rho is the correlation in the uncertainties of the 206/238 and 207/235 ratios, calculated for each analysis individually from scatter in the corrected ratios.
Table A1b. Results of Spot Analyses Corrected Using the Linear Model^a

| | Duration (s) | Final 207/235 ± 2 SE | Final 206/238 ± 2 SE | Rho | Concordia Age ± 2 SE (Ma) | Weighted Average Age ± 2 SE (Ma) | Average Age (Ma) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Entire ablation | 55.1 | 0.512038 ± 0.009414 | 0.066490 ± 0.000489 | 0.172 | 415.6 ± 2.8 | | |
| | 52.6 | 0.519726 ± 0.008737 | 0.065955 ± 0.000538 | 0.227 | 414.0 ± 3.1 | | |
| | 54.4 | 0.507400 ± 0.008748 | 0.068156 ± 0.000524 | 0.274 | 423.8 ± 3.0 | | |
| | 53.8 | 0.508554 ± 0.008438 | 0.066742 ± 0.000561 | 0.198 | 416.7 ± 3.1 | | |
| | 54.0 | 0.515752 ± 0.006680 | 0.067332 ± 0.000452 | 0.236 | 420.5 ± 2.5 | | |
| | 54.9 | 0.511395 ± 0.009001 | 0.066611 ± 0.000569 | 0.191 | 416.4 ± 3.2 | | |
| | 53.6 | 0.511368 ± 0.006974 | 0.067273 ± 0.000537 | 0.212 | 419.6 ± 2.9 | | |
| | 51.0 | 0.510863 ± 0.005051 | 0.067523 ± 0.000434 | 0.437 | 420.6 ± 2.5 | | |
| | 51.0 | 0.508843 ± 0.006743 | 0.067294 ± 0.000451 | 0.283 | 419.4 ± 2.6 | | |
| | 52.6 | 0.511534 ± 0.007204 | 0.067740 ± 0.000560 | 0.304 | 421.7 ± 3.1 | 419.0 ± 2.1 | 418.8 |
| Missing first 10 s | 45.0 | 0.513802 ± 0.010797 | 0.066812 ± 0.000531 | 0.174 | 417.4 ± 3.1 | | |
| | 45.6 | 0.516595 ± 0.009481 | 0.066239 ± 0.000575 | 0.236 | 415.0 ± 3.3 | | |
| | 44.6 | 0.518087 ± 0.007719 | 0.067780 ± 0.000466 | 0.189 | 423.0 ± 2.6 | | |
| | 44.5 | 0.514820 ± 0.010205 | 0.067044 ± 0.000594 | 0.177 | 418.9 ± 3.4 | | |
| | 44.9 | 0.515292 ± 0.008191 | 0.067705 ± 0.000550 | 0.147 | 422.2 ± 3.0 | | |
| | 44.9 | 0.512003 ± 0.009824 | 0.068763 ± 0.000551 | 0.252 | 427.6 ± 3.2 | | |
| | 44.6 | 0.510237 ± 0.009340 | 0.067171 ± 0.000569 | 0.178 | 419.0 ± 3.2 | | |
| | 45.0 | 0.511093 ± 0.007987 | 0.068218 ± 0.000590 | 0.280 | 423.9 ± 3.3 | | |
| | 45.2 | 0.508725 ± 0.007172 | 0.067227 ± 0.000480 | 0.253 | 419.1 ± 2.7 | | |
| | 45.0 | 0.510646 ± 0.005415 | 0.067617 ± 0.000443 | 0.415 | 421.0 ± 2.5 | 420.8 ± 2.5 | 420.7 |
| Missing last 10 s | 45.1 | 0.509008 ± 0.008852 | 0.066758 ± 0.000644 | 0.213 | 416.9 ± 3.5 | | |
| | 45.2 | 0.510015 ± 0.009321 | 0.068175 ± 0.000584 | 0.221 | 423.9 ± 3.3 | | |
| | 42.6 | 0.519463 ± 0.007400 | 0.068042 ± 0.000567 | 0.230 | 424.5 ± 3.1 | | |
| | 45.0 | 0.515763 ± 0.009771 | 0.066944 ± 0.000635 | 0.208 | 418.6 ± 3.6 | | |
| | 45.1 | 0.515861 ± 0.006800 | 0.068123 ± 0.000470 | 0.223 | 424.3 ± 2.6 | | |
| | 45.0 | 0.524054 ± 0.009255 | 0.067042 ± 0.000568 | 0.218 | 420.0 ± 3.2 | | |
| | 45.0 | 0.513554 ± 0.010226 | 0.066979 ± 0.000543 | 0.152 | 418.3 ± 3.1 | | |
| | 41.9 | 0.520149 ± 0.007609 | 0.068552 ± 0.000607 | 0.294 | 426.8 ± 3.3 | | |
| | 40.9 | 0.514107 ± 0.007352 | 0.067794 ± 0.000491 | 0.285 | 422.5 ± 2.8 | | |
| | 41.4 | 0.518296 ± 0.005187 | 0.068141 ± 0.000480 | 0.446 | 424.6 ± 2.7 | 422.3 ± 2.2 | 422.1 |

^a Concordia ages were calculated using the “Concordia” function of Isoplot. Associated uncertainties are extrapolated from the 206/238 uncertainty and are typically ∼10% larger than those calculated by Isoplot. Final 207/235 and final 206/238 ratios have been corrected for downhole fractionation using the relevant model, then corrected for instrumental drift using a smoothed cubic spline interpolated through the 91500 analyses. Weighted averages were calculated using Isoplot's “weighted average” routine, using the individual concordia ages and uncertainties displayed. Normal average values, calculated without uncertainties, are also provided for comparison. Rho is the correlation in the uncertainties of the 206/238 and 207/235 ratios, calculated for each analysis individually from scatter in the corrected ratios.

[61] It is important to stress that while the exponential model proved the most appropriate for this data set generated in our laboratory, other laser ablation systems and/or analytical sessions may exhibit different downhole behavior. Iolite provides the ability to choose the most appropriate downhole fractionation model for any given analytical session, and indeed to quickly compare the results of imposing different models on the same data set.

[62] In addition to choosing a downhole fractionation model that accurately fits the average of standard analyses, it is important that the fractionation pattern does not vary within a session. Figure A3 provides a view of uncorrected 206Pb/238U ratios for each analysis of the 91500 zircon throughout the period of this test (∼2 h), and demonstrates that the fractionation pattern did not change detectably with time. Figure A4 contains 206Pb/238U ratios corrected using the exponential curve fit, plotted in a manner similar to Figure A3. No analysis changes with ablation time (x axis), and the analyses do not vary through the session, indicating that there has been no drift in the downhole fractionation of the Temora zircon, and that the exponential model has appropriately corrected all analyses.

Figure A3.

Stacked view of the uncorrected 206Pb/238U ratios for all 91500 zircon analyses over a period of ∼2 h. Colors change from orange for the beginning of the session (top of the image) to blue for the end of the session. The first analysis is placed below the last to test whether any gradual change in pattern occurred during the session. All analyses are scaled identically, and the data are identical to those in Figure A2, including the average wave (black) and fitted exponential curve (red).

Figure A4.

Stacked view of 206Pb/238U ratios of the Temora zircon standard, corrected for downhole fractionation using the exponential model in Figure A2, over a period of ∼2 h. Colors change from pink for the beginning of the session (top of the image), to blue for the end of the session. All analyses are scaled identically, and a flat line (black) is included for reference.

Appendix B: A Description of Iolite's U-Th-Pb Data Reduction Scheme

B1. Overview

[63] Appendix B has been written to provide the user with some insight into how Iolite's “U_Pb_Geochronology” data reduction scheme (DRS) functions. For tutorials covering the use of both Iolite in general and the U_Pb_Geochronology DRS, we refer the reader to the Iolite website.

[64] Application of the DRS consists of the following steps (in order): (1) selection of baselines; (2) calculation of baseline-subtracted beam intensities, raw elemental and isotopic ratios, and indicative raw ages; (3) selection of reference standard analyses; (4) interactive modeling of downhole elemental fractionation for each elemental ratio (e.g., 206Pb/238U); (5) calculation of downhole fractionation corrected ratios; (6) estimation of instrumental drift using reference standard analyses; (7) calculation of final drift-corrected elemental and isotopic ratios; (8) selection of optimal regions of the sample analyses for export; and (9) export of final values (this step includes propagation of uncertainties and calculation of error correlations). (Data are calculated progressively, with the results of each step easily available for viewing by the user at all stages.)

[65] Following this overview, Appendix B is subdivided into two more sections. In section B2, the above processes are described in more detail under the divisions of “general Iolite features,” which covers those parts of data reduction common to any DRS in Iolite, and the additional data reduction operations specific to the U_Pb_Geochronology DRS. Section B3 provides a broad description of the programming code that should allow the interested reader to better understand the functions used by the DRS.

[66] To summarize briefly, Iolite was developed specifically to treat laser ablation ICP-MS data, although it is also proving to be extremely useful for manipulation of conventional solution ICP-MS and TIMS data. The software is unique in being both powerful (a 5 h U-Th-Pb session could easily involve >160,000 data points for each mass measured) and extremely flexible. Although the Iolite platform itself is encrypted, all data reduction modules are open source, and wherever possible programming of the underlying platform has been conducted with flexibility in mind.

[67] Iolite allows the user to view all data relative to time, and this has several important implications. First, it is possible to work with many separate files, and with data that contains any mixture of baselines, standards and unknowns, without requiring fixed timing or sample spacing. Second, all stages of data reduction can be viewed against time, meaning that the user is free to use raw beam intensities, uncorrected ratios, final corrected ratios, or any mixture of these, when selecting sample/standard intervals. Third, any interpolation, for example of instrumental drift with time, takes into account the relative timing of analyses, and enables the use of complex splines to model these changes.

[68] It is also worth noting that Iolite has been designed to work with an entirely generic data format that will accommodate data from any instrument (or even two instruments simultaneously).

B2. Details of the Data Reduction Scheme

[69] The U_Pb_Geochronology DRS incorporates a number of discrete stages of data processing. Some of these stages are common to any DRS in Iolite, but others are unique to the U_Pb_Geochronology DRS. A description of those stages common to any Iolite DRS is provided first, followed by an in-depth treatment of the features unique to the U-Th-Pb data reduction, namely, the modeling and correction of downhole elemental fractionation, the propagation of uncertainties, and the calculation of error correlations.

B2.1. General Iolite Features

[70] For a more thorough explanation of Iolite's features we refer the reader to the Iolite website and to the Iolite manual that can be found there. The information provided below is current at the time of publication, but may become outdated as Iolite evolves.

B2.1.1. Baseline Subtraction

[71] As in any mass spectrometric analysis, it is important to ascertain the level of background noise in any signal. This baseline level can then be subtracted from the total signal to calculate a baseline-subtracted intensity that, in this case, represents only the material sampled by the laser. It is common practice in laser ablation studies to analyze a “gas blank,” in which all gas flows and instrumental parameters are identical to those encountered during sampling, but the laser beam is either turned off or physically blocked by a “shutter.”

[72] In Iolite, raw signal intensities can be viewed individually or together, and scaled in such a way that the user can easily assess the background intensities of many beams simultaneously. The time scale can be adjusted to view large periods (e.g., to view drift in background levels throughout an entire session), down to very small details (e.g., to examine an individual period of baseline acquisition), or anywhere in between. Using this information, the user then selects periods of time containing suitable baseline data, which will then be used by Iolite in calculating baseline subtracted beam intensities. If at any stage the user wishes to add, remove, or modify baselines, this can be done quickly and easily, with any changes reflected in the recalculated data.

[73] In order to subtract baseline intensities from sample and standard analyses, values need to be interpolated between the periods of baseline data. In other data reduction methods, such interpolations often involve a simple linear interpolation of the baseline data immediately adjacent to each sample/standard analysis, or the averaging of a large group of baseline analyses. However, because Iolite works with the time of acquisition of each data point, it is capable of more powerful interpolative methods, such as smoothed cubic splines. These splines can be adjusted to fit more or less strictly through each baseline analysis, and can give varying degrees of weighting to individual baseline analyses, based on the calculated uncertainties of each time period. This means that a few seconds of baseline data will be given less weight than several minutes of high-quality baseline. The resulting interpolated spline can be viewed for each mass analyzed, and any changes to the period of an individual baseline, or to spline parameters (e.g., the degree of smoothing) are instantly displayed. In addition to smoothing splines, a number of different forms of interpolation are available to the user, including more conventional methods.
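
To make the approach concrete, the following Python sketch (illustrative only; Iolite itself is written in Igor Pro) fits an uncertainty-weighted smoothed spline through a set of hypothetical baseline measurements. All names and values are invented for the example.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Hypothetical baseline periods: midpoint time (s), mean background
    # intensity (cps), and the standard error of each period.
    t_baseline = np.array([0.0, 600.0, 1250.0, 1900.0, 2500.0, 3100.0])
    cps_baseline = np.array([210.0, 214.0, 222.0, 219.0, 225.0, 231.0])
    se_baseline = np.array([3.0, 3.0, 15.0, 3.0, 3.0, 3.0])  # one short, noisy period

    # Weight each baseline by 1/SE so that a few seconds of noisy baseline
    # pulls the spline less strongly than several minutes of quiet baseline;
    # s controls the overall degree of smoothing.
    spline = UnivariateSpline(t_baseline, cps_baseline,
                              w=1.0 / se_baseline, s=len(t_baseline))

    # Evaluate the spline at every data point of the session; the
    # baseline-subtracted intensity is then the raw beam minus this value.
    t_all = np.linspace(0.0, 3100.0, 3101)
    baseline_at_t = spline(t_all)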

[74] An individual baseline spline, spanning the duration of all data being reduced, is calculated for every mass measured. Baseline-subtracted beam intensities are calculated by subtracting the baseline spline from the raw beam intensity at each data point.

[75] Iolite has a number of options available to use in the calculation of statistics for baseline data, including various forms of outlier rejection and a choice between using “mean” or “median” based approaches. By default, baseline statistics are calculated as the mean of the data, with no outlier rejection. Although outlier rejection is often useful when processing laser ablation data, it can cause significant problems when applied to low-level baselines. This is because, for typical dwell times, a single count will translate into tens of counts per second (for example, a 30 ms dwell time will extrapolate a single count to 33 counts per second). For low background levels (e.g., several counts per second), this effect results in beam intensities consisting predominantly of 0 counts per second, punctuated rarely by counts in multiples of the minimum detection level (33, 67 or 100 counts per second in this case). Statistical calculations using outlier rejection often reject the rare higher values, producing an average lower than the true background level, and an extremely low calculated uncertainty. In contrast, a simple average of all points will (with increasing amounts of data) approach the true value, and should provide a more realistic estimate of uncertainty.
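
The effect is easy to reproduce numerically. The short Python sketch below (illustrative only; the dwell time and count rate match the example above) simulates a low-level baseline and shows how 2 standard deviation outlier rejection discards the rare nonzero readings and drives the calculated baseline toward zero.

    import numpy as np

    # With a 30 ms dwell time, a single count becomes ~33 cps, so a true
    # ~3 cps background is recorded mostly as 0 cps with occasional spikes.
    rng = np.random.default_rng(0)
    dwell_s = 0.030
    true_cps = 3.0
    counts = rng.poisson(true_cps * dwell_s, size=10000)  # counts per reading
    cps = counts / dwell_s                                # 0, 33.3, 66.7, ...

    plain_mean = cps.mean()   # approaches the true ~3 cps with more data

    # 2 SD rejection discards the rare nonzero readings entirely, giving a
    # near-zero "baseline" with a misleadingly small spread.
    m, s = cps.mean(), cps.std()
    kept = cps[np.abs(cps - m) < 2 * s]
    rejected_mean = kept.mean()
    print(plain_mean, rejected_mean)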

B2.1.2. Modeling of Long-Term Drift

[76] In Iolite, long-term drift in isotopic or elemental ratios is typically corrected by normalization to reference standards of known composition (i.e., sample standard bracketing). In the U-Th-Pb DRS, a reference zircon of known composition is ablated under the same conditions (e.g., spot size, laser repetition rate, preablation) as unknowns, and normalization is performed on all ratios after the correction of downhole fractionation.

[77] There is no requirement on the number of reference standard analyses, or their spacing, but in the U-Th-Pb DRS “penalty” factors are propagated into the uncertainty of all analyses if fewer than six standard analyses are available for constructing the downhole calibration curves. As with baseline subtraction, Iolite has the option of using a variety of splines to interpolate normalization factors from reference standard analyses through unknowns. These will be most effective if reference standards are interspersed regularly throughout the analytical session.

[78] By default, a mean with 2 standard deviation outlier rejection is used when calculating statistics for all analyses of reference standards and unknowns.

B2.1.3. Export of Data

[79] After data processing is complete, the user can export a table of results from Iolite. The results table is in tab-separated format, and can be directly imported into Microsoft Excel. By default, the U-Th-Pb DRS exports data in a format suitable for input into Isoplot using either “Normal” or “Inverse” U-Pb isochrons, although the content of the export table is fully customizable. In addition to a simple table of statistics, it is also possible to export a time series of calculated data, either at the original time spacing of the data, or after down-sampling.

B2.2. Modeling and Correction of Downhole Elemental Fractionation

[80] An integral feature of the U-Th-Pb DRS is its treatment of downhole elemental fractionation. This feature has been specifically developed for this data reduction scheme, and employs separate windows that allow interactive modeling of the pattern of downhole fractionation for each elemental ratio.

[81] The modeling and correction of downhole elemental fractionation occurs after baseline subtraction of beam intensities, but before the correction of instrumental drift. Because the data are not yet drift-corrected, some variability is often present between reference standard analyses. However, this variability is seen as a parallel offset of the data, and has no effect on the pattern of downhole fractionation in each analysis, or on the modeling and correction of this effect. It is worth noting here that in treating downhole fractionation the number of seconds since the laser began firing for a particular spot analysis (referred to here as “ablation time”) is used as a proxy for hole depth, which cannot be measured directly. The beginning of ablation is detected using the rate of change in an “index” beam intensity (the 238U beam by default); this approach is highly reproducible and does not require any extra information from the mass spectrometer or laser software.
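
As a rough illustration of this detection scheme (a Python sketch, not Iolite's Igor Pro code; the onset criterion and function name are invented for the example), an ablation-time record can be built from the rate of change of the index beam as follows.

    import numpy as np

    def beam_seconds(time_s, index_cps, onset_factor=10.0):
        """Seconds since the laser last started firing; NaN before the first shot.

        An onset is flagged wherever the point-to-point rise in the index
        beam exceeds onset_factor times the median absolute change. (A
        production version would merge adjacent flagged points.)
        """
        time_s = np.asarray(time_s, dtype=float)
        index_cps = np.asarray(index_cps, dtype=float)
        d = np.diff(index_cps, prepend=index_cps[0])
        threshold = onset_factor * (np.median(np.abs(d)) + 1e-9)
        onsets = np.flatnonzero(d > threshold)
        out = np.full(time_s.shape, np.nan)
        for start in onsets:
            # Each onset re-zeros the clock; later onsets overwrite the tail.
            out[start:] = time_s[start:] - time_s[start]
        return out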

[82] To model downhole fractionation within an analytical session, the data from individual analyses of the reference standard are combined to generate an average pattern of changes in the elemental ratio with ablation time. This averaging generates a more representative pattern, and reduces the effects of signal noise. There is no requirement for the selected segments of reference standard analyses to be the same length, or to be continuous. This means that the user is free to avoid sections of individual analyses that are clearly inaccurate (as may result from, for example, surface Pb contamination, cracks in the grain, or the laser drilling all the way through a thin area of the grain). For each time slice of the average pattern an uncertainty is also calculated; this uncertainty is then incorporated when calculating each model of downhole fractionation (with the exception of the running median).

[83] In order to remain as flexible as possible, a number of different model types are provided; these include (1) linear, (2) exponential (illustrated in Figure B1), (3) double exponential, (4) combined linear and exponential, (5) smoothed cubic spline, and (6) running median. The DRS has been designed so that the user can freely switch between these different models and quickly see the effect of each on downhole corrected data.

Figure B1.

Example of the interactive window for modeling of downhole fractionation, in this case using an exponential equation (shown in blue in the top right of the window). The controls on the right of the window are described individually in section B2.2. The large graph illustrates all individual analyses of the reference standard (colored from blue to red) plotted against ablation time on the x axis. The black line is the calculated average of these analyses, and the red curve is the exponential equation that best models changes in the average with ablation time. Beneath this, the residuals of the fit (red) are displayed. These are calculated by subtracting the exponential curve from the average value for each data point; in this case the residuals are evenly distributed and exhibit no obvious bias, indicating that the curve is an appropriate estimation of fractionation. The “quality of fit” plot in the bottom right corner of the window provides additional feedback to the user on the quality of their model of downhole fractionation, again based on the (black) average values.

[84] Of the above models, the first four employ a simple mathematical equation to calculate drift in the relevant elemental ratio (y axis) relative to ablation time (x axis). In each case, the equation employed is provided at the top of the curve fit window (see Figure A1 for an example). The last two models employ Igor Pro's “smooth” and “interpolate2” functions, using a degree of smoothing controlled by the user.
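
As an illustration of such an equation-based model, the Python sketch below fits a three-parameter exponential, ratio(t) = a + b*exp(-c*t), to a synthetic averaged standard pattern, weighted by the per-slice uncertainties. The exact equation Iolite applies is the one displayed in its fit window, so the functional form and all values here are assumptions for the example.

    import numpy as np
    from scipy.optimize import curve_fit

    def expo(t, a, b, c):
        # One plausible downhole model: ratio(t) = a + b * exp(-c * t)
        return a + b * np.exp(-c * t)

    # Synthetic averaged pattern for the reference standard (invented values).
    rng = np.random.default_rng(1)
    beam_s = np.linspace(0.0, 55.0, 56)
    avg_ratio = 0.25 + 0.05 * np.exp(-beam_s / 20.0) + rng.normal(0.0, 0.002, 56)
    avg_se = np.full(56, 0.002)  # per-slice uncertainty of the average

    popt, pcov = curve_fit(expo, beam_s, avg_ratio, p0=(0.25, 0.05, 0.05),
                           sigma=avg_se, absolute_sigma=True)
    residuals = avg_ratio - expo(beam_s, *popt)  # should show no trend or bias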

[85] The running median calculates, for a given data point, the median of all points within a window “n” seconds wide centered on that data point (where “n” is the smoothing parameter specified by the user).
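
A direct transcription of this description into Python might look as follows (a sketch; it assumes time-ordered data and makes no attempt at Igor Pro's efficiency).

    import numpy as np

    def running_median(time_s, values, n):
        # Median of all points within a window n seconds wide, centered
        # on each data point in turn.
        time_s = np.asarray(time_s, dtype=float)
        values = np.asarray(values, dtype=float)
        out = np.empty_like(values)
        for i, t in enumerate(time_s):
            in_window = np.abs(time_s - t) <= n / 2.0
            out[i] = np.median(values[in_window])
        return out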

[86] The smoothed cubic spline uses the method of Reinsch [1967] to fit a smoothed cubic spline through all data points, taking into account the individual uncertainties of each. The smoothing factor specified by the user determines how tightly the curve is fitted through each data point: no smoothing results in a cubic spline fitted exactly through all data points, and as the factor is increased the spline becomes smoother, eventually approaching a straight line.

[87] When the curve fit window first appears, the default settings (altered in the “Edit settings” window) are used to calculate a model of downhole fractionation for the ratio, based on the average of all selected analyses of the reference standard. The user is then free to alter the model using the controls on the right-hand side of the curve fit window (Figure B1).

[88] These controls operate in two separate ways, either by masking the beginning and/or end of the average, or by manually altering the parameters of the fit (note that the latter option is unavailable when using “Running median” or “Smoothed cubic spline”).

[89] Masking of the beginning or end of the average pattern (the gray regions in Figure B1 indicate masked areas) is particularly useful, as there will often be small sections at the start and end of the graph that are averages of only one or two waves, and are thus highly susceptible to the signal noise of those analyses. By masking these portions so that they are not included in the model calculation, a significantly better fit to the data is often achieved.

[90] By clicking the “Manual” button, the user is able to deactivate automatic calculation and manually edit all of the model parameters (note that this is not possible for the running median or smoothed cubic spline methods). In this way, the user has complete control over the form of the fit, while still having the benefit of the measures of success of the model (the residuals plot and the “Quality of fit” section of the window).

[91] The “Quality of fit” window provides additional information to the user on the efficacy of the chosen model of fractionation. The standard error provides an indication of the scatter of the average values after correction using the model, and should decrease as the quality of the model increases. The “Bias of fit” provides an indication of whether the model biases the data toward a higher or lower ratio. If an appropriate model is chosen, this value should be near zero. Finally, all points of the average data are plotted as a histogram. Given that normal statistical methods are employed, it is important that the corrected data have a normal distribution, and the purpose of the histogram is to assess whether this assumption is valid. If data are badly skewed, bimodal, or otherwise differ from a normal “bell curve” distribution, it would suggest that the fractionation model is inappropriate. In addition, it would indicate that the user should consider using a more flexible method of statistics for their data (in some cases it may be sufficient to use Iolite's median-based statistical methods). Note that an ideal bell curve can be toggled on or off behind the histogram to assist in comparisons.
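
The three measures can be expressed compactly. The Python fragment below is a sketch of the idea rather than Iolite's implementation; the residuals are stand-ins, and the standard error is taken as the standard deviation of the residuals, one common definition.

    import numpy as np

    rng = np.random.default_rng(2)
    residuals = rng.normal(0.0, 0.002, 56)      # stand-in residuals of a fit

    std_error = residuals.std(ddof=1)           # decreases as the model improves
    bias = residuals.mean()                     # near zero for an unbiased model
    hist, edges = np.histogram(residuals, bins=15)  # skew or bimodality flags a poor model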

B2.3. Propagation of Uncertainties

[92] In addition to the internal precision of individual analyses, a number of other sources of error exist, and the DRS attempts to account for these during export. Although a number of studies have attempted to quantify individual sources of uncertainty and propagate these appropriately, the use of complex downhole fractionation models in Iolite's U-Th-Pb DRS makes this approach difficult to implement. In addition, this approach requires that all sources of uncertainty are explicitly identified, and it is clear from many laser ablation studies that this is not always the case.

[93] Iolite's U-Th-Pb DRS employs an approach that requires no a priori knowledge of the source of uncertainties, instead using analyses of the reference standard as “pseudosecondary standards” by removing them individually from the data set, and reprocessing the data. Although this approach is extremely computationally intensive, it can be used in combination with complex downhole fractionation models, and has the added benefit of inherently including unidentified sources of error. However, because the method estimates uncertainties based on the reference standard, it is important to understand that the relative contribution of different sources (such as baseline noise) to unknowns and the reference standard may differ with factors such as U concentration or age. It is therefore useful, as always, to use a reference standard of a similar composition and age to unknowns, and to use secondary standards to assess accuracy and precision. The procedure employed in propagating uncertainties is described in detail in the manuscript, and is not repeated here. For more detail on the underlying programming code, please refer to section B3.

B2.4. Calculation of Error Correlations

[94] The calculation of error correlations in Iolite's U-Th-Pb DRS is simple, but is also arguably the most accurate method, as it employs all available information (in contrast, for example, to some arithmetic approximations). To calculate the correlation in the uncertainty of two ratios (e.g., 206Pb/238U versus 207Pb/235U), a built-in Igor Pro function called “StatsCorrelation” is employed. The function takes all data points within the relevant time period (e.g., an analysis of a sample unknown) and tests whether any correlation exists in the variation of the ratios. If the data points are visualized as an X-Y diagram of the two ratios of interest, the function is testing whether the distribution of the data is random (this would appear as a “shotgun” plot, with the data points scattered evenly in all directions) or whether the data cloud forms an ellipse (indicating correlation in the scatter of the two ratios). The ellipse will become more elongate as the degree of correlation between the ratios increases, and will slope diagonally upward if the correlation is positive (as is normally the case) or diagonally downward if the correlation is negative.
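
The same calculation is straightforward to sketch in Python with synthetic data: two ratio series share a common noise component (as they would if both were limited by counting statistics on a shared Pb beam, for instance), and the correlation coefficient of their scatter gives rho. The magnitudes below are invented.

    import numpy as np

    rng = np.random.default_rng(3)
    shared = rng.normal(0.0, 1.0, 500)  # noise common to both ratios

    r206_238 = 0.0672 * (1.0 + 0.010 * shared + 0.010 * rng.normal(0.0, 1.0, 500))
    r207_235 = 0.5100 * (1.0 + 0.010 * shared + 0.030 * rng.normal(0.0, 1.0, 500))

    # Correlation of the point-to-point scatter within one integration:
    rho = np.corrcoef(r206_238, r207_235)[0, 1]  # positive, as is normally the case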

B3. Summary of the Programming Code

[95] The following description is intended to assist the avid reader in understanding the U-Th-Pb DRS. The entire programming code is easily accessible from within Iolite: simply select “Edit the active DRS” from the “Iolite” menu after selecting the U-Pb method in the main control window. Igor Pro colors the programming text to enhance readability, and right-clicking on any function will allow the user to view associated help files (for built-in functions) or to skip to a user function. Note, however, that the underlying programming code of the Iolite platform is encrypted; as such, some of the functions used in the DRS are not available for viewing by the user.

[96] The description below refers to the current copy of the “U_Pb_Geochronology” DRS programming code. Please bear in mind that the code is likely to evolve with time, so these notes should be used only as a guide. To make use of the page number references, simply copy and paste the programming code from Iolite into a blank Microsoft Word document.

B3.1. Page 1: Top

[97] The first half of this page contains features that are common to all Iolite DRSs, including definition and default settings of parameters to be editable within the “Edit settings” window, and a version number for the DRS.

B3.2. Page 1: “Function RunActiveDRS()”

[98] This is the beginning of the DRS. References are made to the global variables and strings defined in the header of the file. The curve fitting names provided in the user interface are shortened for convenience.

B3.2.1. Page 1: “//Do we have a baseline_1 spline…”

[99] The DRS checks if a baseline spline exists for the index channel (this can be set in the “Edit settings” window). If no baseline exists the execution of the DRS will halt, with a message to the user stating that no baseline spline was found.

[100] If a baseline is present, the DRS will reference global strings containing lists of inputs (this list is populated during import of data), intermediates (empty) and outputs (empty).

[101] The Index time wave is created. All intermediate and output waves will be interpolated onto this time wave, so that all data points can be compared sensibly (inputs may have different time spacing of data points, so cannot necessarily be compared directly). The number of points in this index wave is stored for use in the creation of all other waves from this point on in the DRS.

B3.2.2. Page 1: “//THIS DRS IS A SPECIAL CASE”

[102] In order to quickly reprocess data after curve fitting and during export, the DRS is divided into sections using an “if” statement: this allows early portions of the calculation (everything up to and including curve fitting) to be skipped if not required (i.e., if none of the parameters affecting them have changed). The global variable “OptionalPartialCrunch” is used as a flag to turn this feature on or off.

[103] If a full data calculation is selected (this is the default case), baseline subtraction and interpolation of input waves onto the index time wave is achieved using the “$InterpOntoIndexTimeAndBLSub(IndexChannel)” function. This is first performed separately on the index wave (selected in the “Edit settings” window), then on each of the available input waves using a loop that steps through each item in the list of inputs. In each case the generated intermediate wave is also added to the list of intermediate channels so that it can be viewed by the user.

[104] This is followed by the generation of a mask wave based on a threshold in CPS for the index channel. The mask is given a value of 1 where the index channel is above this threshold and NaN (not a number, the equivalent of an empty cell in a spreadsheet) where it is below. By multiplying waves by this mask, any time intervals where the index channel is below the set threshold are masked (this is particularly useful when viewing ratios, which tend to become very noisy at low signal intensities).
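
A minimal Python sketch of this masking logic (threshold and intensities invented):

    import numpy as np

    index_cps = np.array([50.0, 80.0, 120000.0, 115000.0, 90.0, 60.0])
    threshold_cps = 1000.0

    # 1 above the threshold, NaN below; multiplying a ratio wave by this
    # mask blanks the noisy low-intensity intervals.
    mask = np.where(index_cps > threshold_cps, 1.0, np.nan)
    # raw_206_238_masked = raw_206_238 * mask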

B3.2.3. Page 2: “Wave Pb206_CPS = $ioliteDFpath(“CurrentDRS”,“Pb206_CPS”)”

[105] Before proceeding with any U-Th-Pb related calculations, a check is performed to determine whether all required masses are present. This is achieved by referencing the required waves, then checking that each reference is valid. Because some mass spectrometers produce an isotope format (e.g., Pb206) and others produce a simple mass format (e.g., m206), an “if” statement is used to check whether channels are present in either format. In addition, a check is conducted on whether the optional 204Pb channel is present; if it is, a flag is set that is used in the rest of the DRS to calculate 204Pb-related ratios.

B3.2.4. Page 2: “Wave Raw_206_238= $MakeioliteWave(“CurrentDRS….”

[106] Intermediate waves that will contain raw elemental and isotopic ratios are constructed using the Iolite function “MakeioliteWave” and using the number of points in the index time wave.

B3.2.5. Page 2: “Raw_206_238 = Pb206_CPS/U238_CPS * MaskLowCPSBeam”

[107] The raw ratios are calculated and multiplied by the mask wave described above. Note that the 235U signal is calculated from the 238U beam using a natural 238U/235U ratio of 137.88. Simple age estimates are also generated based on each elemental ratio. Each of the intermediate waves is then added to the list of intermediates for viewing by the user. Finally, raw 204Pb-related ratios are generated if 204Pb is present.
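
For illustration, this step might be sketched in Python as follows (function name and argument layout are hypothetical); the decay constants are the conventional 238U and 235U values, and the ages come from the standard decay equations rearranged for time.

    import numpy as np

    LAMBDA_238 = 1.55125e-10  # 238U decay constant (1/yr)
    LAMBDA_235 = 9.8485e-10   # 235U decay constant (1/yr)

    def raw_ratios_and_ages(pb206_cps, pb207_cps, u238_cps, mask):
        # 235U is not measured; it is derived from 238U via 238U/235U = 137.88.
        u235_cps = u238_cps / 137.88
        r206_238 = pb206_cps / u238_cps * mask
        r207_235 = pb207_cps / u235_cps * mask
        # Decay equations rearranged for age, reported in Ma.
        age206_238 = np.log(1.0 + r206_238) / LAMBDA_238 / 1e6
        age207_235 = np.log(1.0 + r207_235) / LAMBDA_235 / 1e6
        return r206_238, r207_235, age206_238, age207_235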

B3.2.6. Page 2: “wave BeamSeconds= $DRS_MakeBeamSecondsWave…”

[108] A built in function is used to determine each opening of the laser shutter, based on the rate of change of the index channel intensity. The function generates a wave that lists, for each point, the number of seconds since the last shutter open event. This wave is used in downhole related calculations.

B3.2.7. Page 2: “DRSAbortIfNotWave(ioliteDFpath…”

[109] A check is performed to determine whether any time periods of the reference standard have been selected; if not, the DRS does not proceed further.

B3.2.8. Page 3: “DownHoleCurveFit(“Raw_208_232”…”

[110] This uses a separate DRS function called “DownHoleCurveFit” to commence the process of interactive downhole fractionation modeling by the user. The function is described in detail below, and works with a single raw ratio, seen here in green.

B3.2.9. Page 3: “ListOfIntermediateChannels+= “DC206_238;”

[111] Because the DRS can skip segments of the data calculation, any waves generated beyond this point need to be added to the lists of intermediates and/or outputs prior to the following “else” statement. If relevant, 204Pb-related ratios are also added.

B3.2.10. Page 3: “//THIS IS A BIG ELSE…”

[112] The “else” statement references waves so that they can be used later in the function. The remainder of this function after “endif” is executed whether a full or partial data crunch is used.

B3.2.11. Page 3: “OptionalPartialCrunch = 0”

[113] The optional partial crunch is set back to the default (a full calculation).

B3.2.12. Page 3: “Wave DC206_238= $MakeioliteWave…”

[114] Each of the waves that will contain downhole corrected values is created.

B3.2.14. Page 3: “strswitch(ShortCurveFitType)”

[116] This string switch references the waves generated in the “DownHoleCurveFit” function for each ratio in turn (their names are dependent on the type of fractionation model used). In each case, the downhole correction is also carried out, using an equation identical to that displayed in the fit window, or in the case of the running median and smoothed cubic spline, using the spline itself. All calculations are relative to “beam seconds,” which is a measure of the number of seconds since the laser shutter last opened, and is a proxy for hole depth.
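
The exact correction applied depends on the model equation chosen, so the Python sketch below should be read as one plausible convention rather than the DRS's own: each raw ratio is divided by the fitted curve evaluated at that point's beam seconds value, normalized to the curve's value at zero depth.

    import numpy as np

    def expo(t, a, b, c):
        return a + b * np.exp(-c * t)

    def downhole_correct(raw_ratio, beam_s, coeffs):
        # Divide out the modeled depth dependence, normalized so that the
        # correction is unity at the surface (beam seconds = 0).
        model = expo(np.asarray(beam_s, dtype=float), *coeffs)
        return np.asarray(raw_ratio, dtype=float) * expo(0.0, *coeffs) / model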

B3.2.15. Page 4: “DCAge207_235 = Ln((DC207…”

[117] The calculated downhole fractionation corrected elemental ratios are used to generate downhole corrected age estimates.

B3.2.16. Page 4: “Wave DC207_206= $MakeioliteWave…”

[118] Waves for each of the Pb/Pb isotope ratios are generated. No downhole correction is conducted, so the waves are made equal to the raw ratios. If necessary, 204Pb related ratios are also calculated.

B3.2.17. Page 4: “Wave Final206_238= $MakeioliteWave…”

[119] Waves to hold drift-corrected ratios are created. After the creation of these waves the DRS function “DriftCorrectRatios()” (described below) is used to calculate drift corrected ratios based on integrations of the reference standard.

B3.3. Page 5: “Function DriftCorrectRatio(…”

[120] This function generates ratios corrected for instrumental drift. It is called by the DRS function, but is also called during export by the error propagation function. Note that the function is capable of working with a subset of the data, a feature that is used during error propagation.

B3.3.1. Page 5: “//the next 5 lines reference…”

[121] The global settings at the top of the file are referenced here so that they can be used in the function.

B3.3.2. Page 5: “Wave DC206_238= $ioliteDFpath(…”

[122] Waves generated previously by the DRS function are referenced so that they can be used in the function. If 204Pb waves are present these are also referenced.

B3.3.3. Page 5: “string ListOfSplinesForRecalc…”

[123] Before proceeding, it is necessary to make sure that all splines required by this function exist and are up to date. To do this a string containing a list of the required splines is created, and the “RecalculateIntegrations” function is passed this string to conduct the update.

B3.3.4. Page 5: “wave DC206_238_Spline = $InterpSplineOntoIndexTime…”

[124] The splines, which may have anything from 2 to 200,000 points depending on the spline type used, are interpolated onto the index time wave. After doing this, each data point of the spline will directly correspond to the same data points in all other intermediate or output waves. The Iolite function “InterpSplineOntoIndexTime” is used to do this.

B3.3.5. Page 5: “variable StdValue_206_238 = GetValueFromStandard…”

[125] The “true” ratio of the chosen reference standard is extracted from the reference file, using the Iolite function “GetValueFromStandard,” for each of the ratios requiring drift correction.

B3.3.6. Page 5: “Final206_238[OptionalMinPoint, OptionalMaxPoint] = DC206_238…”

[126] Final corrected ratios are generated by normalizing the downhole corrected ratio to the reference standard using the reference standard spline for each ratio. This is followed by the calculation of indicative corrected ages for each elemental ratio. Finally, if 204Pb is present, drift corrected 204-related isotope ratios are generated in the same way.
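
In outline (a Python sketch with hypothetical names; Iolite performs this on Igor Pro waves), the normalization is:

    def drift_correct(dc_ratio, spline_at_t, std_true_ratio):
        # dc_ratio: downhole-corrected ratio of an unknown at each point;
        # spline_at_t: the standard's spline interpolated to the same times;
        # std_true_ratio: the accepted value of the reference standard.
        return dc_ratio * std_true_ratio / spline_at_t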

B3.4. Page 6: “Function DownHoleCurveFit(”

[127] This function begins interactive modeling of downhole fractionation. It has the capability of staggering each window, using an optional window number. It is passed a specific raw ratio, in this case an elemental ratio such as 206Pb/238U.

B3.4.1. Page 6: “//the next 5 lines reference all of the…”

[128] As with other functions, these lines reference the global variables and strings at the header of the file, so that they can be used in the function.

B3.4.2. Page 6: “string ShortCurveFitType…”

[129] As in the main DRS function, the long descriptive names of fractionation models for the user interface are shortened.

B3.4.3. Page 6: “String WindowName…”

[130] A check is performed to see if the window already exists. If it does, it is deleted.

B3.4.4. Page 6: “Wave/z Index_Time…”

[131] The index time wave and index channel wave are referenced, and a check is conducted to see that they both exist. If either is missing an error is reported and DRS execution ends. Similarly, the reference standard matrix and beam seconds waves are referenced. A variable containing the number of points in the index time wave is created; this is used in the creation of any new waves.

B3.4.5. Page 6: “variable thisinteg=0…”

[132] Variables and strings required for a “do-while” loop are defined. The loop cycles through each integration of the reference standard, creating waves for each integration containing the relevant ratio and the associated beam seconds values (time since shutter open). Each of these ratio waves is smoothed slightly to remove outliers, which are replaced with a 9-point running median value.

B3.4.6. Page 7: “//special case of first waves…”

[133] To create the interactive window, the first waves created by the above loop need to be specifically referenced.

B3.4.7. Page 7: “Variable WindowLeft, WindowRight…”

[134] Parameters for the size and position of the window are defined. A conversion between the different display coordinates of Mac and Windows operating systems is also performed.

B3.4.8. Page 7: “Display/W=(WindowLeft…”

[135] The curve fit window is created, and waves are appended to it.

B3.4.9. Page 7: “variable LowestBeamSec…”

[136] Parameters are defined for a loop that runs through each integration of the standard to determine the smallest and largest beam seconds values recorded; these limits are used when creating an average wave to ensure that it spans the range of all integrations. The loop also appends each wave in turn to the window graph.

B3.4.10. Page 7: “variable TotalPointsInBeamSecs…”

[137] The total required points and time spacing of the average wave are determined.

B3.4.11. Page 7: “string AverageName = …”

[138] A number of strings are created for use in the loop below. The loop cycles through each beam seconds interval of the average wave, and for each point uses an inner loop to create an average of all of the individual integrations that are present. The loop is flexible, so it can accommodate the (very common) case where the user selects slightly different lengths of each analysis of the reference standard, or avoids spurious sections of an analysis. The built-in Igor Pro function “Wavestats” is used to calculate the mean and standard deviation of each time interval. If the uncertainty fails to calculate and returns NaN, an uncertainty equal to the average is used.

[139] By the time the loop is finished, an average wave spanning all individual integrations of the reference standard has been populated with the average of all individual analyses for each point in beam seconds. It is this wave that is used in curve fitting, calculation of residuals, etc.
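
A simplified Python analogue of this averaging is shown below (invented names; no attempt to reproduce Wavestats). Note the fallback, described above, of using the average itself as the uncertainty when a standard deviation cannot be computed.

    import numpy as np

    def average_pattern(analyses, grid_step=0.5):
        # analyses: list of (beam_seconds, ratio) array pairs, one pair per
        # selected integration; lengths may differ between integrations.
        lo = min(bs.min() for bs, _ in analyses)
        hi = max(bs.max() for bs, _ in analyses)
        grid = np.arange(lo, hi + grid_step, grid_step)
        mean = np.full(grid.size, np.nan)
        sd = np.full(grid.size, np.nan)
        for i, g in enumerate(grid):
            vals = np.concatenate([r[np.abs(bs - g) <= grid_step / 2.0]
                                   for bs, r in analyses])
            if vals.size > 1:
                mean[i], sd[i] = vals.mean(), vals.std(ddof=1)
            elif vals.size == 1:
                # Fallback: use the average itself as the uncertainty.
                mean[i], sd[i] = vals[0], vals[0]
        return grid, mean, sd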

B3.4.12. Page 8: “AppendToGraph/W=$WindowName…”

[140] The average wave is appended to the graph.

B3.4.13. Page 8: “string HistogramWaveName…”

[141] Waves to be used in evaluation of the quality of the fit are created, with names specific to the ratio being fitted.

B3.4.14. Page 8: “string AutoOrManual…”

[142] The curve fit has been made to allow manual or automatic modeling of fractionation. By default this option is set to automatic, but it can be changed to manual by the user.

[143] A test is done to see if this is the first time the curve fit has been conducted, and if so, a number of variables containing measures of the quality of the fit and other parameters are created. Parameters for the position of controls in the window are also defined.

B3.4.15. Page 8: “strswitch(ShortCurveFitType)…”

[144] Up to this point, everything in the function has been universal, but from now on the action of the function depends on whether manual or automatic curve fitting is selected, and on the type of curve fitting chosen.

[145] A strswitch function based on the fractionation model chosen is used.

[146] For each case most features are identical and consist of the following steps.

[147] 1. If the fit has not been done before, global variables holding the parameters of the fit are created (e.g., variable/g $ioliteDFpath("CurrentDRS","LEVarA_"+ratio)).

[148] 2. The global variables are referenced.

[149] 3. A separate DRS function called “FitToAverageWave(ratio, ShortCurveFitType, AutoOrManual),” which is described below, is called to conduct the actual curve fit.

[150] 4. A number of additions and changes are made to the position and state of controls in the window. These are specific to each fit type.

B3.4.16. Page 11: “GroupBox Masking title=“masking…”

[151] Additional controls common to all model types are added to the window; these include the evaluation of the quality of the fit in the bottom right of the window, and the masking controls.

[152] Following this, the MaskStartOrEnd(ratio, WindowName, "StartMask", Start_MaskForFit) DRS function (described below) is called; it draws gray boxes over the masked portions of the data and recalculates the curve fit if changes are made by the user.

[153] The remainder of the function appends the residuals wave to the bottom left of the graph, and makes a number of changes to the appearance of the window.

B3.5. Page 11: “Function FitToAverageWave(ratio…”

[154] This function automatically fits a model to the average wave (of all integrations of the reference standard) using the fractionation model selected by the user.

[155] The function begins by creating appropriate names and references for a number of items, based on the name of the ratio being used.

B3.5.1. Page 12: “StartPoint = BinarySearch…”

[156] The range of the average wave to be used in doing the modeling is bracketed by the start and end points here. These are calculated using the masking parameters set by the user.

B3.5.2. Page 12: “strswitch(ShortCurveFitType)…”

[157] This strswitch determines how the fitting is to be performed, based on the fractionation model chosen by the user. In each case, global variables holding the coefficients of the model are referenced. These globals are used as initial guesses for the model if possible, as well as being put into the window controls to display the values to the user, and finally they are used to correct downhole fractionation during data crunching.

B3.5.3. Page 12: “if(Variable_a == 0 …”

[158] This “if” statement tests whether previous values have been set for the global coefficient variables. If they have not then the model is fitted without using initial guesses. If they have been set (either by previous automatic modeling, or by manual adjustment by the user) then they are used as initial guesses for the curve fit. If the user is manually adjusting the model then no curve fit is done, but the coefficient variables are updated for use in the below step, and in correction of downhole fractionation.

B3.5.4. Page 12: “duplicate/O Average…”

[159] In order to display the fitted curve on the graph a duplicate of the average wave is made. The values of this wave are calculated using the coefficient values determined by either automatic or manual fitting. After the fitted curve is calculated, the residuals to the fit are calculated for each point by subtracting the fitted model from the average wave.

[160] Note that for the running median and smoothed cubic spline options coefficients are not used to calculate the fitted model. Instead, the original average wave is duplicated and smoothed/splined, the resulting wave is then used directly in correcting the effects of downhole fractionation. In addition, the smoothed cubic spline has been given the added functionality of being extended to higher and lower beam seconds values, meaning that it can potentially model downhole fractionation beyond the extent of the range of available reference standard data. Because all functions such as linear or exponential curve fitting have a simple equation, they can also be extrapolated to lower or higher values of beam seconds.

B3.5.5. Page 14: “duplicate/o Residuals, $IoliteDFPath …”

[161] The residuals wave is duplicated, and NaNs (NaN = not a number) are removed from the wave. The resulting wave can then be used to calculate the standard error and bias of the residuals to the fit.

B3.5.6. Page 14: “string HistogramWaveName…”

[162] Waves (with names specific to the ratio being fitted) are created for use in creating a histogram of the residuals. This histogram can then be used to visually assess whether the data are normally distributed.

B3.5.7. Page 14: “controlinfo/W = $WindowName FitGauss”

[163] This control toggles whether the user wishes to see an “ideal” bell curve for the residuals. This bell curve is plotted behind the data, and can be useful in determining whether the data are normally distributed. If the curve is selected the histogram wave is duplicated, and a Gaussian curve is fitted to the data.

B3.6. Page 15: “Function CheckboxFitUpdate(Controlstructure)”

[164] This function is related to the checkbox for “Show ideal gaussian curve.” It recomputes the curve fit and updates controls on the window to reflect the change.

B3.7. Page 15: “Function MaskStartOrEnd(ratio, WindowName, StartOrEnd, MaskValue)”

[165] This function is related to the “setvariable” controls that allow the user to choose how much data to mask at the start and end of the average wave. An “if” statement determines whether the control is related to the start or end mask and acts appropriately.

[166] In either case, the appropriate visible area on the graph to be masked is calculated, and a gray box is drawn (after first deleting any existing box already drawn).

B3.8. Page 16: “Function AutoManButton(buttonstructure)”

[167] This function is related to the auto and manual buttons that allow the user to choose between automatic or manual fitting of the curve to the average wave. The function activates or deactivates controls as appropriate for the selection and changes the color of the auto and manual buttons to reflect the change. Finally, the “FitToAverageWave” function is called using the new settings.

B3.9. Page 17: “Function DCFitWindowHook(infoStr)”

[168] This hook function detects when the window is deactivated and checks whether any changes to the fractionation model have been made. If changes have been made, the data are recalculated using a “partial data crunch.”

B3.10. Page 17: “Function ResetFitWindows()”

[169] This function is not available via the normal interface. If typed into the command line it resets the fractionation modeling (for the chosen model type only). This can be useful in diagnosing problems, or in an experiment that has become unstable.

B3.11. Page 17: “Function/T Propagate_UPb_Errors()”

[170] This function is called during data export; it propagates errors for each of the output waves using the “pseudostandard bracketing” approach described in the manuscript. In short, the function generates a wave containing calculated excess uncertainties for each of the output waves; these are then combined in quadrature with the internal uncertainty of each spot analysis in the export function (called “ExportFromActiveDRS”).

B3.11.1. Page 17: “string currentdatafolder = …”

[171] As in other functions, these lines reference the global variables and strings in the header of the DRS file for use in this function.

B3.11.2. Page 17: “string BackupMatrixName…”

[172] The following lines reference, then duplicate, the reference standard matrix after creating names for the duplicates. These duplicated copies are used in the remainder of the function as working copies of the matrix that can be altered during calculations. Note that the matrices are killed if they already exist; this avoids memory effects from previous executions of this function.

B3.11.3. Page 17: “variable NoOfStdIntegrations…”

[173] The number of integration periods (rows) in the reference standard matrix is determined. This is used later in loops, and in making waves that require one point per integration period.

[174] The list of output channels is also referenced, and the number of output channels is determined. This information is used in the below loop to cycle through each of the outputs in turn.

[175] Following this, a wave is formed to hold the calculated 1 standard error uncertainty for each output channel. It is this wave that is used in the export function to combine internal and excess uncertainties (in quadrature).

B3.11.4. Page 18: “if(NoOfStdIntegrations<6)”

[176] This “if” statement allows the function to add an additional factor to the calculated excess uncertainty if a small number of standard analyses is used. If fewer than six standard analyses are found, the normal estimation method is not used, and a conservative excess uncertainty is applied to each output. These values are also printed to the history area.

[177] If fewer than 15 standard analyses are found, excess uncertainties are calculated normally, but they are multiplied by a factor beginning at 2 (for six standard analyses) and decreasing to 1 (i.e., no change) for 15 or more standards.
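
A Python sketch of such a penalty factor is given below; note that the shape of the taper between 6 and 15 standards is not specified in the text, so a linear decrease is assumed here purely for illustration.

    def penalty_factor(n_standards):
        # Below 6 standards the normal estimation method is not used at all;
        # conservative fixed excess uncertainties apply instead.
        if n_standards < 6:
            raise ValueError("normal estimation not used below 6 standards")
        if n_standards >= 15:
            return 1.0
        # Assumed linear taper from 2 (at n = 6) to 1 (at n = 15).
        return 2.0 - (n_standards - 6) / 9.0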

B3.11.5. Page 18: “else//otherwise at least 6 standard…”

[178] Within this portion of the if-else-endif statement the excess uncertainty is estimated. Temporary waves are first made that will hold calculated values. Next, the index time wave is referenced so that the original data wave can be accessed. This is followed by the definition of variables required in the following loop.

[179] The loop cycles through each integration period of the reference standard matrix (i.e., each row of the matrix), removing each row in turn, and recalculating drift correction for only that time period using the remaining standards. At this point, no statistics are calculated, but the corrected ratio waves are altered for each integration period of the reference standard using the above independent drift correction.

[180] Note that baseline subtraction of beams does not need to be recalculated, as this is independent of the standards. Importantly, downhole fractionation also does not need to be recalculated, as it has been defined by the user, and cannot be automatically changed after the removal of a standard. Any variability in the accuracy of the downhole correction should be reflected by the corrected ratio of the standard, so this is acceptable.

B3.11.6. Page 18: “RecalculateIntegrations(“m_”+CalcdErrorsMatrixName…”

[181] Having now populated the data points of each integration period with independently drift corrected values (i.e., drift calculated without using that particular standard integration period), the Iolite function “RecalculateIntegrations” can be used to populate the “CalcdErrorsMatrix” with the statistics of each integration. This matrix now contains, in effect, a population of pseudosecondary standards that can be used to assess the degree of scatter expected in the unknowns. To put it differently, the variability of this matrix includes all contributions of uncertainty to the analyses, and can thus be used to assess the total variability of the method. Given that the internal precision of each analysis is known, this information can then be used to calculate the excess uncertainty required to explain the total scatter of the data.
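Stated quantitatively (assuming the conventional definition of the MSWD), the excess uncertainty $\sigma_{excess}$ is the value satisfying

$$\mathrm{MSWD} = \frac{1}{n-1}\sum_{i=1}^{n}\frac{(r_i - \bar{r})^2}{\sigma_i^2 + \sigma_{excess}^2} \approx 1,$$

where the $r_i$ are the $n$ independently drift corrected standard ratios, the $\sigma_i$ their 1 standard error internal uncertainties, and $\bar{r}$ their mean.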

B3.11.7. Page 18: “wave AllStdInteg_Values = $MakeioliteWave…”

[182] Temporary waves are created to hold intermediate values for the estimation of external uncertainties, and variables are defined for use in the loops below.

[183] The outer loop steps through each output channel in turn, populates the temporary waves with the appropriate results from the loop above, then initializes variables for the inner loop. The average value of all integration periods and the minimum internal error of the group are also calculated.

[184] The inner loop calculates the MSWD for the group of “pseudosecondary standards,” then iteratively adds an excess uncertainty to the internal errors (in quadrature, using 1 standard error in each case) until the MSWD is within 0.2% of 1. After breaking the loop, the resulting excess uncertainty is stored, normalized to the average value; it is necessary to convert the excess uncertainty to a relative number because it may be applied to ratios of very different magnitudes. This iteration is sketched below.
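A minimal Igor Pro sketch of this iteration for a single output channel; the wave names and the step size of the search are illustrative:

	Function FindExcessError(vals, errs)
		Wave vals	// independently drift corrected pseudosecondary standard ratios
		Wave errs	// their 1 standard error internal uncertainties
		Variable n = numpnts(vals)
		Variable avg = mean(vals)
		Variable excess = 0	// absolute 1 standard error excess uncertainty
		Variable step = WaveMin(errs) / 10	// assumed increment for the search
		Variable MSWD, i
		do
			MSWD = 0
			for(i = 0; i < n; i += 1)
				MSWD += (vals[i] - avg)^2 / (errs[i]^2 + excess^2)
			endfor
			MSWD /= (n - 1)
			if(MSWD < 1 + 0.002)	// within 0.2% of 1 (or underdispersed): stop
				break
			endif
			excess += step
		while(1)
		return excess / avg	// stored as a relative uncertainty
	End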

[185] After execution of the inner loop, a “penalty factor” is applied if fewer than 15 standard integration periods were used (as described above), and the resulting calculated excess error is printed to the history window.

[186] Finally, a reference to the wave holding the results of this function is passed back to the calling function (the export function of the DRS).

B3.12. Page 19: “Function ErrorCorrelation(Output_DataTable”

[187] This function is provided with two columns of the output table on which to perform error correlations; it calculates an error correlation for each row and places the result in the specified location, with the specified name. The function is general enough that it can easily be called for other error correlation calculations. In the U-Th-Pb DRS it is used to calculate error correlations for the ratios most commonly used by Isoplot.

[188] The function first checks whether the provided columns exist in the output table, then defines variables for use in the loop below. It also inserts a new column in the output table and gives it the label provided.

[189] The loop cycles through each row of the output table in turn. First, a check is performed to make sure that both columns contain values for that row. The start and end times are then extracted from the output table. These times are converted into points in the relevant output wave, and the range bracketed by these start and end points is duplicated for each of the ratios to be correlated. These duplicated portions of the ratio waves, representing all data points within the integration period, are then used to calculate the correlation between the two ratios using the Igor Pro function “StatsCorrelation.” Because the ratios are assumed to be homogeneous over the integration period, any scatter in the data points is due to noise in the analysis; thus, any correlation in this noise between the ratios indicates a correlation in their uncertainties. This step is sketched below.
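A hedged sketch of this step for a single integration period; the function and wave names are illustrative, while “StatsCorrelation” is the Igor Pro function named above:

	Function IntegrationErrorCorrelation(RatioWaveA, RatioWaveB, startP, endP)
		Wave RatioWaveA, RatioWaveB
		Variable startP, endP	// points bracketing the integration period
		// duplicate only the data points within the integration period
		Duplicate/FREE/R=[startP, endP] RatioWaveA, segA
		Duplicate/FREE/R=[startP, endP] RatioWaveB, segB
		// Pearson's correlation of the shared noise in the two ratio segments
		return StatsCorrelation(segA, segB)
	End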

B3.12.1. Page 19: “Function ExportFromActiveDRS…”

[190] This function is common to all DRS routines. It intercepts the export of data by Iolite, giving the DRS an opportunity to alter portions of the export data table before it is saved. In the U-Th-Pb DRS it makes two alterations to the data. First, it calls the function to calculate excess uncertainties (“$Propagate_UPb_Errors()”), then propagates these with internal errors for each integration period. It then calls “ErrorCorrelation(…” to insert error correlations into the output data table. It also generates inverted ratios for use in Isoplot.

B3.12.2. Page 20: “wave ExcessErrorsWave = …”

[191] The function for estimation of excess errors is called. Following this, a number of variables are defined for use in the loops below.

[192] The outer loop cycles through each output channel in turn, and combines the internal errors with the calculated excess external errors in quadrature.

[193] The inner loop cycles through each integration period (row) in turn, combining the errors in quadrature. This loop is required so that formatted text with more than five significant figures can be used. Note that 1 and not 2 standard errors are used when combining uncertainties in quadrature; these are expanded back to 2 standard errors when placed in the output table, as sketched below.
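A minimal sketch of this combination for a single value; the names are illustrative:

	Function CombineErrors(val, internal2SE, excessRel)
		Variable val	// the ratio value for this integration period
		Variable internal2SE	// its internal error, at 2 standard errors
		Variable excessRel	// relative 1 standard error excess for this channel
		Variable internal1SE = internal2SE / 2	// work at 1 standard error
		Variable excess1SE = excessRel * val	// convert relative excess to absolute
		Variable total1SE = sqrt(internal1SE^2 + excess1SE^2)
		return 2 * total1SE	// expand back to 2 standard errors for the output table
	End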

B3.12.3. Page 20: “variable ColumnBeforeInsert…”

[194] A column is inserted to hold the inverted 206/238 ratio for use in the inverse format of Isoplot. Because Igor Pro's num2str function limits precision to five significant figures, a loop is used to invert each ratio and convert its uncertainty, then place these values back into the output table using “sprintf”, as sketched below.
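A sketch of the inversion step, assuming first-order error propagation (under which relative uncertainty is preserved on inversion); the numerical values are placeholders only:

	Function InvertRatioDemo()
		Variable ratio206_238 = 0.09, err2SE = 0.0005	// placeholder values only
		Variable inverted = 1 / ratio206_238
		// relative uncertainty is unchanged by inversion (first-order approximation)
		Variable invertedErr = inverted * (err2SE / ratio206_238)
		String valStr
		sprintf valStr, "%.10g", inverted	// retains more than 5 significant figures
		print valStr, invertedErr
	End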

B3.12.4. Page 20: “duplicate/O $ioliteDFpath…”

[195] To calculate the error correlation using the inverse ratio an appropriate output wave needs to be made. This is done here.

B3.12.5. Page 20: “ErrorCorrelation(Output_DataTable,…”

[196] The error correlation function is called to calculate error correlations for the two pairs of ratios commonly used by Isoplot. The function also inserts the results into the output table.

B3.13. Page 20: “Function AutoBaselines(buttonstructure)”

[197] This function can be used in any Iolite DRS. It allows the user to click the orange buttons at the top left of the Traces Window; these buttons set up the waves to be viewed, and their scaling, as specified in this function. This instance is for the baselines button and has been configured for viewing the baselines most commonly required for U-Th-Pb analyses, using scaling appropriate to the Varian. The user is free to change these default settings if desired.

B3.14. Page 21: “Function AutoIntermediates(buttonstructure)”

[198] This function is nearly identical to the above, but is designed for viewing raw ratios of the 91500 zircon standard. Its default settings are also completely editable by the user.


Acknowledgments

[199] We gratefully acknowledge Balz Kamber and Sebastian Meffre for the provision of U-Pb data from their laboratories at Laurentian University and CODES (University of Tasmania), respectively. We thank George Gehrels, Jan Kosler, Jeff Vervoort, and an anonymous reviewer for their feedback, which greatly improved the manuscript. Thanks also to Roger Powell for initial advice regarding error propagation.