Elemental fractionation effects during analysis are the most significant impediment to obtaining precise and accurate U-Pb ages by laser ablation ICPMS. Several methods have been proposed to minimize the degree of downhole fractionation, typically by rastering or limiting acquisition to relatively short intervals of time, but these compromise minimum target size or the temporal resolution of data. Alternatively, other methods have been developed which attempt to correct for the effects of downhole elemental fractionation. A common feature of all these techniques, however, is that they impose an expected model of elemental fractionation behavior; thus, any variance in actual fractionation response between laboratories, mineral types, or matrix types cannot be easily accommodated. Here we investigate an alternate approach that aims to reverse the problem by first observing the elemental fractionation response and then applying an appropriate (and often unique) model to the data. This approach has the versatility to treat data from any laboratory, regardless of the expression of downhole fractionation under any given set of analytical conditions. We demonstrate that the use of more complex models of elemental fractionation such as exponential curves and smoothed cubic splines can efficiently correct complex fractionation trends, allowing detection of spatial heterogeneities, while simultaneously maintaining data quality. We present a data reduction module for use with the Iolite software package that implements this methodology and which may provide the means for simpler interlaboratory comparisons and, perhaps most importantly, enable the rapid reduction of large quantities of data with maximum feedback to the user at each stage.
 U-(Th)-Pb zircon geochronology by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) has seen a dramatic rise in popularity in the last decade [Kosler and Sylvester, 2003]. While the precision of the method cannot rival that of the benchmark isotope dilution thermal ionization mass spectrometry (ID-TIMS), and its spatial resolution cannot easily compete with less destructive secondary ion mass spectrometry (SIMS) techniques, for many applications these drawbacks are outweighed by the low cost and rapid sample throughput achieved by LA-ICPMS, and many research facilities now have in-house laser ablation capabilities.
 A natural consequence of this rapid proliferation, combined with the relative immaturity of the technique, is an immense diversity in both the instrumentation and methodologies employed by individual facilities, producing significant differences in observed downhole elemental fractionation behavior—arguably the single largest influence on the accuracy and precision of the method [e.g., Horn et al., 2000; Jackson et al., 2004; Kosler et al., 2001]. This has resulted in a large number of in-house data reduction methods, with no firm consensus on how the correction for downhole fractionation should best be performed. There is also little transparency in other aspects of data reduction, notably propagation of uncertainties, making interlaboratory comparisons of data sets difficult.
 All LA-ICPMS systems inherently produce some degree of elemental- and/or mass-related discrimination during sampling. These so-called “instrumental” biases are generally observed to drift over the course of days or weeks, but may also vary detectably within a single analytical session. In addition to these relatively long term effects, elemental biases occur during laser ablation which vary on a short time scale as the ablation pit deepens. In U-Th-Pb dating, this “downhole fractionation” produces readily resolvable changes in measured Pb/U and Pb/Th ratios, and hence apparent age of the ablation target. Numerous studies have investigated the underlying causes of such behavior [Eggins et al., 1998; Hergenröder, 2006a, 2006b; Kosler et al., 2005], but it remains unclear which of many possible processes dominate, and how they vary with time. What is clear is that the interaction between these is complex, and that variables such as laser wavelength, the aspect ratio of the laser pit, choice of carrier gas, and gas flow create a myriad of potential trends in elemental fractionation.
 Although the use of rastering (which involves continuously traversing the laser across the sample surface) can effectively eliminate variations in elemental fractionation [e.g., Horstwood et al., 2003], this approach does limit the minimum possible target surface area (i.e., the 2-D spatial resolution) to several times larger than the laser spot diameter. Similarly, although fractionation can be minimized by employing short ablation times (e.g., 10 s), this approach limits the likelihood of detecting Pb loss, age zoning, etc., while also compromising data quality. For this reason, a number of researchers have proposed methods that attempt to correct for downhole fractionation effects.
 The most popular approach is the modeling of elemental fractionation using a linear regression, in which the slope of the fractionation versus time trend is either calculated using an empirical relationship to spot size and hole depth [Horn et al., 2000], or determined by fitting separate gradients to each sample analysis [Kosler et al., 2002a]. Each of the above methods requires assumptions, but the single largest drawback to their applicability is the fact that elemental fractionation is not always linear. As such, any application of the approach to nonlinear fractionation trends will produce inaccurate results. Although a correction method using the average of the standard over an interval identical to that of the sample can be employed [Jackson et al., 2004], this approach sacrifices temporal resolution (i.e., the time-resolved aspect of laser ablation data), making it more difficult to detect either regions of compromised data or true age zonation.
 This paper investigates new methods for the correction of laser-induced elemental fractionation, with the aim of establishing protocols that retain as much 2-D (ablated area) and depth (the time-resolved aspect of corrected data) resolution as possible, while still retaining sufficient flexibility to process data from any laboratory, exhibiting any degree of complexity. We also address the propagation of uncertainties during the correction procedure.
2. Analytical Methods
 All analyses were conducted at the School of Earth Sciences, University of Melbourne, employing a prototype of the Varian 810 quadrupole ICPMS coupled to a HelEx laser ablation system that utilizes a 193 nm ArF excimer laser. The laser was operated with an output energy of ∼70 mJ per pulse, providing an estimated energy density on the sample of <5 J cm−2. A full list of instrumental parameters is included in Table 1. For full details of the HelEx ablation system we refer the reader to Woodhead et al. and Eggins et al., although we would note here the following key characteristics of the system:
Table 1. Key Instrument Parameters and Operating Conditions for Laser Ablation U-(Th)-Pb Analysis

HelEx ablation system
  Laser: Lambda Physik Compex 110 ArF excimer
  Energy density on sample: <5 J cm−2
  Spot diameter: 19 to 71 μm
  Helium gas flow rate: 0.25 l min−1
  Argon gas flow rate: 1.06 l min−1
  Effective cell volume

Varian 810 Prototype ICPMS
  Sheath gas flow rate: 0.26 l min−1
  Auxiliary gas flow rate: 1.85 l min−1
  Cooling gas flow rate: 17.5 l min−1
  Points per peak
  Total duration of one mass cycle: 0.11 s (9 Hz)
 1. Although the laser cell is large in volume, ablation occurs within a nested microcell with a volume of ∼2 cm3, resulting in high temporal resolution with minimal memory effects (see Woodhead et al. [2007, Figure 1] for an illustration).
 2. The laser optics produce well-defined ablation pits with near-vertical walls [Eggins et al., 1998] and an even energy distribution across the spot.
 3. Ablation occurs in a stratified combination of helium beneath argon within the microcell (compared to mixing of argon and helium downstream of the cell in many other systems). The helium minimizes redeposition of ejecta/condensates, while argon provides efficient sample transport to the ICPMS.
 All data reduction was conducted off-line using the freely distributed Iolite data reduction package which runs within the Wavemetrics Igor Pro data analysis software; the reader is referred to Hellstrom et al.  and the Iolite website (http://iolite.earthsci.unimelb.edu.au/) for further details. Backgrounds were measured prior to each ablation with the laser shutter closed and employing identical settings and gas flows to those used during ablation. Data were acquired in batches approximately 1 h in duration, consisting of multiple groups of 5 to 15 sample unknowns bracketed by pairs or triplets of primary and secondary zircon standards. Background intensities were interpolated using a smoothed cubic spline, as were changes in instrumental bias (modeled using downhole fractionation corrected ratios of the zircon standard analyses). Elapsed time since the beginning of sample ablation was used as a proxy for hole depth, with laser-on events calculated by Iolite using an algorithm based on the rate of change in signal intensity, an approach that we have found to be highly reproducible. For further details of Iolite and the U-Th-Pb data reduction scheme we refer the reader to Hellstrom et al.  and Appendix B, respectively.
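Iolite's own laser-on detection algorithm is not reproduced here, but its general idea (locating the onset of ablation from the rate of change of signal intensity) can be sketched as follows. All function names and thresholds below are our own illustrative assumptions, not part of Iolite.

```python
import statistics

def detect_laser_on(counts, n_baseline=5, factor=5.0):
    """Return the index of the first time slice whose jump in intensity
    exceeds `factor` times the baseline noise. A minimal rate-of-change
    detector; the algorithm implemented in Iolite is more sophisticated."""
    baseline = counts[:n_baseline]
    noise = statistics.pstdev(baseline) or 1.0  # guard against zero noise
    for i in range(len(counts) - 1):
        if counts[i + 1] - counts[i] > factor * noise:
            return i + 1
    return None
```

Because the onset of ablation produces a signal jump orders of magnitude above baseline scatter, even a simple detector of this kind is highly reproducible, which is the property exploited when elapsed time since laser-on is used as a proxy for hole depth.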
 For each selected time period (e.g., 50 s of data from a spot analysis) the mean and standard error of the measured ratios were calculated, using no outlier rejection for baselines, and a 2 standard error outlier rejection for all other data. All uncertainties are quoted at the 2 sigma level.
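The statistics above can be illustrated with a short sketch. Our reading of the outlier rejection (iteratively excluding points beyond two standard deviations of the current mean, then quoting the mean with a 2 standard error uncertainty) is an assumption; the function name is ours.

```python
import statistics

def mean_with_rejection(values, reject_sigma=2.0):
    """Mean and 2 s.e. uncertainty of a selected time period, with
    iterative outlier rejection of points beyond `reject_sigma`
    standard deviations from the current mean."""
    vals = list(values)
    while len(vals) > 2:
        m = statistics.mean(vals)
        sd = statistics.stdev(vals)
        kept = [v for v in vals if abs(v - m) <= reject_sigma * sd]
        if len(kept) == len(vals):
            break  # converged: no further points rejected
        vals = kept
    mean = statistics.mean(vals)
    two_se = 2.0 * statistics.stdev(vals) / len(vals) ** 0.5
    return mean, two_se
```

For baselines no rejection is applied, so a plain mean and standard error suffice there.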
 All analyzed zircons were mounted in epoxy resin blocks and polished to a 1 μm finish. Each mount was cleaned ultrasonically in ultrapure water after polishing, then cleaned again prior to analysis using AR grade methanol. Prior to each individual analysis in any batch, regions of interest were preablated using several pulses of the laser (this equates to ∼0.2 μm in depth) to remove potential surface contamination, a method we have found to dramatically reduce common Pb contamination that would otherwise affect the first few seconds of analyses. 235U was calculated from 238U using a 238U/235U ratio of 137.88 [Jaffey et al., 1971].
 Analyses were routinely examined for spurious data caused by surface Pb contamination, cracks or fractures containing contaminants, areas of Pb loss, etc. This interrogation was achieved using a combination of baseline-subtracted intensities of individual isotopes, raw ratios and corrected ratios. Useful data could not be obtained for 204Pb due to large isobaric Hg interferences derived from the carrier gases.
3. Observations of Downhole Elemental Fractionation
 Downhole elemental fractionation is the change in the measured ratios between different elements, and occurs during laser ablation as the hole created by the laser deepens. In general, the signal intensities of refractory elements decrease more rapidly than volatile elements [Longerich et al., 1996], although changes with time can be complex, and other factors also influence fractionation [Eggins et al., 1998]. It has been observed that downhole fractionation does not correlate well with mass, and is better predicted by chemical characteristics such as whether an element is chalcophile or lithophile [Longerich et al., 1996].
 The expressions of the downhole fractionation effect vary with parameters such as laser wavelength [Jackson et al., 2004], spot size (Figure 1) [Horn et al., 2000], cell volume, gas flows, and choice of ablation gas (Figure 1) to name but a few. Nevertheless, it is reasonable to assume that the underlying processes contributing to the phenomenon are common to all systems, and that variables such as those noted above simply affect the degree to which each underlying process influences results, and the time during an analysis at which it is most influential. A large number of studies have examined these underlying causes [e.g., Eggins et al., 1998; Hergenröder, 2006a, 2006b; Kosler et al., 2002b, 2005; Kroslakova and Günther, 2007; Longerich et al., 1996], and we do not intend to reiterate them here. However, we would emphasize that these studies have universally demonstrated that downhole fractionation is the result of complex interactions between multiple processes. As such, any attempt to model or predict fractionation should be undertaken with caution, and the validity of any model employed should be carefully tested.
 Downhole fractionation typically causes an increase in observed Pb/U ratios with depth [Jackson et al., 2004; Kosler and Sylvester, 2003; Tiepolo, 2003], with minor changes in Pb/Th ratios. Although methods used to correct for these effects in U-(Th)-Pb geochronology typically assume that the changes with time are linear, this is not always the case. For example, Figure 1 illustrates that although 206Pb/238U can vary approximately linearly with time (i.e., depth) in some cases (Figures 1a and 1b), curved fractionation patterns often occur to varying degrees (Figures 1c and 1d). Indeed, such nonlinear trends are characteristic of our own analytical system, prompting this study. Furthermore, the steepness and absolute position of the data arrays can also vary depending on the spot size (e.g., Figure 1a) and the carrier gas employed (Figure 1b). Plots illustrating variations in fractionation pattern with spot size (Figure 2) on our own instrument not only further demonstrate that curved trends occur, but that the overall pattern can vary significantly, and is strongly influenced by the aspect ratio of the pit produced. All ratios show a rapid early increase, but the degree of subsequent curvature after ∼10 s results in an overall exponential pattern for 25 and 42 μm spot sizes, and actually produces a negative response in values of the 71 μm spot after ∼40 s. As noted above, the washout/response time of our ablation system is very rapid and the fact that we observe these phenomena so clearly has led us to believe that complex fractionation patterns of this type are the norm but may be masked to some extent in systems with slow ablation cell response times (perhaps producing pseudolinear trends).
 To understand the fractionation patterns of elemental ratios, it is worth first examining the (baseline subtracted) signal intensities used in ratio calculation. Figure 3 illustrates a combination of data from 6 ablations of the 91500 zircon standard, each of 60 s duration. The original data (pale colors) were combined to produce an average for each time segment (black line). Figures 3a–3c show the baseline subtracted beam intensities of U, Th and Pb and indicate that, contrary to what might be expected, the decay in signal is not simply exponential (red line). Instead, more complex patterns are observed, and significant differences in response are apparent between elements. For each element, signals after ∼25 s appear to decay exponentially, but prior to this each maintains a steadier intensity than the exponential curve, suggesting the operation of multiple superimposed phenomena (e.g., gas flows, condensation conditions). This effect is most apparent for Pb and Th, which can deviate from the fitted exponential curve by over 10%, whereas the effect is more subtle for U (at least in our system). Correspondingly, the overall decrease in signal intensity is high for U (approximately 55%), in comparison to a ∼45% decrease for Th and Pb.
 Based on these observations, it is not surprising that the resulting 206Pb/238U ratio (Figure 3d) varies substantially, and in this case appears to closely follow an exponential curve (in red). The same test conducted with 5 ablations using a 71 μm spot (Figure 3f), however, shows a more complex response that cannot be modeled by a simple exponential curve. Interestingly, although signal intensities of both Th and Pb (Figures 3b and 3c, respectively) are relatively complex at 42 μm, the resulting 208Pb/232Th ratio (Figure 3e) is relatively stable over 60 s, decreasing steadily with time. This suggests that fractionation effects are well synchronized between these elements in both timing and magnitude, despite their individually complex signal behavior.
4. Existing Methods for the Correction of Downhole Elemental Fractionation and Their Limitations
 Methods for the correction of downhole elemental fractionation can be subdivided according to whether standard sample bracketing is employed in modeling downhole fractionation patterns. Those that use standard sample bracketing assume that standards and unknowns will behave identically during ablation, and that the characteristics of downhole fractionation in the standard can be used to model its effects in unknowns, whereas other methods are independent of this assumption. Each of these categories is considered separately below. Note that this subdivision refers specifically to treatment of downhole fractionation, and not to the correction of long-term instrumental bias, which may also be corrected using standard sample bracketing.
4.1. Methods Without Standard Sample Bracketing
Horn et al.  detailed an empirical method for the correction of fractionation effects, based on the observation that elemental ratios have a linear relationship to hole depth (for a given laser spot diameter and energy density (Figure 1a)). Because these observations were highly reproducible using their system, they generated an empirical formula describing the relationship between pit diameter and the slope of the fractionation trend for a given energy density. Using this formula they individually corrected each time slice of the data for downhole fractionation (Figure 4a), then calculated the mean and standard deviation of the ratio from these corrected data points. The effects of instrumental bias were corrected separately using simultaneous nebulization of a Tl/U tracer solution. Although this approach has the potential to accurately correct downhole fractionation, it relies on the stability of fractionation patterns between analytical sessions, and thus requires highly reproducible operating conditions.
 To avoid these constraints, Kosler et al. [2002a] proposed a method that does not require all analyses to have the same fractionation pattern. Instead, the data for each laser pit are treated separately, and are corrected using a least squares linear fit (Figure 4b) of the elemental ratio over time. Using this approach, the derived y intercept and its uncertainty provide the corrected ratio and its precision, respectively. Like Horn et al. , instrumental bias was corrected separately via simultaneous nebulization of a Tl/U tracer solution, although other studies have demonstrated that instrumental bias can be accounted for by normalization to standard zircons analyzed in the same analytical session [Chang et al., 2006; Gehrels et al., 2008], or by a combination of these two methods [Klotzli et al., 2009]. The method allows for differences in elemental fractionation behavior between analytical sessions, or between individual analyses due, for example, to matrix-related effects. However, this flexibility also means that it may not distinguish between some cases of real sample variability (e.g., gradual transition into a growth zone of different age) and the effects of fractionation. In addition, the lack of time-resolved corrected ratios can make it difficult to detect the effects of fractures, inclusions, etc. on results.
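The intercept approach amounts to an ordinary least-squares fit whose y intercept (and its standard error) gives the corrected ratio and its precision. The following is an illustrative stand-in for such a fit, not the published implementation; the function name is ours.

```python
def intercept_correct(t, ratio):
    """Fit ratio = a + b*t by ordinary least squares; return the
    y intercept a and its standard error, i.e., the fractionation-
    corrected ratio and its precision in an intercept-style method."""
    n = len(t)
    tbar = sum(t) / n
    rbar = sum(ratio) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    sxy = sum((ti - tbar) * (ri - rbar) for ti, ri in zip(t, ratio))
    slope = sxy / sxx
    a = rbar - slope * tbar
    # residual variance propagated into the intercept uncertainty
    resid2 = sum((ri - (a + slope * ti)) ** 2 for ti, ri in zip(t, ratio))
    s2 = resid2 / (n - 2)
    se_a = (s2 * (1.0 / n + tbar ** 2 / sxx)) ** 0.5
    return a, se_a
```

Note how any curvature in the true fractionation trend is absorbed into the residuals here, biasing the intercept: this is the nonlinearity problem discussed above.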
4.2. Methods Employing Standard Sample Bracketing
 The most common method of standard sample bracketing is to correct the sample analysis using a corresponding time interval (for example, relative to “laser on”) from a neighboring standard analysis, or the pooled data of multiple standards [e.g., Jackson et al., 2004; Van Achterberg et al., 2001]. In this way, the effects of downhole fractionation are accounted for (Figure 4c), without any requirement to observe or model fractionation behavior. However, because this method takes the average ratio of a time interval in the standard(s) to correct unknowns, the method often reduces the temporal resolution of the data and thus also reduces feedback regarding the validity of the correction, lowering the probability of detecting heterogeneities such as growth zoning, fractures, or inclusions.
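The interval-average form of standard sample bracketing reduces to a single ratio of means, as in the minimal sketch below (names and calling convention are our own assumptions).

```python
def bracket_correct(sample_slices, standard_slices, standard_true):
    """Correct a sample's mean ratio using the mean of the identical
    time interval (relative to laser-on) in a bracketing standard.
    Downhole fractionation cancels because both averages span the
    same depth interval."""
    n = min(len(sample_slices), len(standard_slices))
    sample_mean = sum(sample_slices[:n]) / n
    standard_mean = sum(standard_slices[:n]) / n
    bias = standard_mean / standard_true  # combined bias factor
    return sample_mean / bias
```

Because only interval means survive, the time-resolved structure of the corrected data is lost, which is the drawback noted above.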
 A similar alternative is the “total counts” approach, in which all ions measured during an ablation are treated together [Johnston et al., 2009]. Although this approach, which was developed specifically for small-volume sampling, is capable of modeling nonlinear fractionation patterns it does assume that fractionation is identical in standards and unknowns, and cannot generate time-resolved corrected ratios. As such, it may be difficult to assess the validity of the correction used, or to detect downhole age variability, inclusions, etc.
 Each of the methods outlined above clearly have positive and negative aspects, but they generally rely upon linear variations in elemental ratios with time, which as illustrated above (Figures 2 and 3) may not always be the case. So how can we best retain spatial and temporal resolution, while accommodating more complex fractionation patterns?
5. A New Approach to the Correction of Downhole Fractionation
 Philosophically, we believe this can best be achieved by moving away from the tendency of existing methods to rely on fitting data to a presupposed model of fractionation, and instead to first observe the effects of downhole fractionation, then fit an appropriate model to the data, whatever form it may take. To do this, it is not necessary to understand the causes of elemental fractionation, which are clearly both multiple and complex, but only to model their combined effects on elemental ratios. In taking this approach, we implement a standard sample bracketing methodology, using observations of the group of standard analyses within an analytical session to model the fractionation pattern in unknowns. Although this method is free from the extreme reliance on stability of the empirical approach of Horn et al. , it does require careful verification of the assumption that standards and unknowns behave identically. Evidence for the validity of this assumption is provided in Appendix A. We do not employ the simultaneous nebulization of a Tl/U tracer solution, and as such do not discriminate between instrumental bias generated within the mass spectrometer and the laser sampling system. We correct for instrumental bias by normalization to standard zircons analyzed in the same analytical session. Like Gehrels et al. , we use all standard analyses of the session to determine variations in the degree of instrumental bias. We model this variability using a smoothed cubic spline, although a number of other options are available in Iolite (e.g., linear interpolation, average of the session).
 As a starting point for this study, and as a result of our observations of curved fractionation trends under a range of different analytical conditions (Figures 1c, 1d, and 2d), we employ a model fitting an exponential curve (with the equation y = a + b·exp(−cx)) to the changes in elemental ratios with time. To evaluate this model we chose an analytical session of approximately 1 h duration, containing a number of 42 μm diameter spot analyses each of ∼50 s length. Nine analyses of the 91500 zircon standard spaced throughout the run were included for standard sample bracketing purposes. To test the efficacy of an exponential curve fit we combined all standard analyses for the session to produce an average 206Pb/238U ratio versus ablation time plot (Figure 5a). This average is more representative of the effects of fractionation than any single standard and has the additional benefits of reducing scatter in the pattern, and thus allowing the calculation of an uncertainty for each time slice of the average. Because we are only interested in the relative change in the ratio with ablation depth, longer-term instrumental drift does not affect the result, and can be corrected separately after downhole fractionation correction (provided, of course, that no drift in the pattern of downhole fractionation occurs). The exponential equation was fit to this average pattern using Igor Pro's built-in curve fitting function, which incorporates calculated uncertainties on each time slice of the average, and iteratively produces a fit that minimizes chi-square using the Levenberg-Marquardt algorithm. Figure 5b, which illustrates 206Pb/238U ratios of the 91500 zircon after correction using the exponential equation derived in Figure 5a, demonstrates the effectiveness of the model, with corrected ratios showing no observable variability with ablation time.
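The exponential model can be illustrated with a short sketch. Iolite uses Igor Pro's Levenberg-Marquardt fitter; purely to keep the example dependency-free, this stand-in instead grid-searches the decay constant c, solving for a and b by linear least squares at each trial c. All names here are our own.

```python
import math

def fit_exponential(t, y, c_grid=None):
    """Fit y = a + b*exp(-c*t). For each trial c the optimal a and b
    follow from linear least squares; the c minimizing the residual
    sum of squares wins. An illustrative stand-in for a
    Levenberg-Marquardt fit."""
    if c_grid is None:
        c_grid = [0.005 * i for i in range(1, 200)]
    best = None
    n = len(t)
    for c in c_grid:
        e = [math.exp(-c * ti) for ti in t]
        se, see = sum(e), sum(v * v for v in e)
        sy = sum(y)
        sey = sum(ei * yi for ei, yi in zip(e, y))
        det = n * see - se * se
        if abs(det) < 1e-12:
            continue
        b = (n * sey - se * sy) / det
        a = (sy - b * se) / n
        ss = sum((yi - (a + b * ei)) ** 2 for yi, ei in zip(y, e))
        if best is None or ss < best[0]:
            best = (ss, a, b, c)
    _, a, b, c = best
    return a, b, c

def correct_downhole_exp(t, ratio, a, b, c):
    """Divide each time slice by the model, normalized to laser-on,
    so that corrected ratios no longer vary with ablation time."""
    model0 = a + b
    return [r * model0 / (a + b * math.exp(-c * ti)) for ti, r in zip(t, ratio)]
```

Fitting the averaged standard pattern, then dividing every analysis (standard or unknown) by the normalized model, flattens the downhole trend while leaving longer-term instrumental drift to be corrected separately.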
 In order to objectively test the applicability of the fractionation model, three Temora-2 zircon grains analyzed in the same session were used as secondary standards. Again, 206Pb/238U ratios corrected using the same model exhibit no discernable variation with time (Figure 5c), and the weighted average of the ratios (corrected for instrumental drift using the 91500 zircon analyses) of 0.0666 ± 0.0026 is indistinguishable from the “true” 206Pb/238U ratio of ∼0.0668 [Black et al., 2004]. A detailed comparison of the effectiveness of linear and exponential models, including tabulated data, is provided in Appendix A.
 The success of a model incorporating an exponential curve fit is encouraging, but there is no a priori reason for downhole fractionation to follow a “simple” pattern (such as linear, or exponential, changes with ablation time). Indeed, the data in Figure 2d and Figure 3f cannot be satisfactorily modeled using an exponential curve, and are excellent examples of cases that would benefit from a more versatile approach. To this end, we extended the above method by using a smoothed cubic spline fit, which should be capable of reproducing any observed downhole fractionation trend. The approach is essentially the same as that described above, but instead of an exponential curve a smoothed cubic spline was fit to the data using a built-in Igor Pro function called “Interpolate2.”
 Although long ablation times are uncommon in routine U-Pb zircon dating by laser ablation, since drill rates are generally relatively fast, we employed 55 μm diameter spot analyses of 2 min duration in order to exhaustively test the potential of this approach. As with the exponential curve modeling described above, pairs or triplets of the 91500 zircon standard were spaced evenly throughout the experiment, and data from these multiple analyses were combined to produce an average pattern of downhole fractionation (Figure 6a) for the session. Clearly, changes in fractionation pattern with ablation time are complex, and simple models employing a linear or exponential fit would be incapable of effectively modeling these variations. To produce a more appropriate model, a smoothed cubic spline was calculated from the average pattern of the standards. This spline accurately models the fractionation response observed in the standard analyses and, when used to correct fractionation in the standards (Figure 6b), produces corrected ratios that do not vary with ablation time (in contrast, if a simple linear model is employed the average of the standards varies by up to 8% with ablation time).
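The observe-then-model workflow above can be sketched as follows: stack the bracketing standard ablations into a per-time-slice average, smooth it, and divide every analysis by the normalized result. A boxcar smooth stands in here for Igor Pro's "Interpolate2" smoothed cubic spline purely to keep the example self-contained; all names are ours.

```python
def average_pattern(standard_runs):
    """Average multiple standard ablations slice by slice to build the
    session's downhole fractionation pattern."""
    n = min(len(run) for run in standard_runs)
    return [sum(run[i] for run in standard_runs) / len(standard_runs)
            for i in range(n)]

def smooth(pattern, window=5):
    """Light boxcar smoothing (an illustrative stand-in for the
    smoothed cubic spline used in Iolite)."""
    half = window // 2
    out = []
    for i in range(len(pattern)):
        lo, hi = max(0, i - half), min(len(pattern), i + half + 1)
        out.append(sum(pattern[lo:hi]) / (hi - lo))
    return out

def correct_with_model(sample, model):
    """Divide each sample time slice by the model normalized to
    laser-on; a sample following the standard's pattern comes out flat."""
    return [s * model[0] / m for s, m in zip(sample, model)]
```

Because the model is built from the observed pattern rather than a presupposed functional form, it can track arbitrarily complex downhole trends, including the nonmonotonic behavior seen at large spot sizes.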
 The applicability of the model to other zircons was then tested using analyses of the Temora-2 zircon as a secondary standard. Despite the long (2 min) duration of analyses, the six spot analyses for the session have a fractionation pattern (Figure 6c) very similar to 91500. When corrected using the smoothed cubic spline calculated from the average 91500 pattern (Figure 6a), 206Pb/238U ratios of the Temora-2 analyses do not vary with hole depth (Figure 6d), indicating the validity of the fractionation correction employed. When combined to generate a concordia age, the six Temora-2 analyses yield a concordant result of 415.0 ± 2.7 Ma (Figure 6e), which is statistically indistinguishable from the accepted TIMS age for Temora-2 of 416.8 ± 0.3 Ma [Black et al., 2004].
 This experiment was repeated using a spot size of 42 μm, with a total of 8 Temora-2 analyses for the session, bracketed by 12 ablations of the 91500 “calibration” zircon (all analyses were again of 2 min duration). A similar downhole fractionation pattern was observed (illustrated in Figure 7b), although the overall change in elemental ratio was ∼30%, in comparison to a variation of ∼20% in the earlier experiment (Figure 6a). Once again, a smoothed cubic spline, calculated from the average downhole pattern of the 91500 zircon standard, was used to correct all analyses. After correction of downhole fractionation and instrumental drift, the 8 spot analyses of the Temora-2 standard yielded a concordia age of 412.9 ± 2.4 Ma, which is within 1% of the accepted age. We therefore conclude that even the more complex patterns of downhole fractionation remain relatively constant throughout an analytical session (and at a given spot size), and that this behavior can be modeled and then applied to unknowns in the same session providing a robust correction for downhole effects.
6. Depth Profiling as an Example Application of the Method
 One immediate benefit resulting from accurate correction of downhole elemental fractionation is the potential to resolve downhole age variation in complex zircons. To investigate this possibility we simulated the effects of age zoning within a natural zircon grain by bonding together polished wafers of the Plesovice zircon standard (∼337 Ma [Slama et al., 2008]) and the 91500 zircon standard (∼1063 Ma [Wiedenbeck et al., 1995]). We then created a depth profile of this zoned sample by ablating through the ∼10 μm thick Plesovice layer into the underlying 91500 grain (Figure 7a). By employing a spot ablation of 2 min duration the laser “drilled” to a total depth of ∼40 μm, sufficient to sample both of the zircon standards. This experiment can be considered a worst case scenario because the Plesovice zircon has a U concentration more than 5 times greater than the 91500 zircon (465 ppm, compared to 81 ppm), so any contamination of the 91500 portion of the signal by Plesovice (due to memory effects or ablation of the pit walls) would be amplified by the greater U concentration of the latter.
 The depth profiling test was conducted within the same analytical session as the second batch of Temora-2 standards described above (see Figure 6f), using a 42 μm laser spot, and a duration of 2 min per ablation (equating to pit depths of ∼40 μm). The smoothed cubic spline fit to the average of 12 analyses of the 91500 zircon (Figure 7b) described above was used to correct all analyses for downhole fractionation. Figure 7c illustrates the 206Pb/238U ratios in the depth profile after correction for downhole fractionation and instrumental drift, together with relevant baseline-subtracted beam intensities. The first 30 s of the analysis samples only the Plesovice wafer, and yields a concordia age of 334.2 ± 8.7 Ma, statistically indistinguishable from the accepted age. The following ∼20 s of the analysis represent ablation through the wafer boundary/epoxy, and coincides with a noticeable increase in 208Pb due to unavoidable common Pb contamination in the boundary layer. By ∼70 s the elemental ratio reaches a plateau, and a concordia age of the following 40 s yields an apparent age of 1052.4 ± 9.1 Ma. We attribute the offset between this apparent age and the accepted 206Pb/238U age for 91500 of 1065 Ma to minor sampling of the Plesovice wafer at the pit walls (note that the rapid wash out of the Helex cell means that memory effects are unlikely to have affected the result). Based on the published U concentrations and 206Pb/238U ratios of each zircon, we calculate this offset to represent a ∼0.3% contamination of the 91500 result by Plesovice. Given that this is well within the normal reported uncertainties of U-Pb zircon dating of 1 to 4% [Jackson et al., 2004; Klotzli et al., 2009; Kosler and Sylvester, 2003], such contamination is unlikely to significantly perturb depth-profiling results for natural samples.
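The ∼0.3% contamination estimate can be checked with a simple two-component mixing calculation, using the U concentrations and ages quoted above and the Jaffey et al. [1971] decay constant; the function names are ours.

```python
import math

L238 = 1.55125e-10  # 238U decay constant, a^-1 [Jaffey et al., 1971]

def pb_u_ratio(age_ma):
    """206Pb/238U for a given age, from the standard age equation."""
    return math.exp(L238 * age_ma * 1e6) - 1.0

def mixed_apparent_age(f_contaminant, u_contaminant, age_contaminant,
                       u_host, age_host):
    """Apparent 206Pb/238U age of a host zircon whose ablated mass
    includes a mass fraction `f_contaminant` of a second zircon;
    signals scale with U concentration times mass fraction."""
    w_c = f_contaminant * u_contaminant
    w_h = (1.0 - f_contaminant) * u_host
    ratio = (w_c * pb_u_ratio(age_contaminant)
             + w_h * pb_u_ratio(age_host)) / (w_c + w_h)
    return math.log(1.0 + ratio) / L238 / 1e6

# ~0.3% Plesovice (465 ppm U, ~337 Ma) mixed into 91500 (81 ppm U, ~1065 Ma)
age = mixed_apparent_age(0.003, 465.0, 337.0, 81.0, 1065.0)
# yields an apparent age of roughly 1053 Ma, near the observed 1052.4 Ma
```

The calculation illustrates why this is a worst case: the contaminant's fivefold higher U concentration weights the mixture toward Plesovice far more than its 0.3% mass fraction alone would suggest.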
 The success of this depth profiling example clearly demonstrates the benefit of employing versatile downhole fractionation models, which produced accurate ages, despite long ablation times and an unusually complex fractionation pattern.
7. Quantifying the Uncertainty of Complex Fractionation Models
 Although the methods that we have described for the modeling and correction of downhole elemental fractionation do not readily lend themselves to a strict arithmetic propagation of uncertainties, we suggest an alternative and robust methodology, employing analyses of the primary standard within the session, to estimate the propagated uncertainty of individual analyses. This approach has the advantage of being independent of the downhole fractionation model employed, and inherently propagates most sources of analytical uncertainty for the session. However, because it assumes that the primary standard behaves identically to unknowns, it cannot be used to determine whether differences in zircon matrix, age, or U content affect the ages produced. As such, secondary zircon standards are still required to assess the overall accuracy and reproducibility of analyses.
 Our method for the estimation of analytical uncertainties is similar to the approach conventionally used to assess external reproducibility using secondary standards, with two significant differences:
 1. In lieu of a secondary standard, we remove each primary standard ablation in turn from the “pool” of primary standard analyses and recalculate its corrected ratios independently of the primary standard pool. By removing this analysis from the standard data used to normalize results, it is corrected for downhole elemental fractionation and instrumental drift in a manner identical to the treatment of “unknowns” (Figures 8a–8f). After sequentially treating each standard analysis as an unknown in this way, a pool of “pseudosecondary standards” can be generated, and these can be used to assess the analytical uncertainty.
 2. Instead of estimating a global uncertainty for the entire method, which is then assigned uniformly to all analyses, we generate an “excess uncertainty” for each analytical session that is intended to account for all unquantified sources of analytical uncertainty, and then combine this with the internal precision derived for each spot analysis. To estimate the magnitude of this excess analytical uncertainty, we calculate the degree of scatter in the pool of “pseudosecondary standards” (Figures 8g and 8h). If the internal uncertainties of the pseudosecondary standards are insufficient to account for the scatter between analyses, the group will have an MSWD (mean square of weighted deviates) greater than 1, indicating that an additional source of uncertainty exists in the population. This excess error (predominantly associated with downhole fractionation correction and drift correction) can be estimated by calculating the additional uncertainty for each analysis required to produce an MSWD of 1 for the pseudosecondary standards (Figures 8g and 8h). By combining this excess uncertainty in quadrature with the internal error of individual spot analyses, a total error for each analysis is generated. This approach takes into account differences in internal error between samples, and thus best reflects the actual uncertainty of individual spot analyses.
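Finding the excess uncertainty that brings the MSWD of the pseudosecondary pool to 1 amounts to a one-dimensional root search. The sketch below (illustrative only, not Iolite's exact implementation) uses bisection, with invented ratios whose scatter exceeds their internal errors.

```python
import math

def mswd(values, sigmas, excess=0.0):
    """MSWD of a weighted mean after adding 'excess' (1 SE) in quadrature."""
    w = [1.0 / (s**2 + excess**2) for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return sum(wi * (v - mean)**2 for wi, v in zip(w, values)) / (len(values) - 1)

def excess_uncertainty(values, sigmas, tol=1e-10):
    """Bisect for the per-analysis excess error giving an MSWD of 1."""
    if mswd(values, sigmas) <= 1.0:
        return 0.0  # internal errors already explain the scatter
    lo, hi = 0.0, 10.0 * max(sigmas)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mswd(values, sigmas, mid) > 1.0:
            lo = mid  # still overdispersed: need more excess error
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Pseudosecondary ratios whose internal 1 SE errors are too small
# to explain the observed scatter (numbers invented for illustration)
ratios = [0.0667, 0.0671, 0.0665, 0.0674, 0.0669, 0.0662]
errors = [0.0002] * 6
s_excess = excess_uncertainty(ratios, errors)
totals = [math.hypot(e, s_excess) for e in errors]  # quadrature per spot
```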
 The uncertainties generated in this manner will reflect the limitations of downhole fractionation correction, as any variation in fractionation between spot ablations will be reflected in the scatter of corrected ratios. Likewise, the use of an inappropriate model for changes in the ratio with hole depth will contribute additional scatter to the “pseudosecondary standard” pool, and will result in a larger calculated excess uncertainty. In a similar manner, any error in the modeling of instrumental drift will be reflected by a larger scatter in corrected ratios of the “pseudosecondary standards,” again increasing the excess uncertainty required to produce an MSWD of 1.
 Although this method has a significant processing burden, the capabilities of current computers are more than sufficient for the automatic calculation of uncertainties in this fashion, and errors for all corrected elemental and isotopic ratios in Iolite's U-(Th)-Pb dating module are treated in this way. The uncertainties of all corrected ratios reported here were calculated using this method, including the Temora-2 concordia ages of Figures 6e and 6f.
 Despite the capacity of this method to produce accurate estimates of analytical uncertainty, however, we note the following caveats:
 1. Any differences in U concentration and/or age between the primary standard and sample unknowns may alter the relative impact of uncertainties in the subtraction of background noise (i.e., signal-to-noise ratio), and counting statistics.
 2. Analyses of the reference standard should be evenly spaced throughout the run to best reflect the effects of instrumental drift correction on unknowns.
 3. To reliably quantify the “excess uncertainty” for a session, a suitable number of analyses of the primary standard is required. In Iolite's U-Pb package, if too few standard analyses are available, the potential for underestimating the excess uncertainty is addressed by gradually inflating the calculated value, from no increase for 15 or more standards to a factor of two for 6 standards. If fewer than 6 standards are used, the software will still allow the user to produce results, but a conservatively large excess uncertainty will be applied to each corrected ratio.
 4. The user is of course always encouraged to employ a number of secondary standards during analytical routines, as this is the only way to assess the external reproducibility of results. The method described here is simply intended to estimate realistic uncertainties for individual spot analyses in a robust, reproducible, and objective way.
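For illustration only, the inflation described in point 3 might be sketched as below; the linear ramp between 15 and 6 standards, and the fallback factor for fewer than 6, are our assumptions about the form of the “gradual” increase, not Iolite's exact function.

```python
def excess_penalty_multiplier(n_standards):
    """Scale factor applied to the calculated excess uncertainty.

    Assumed linear ramp: 1.0 at >= 15 standards, 2.0 at 6 standards
    (the ramp shape is illustrative; only the endpoints are stated
    in the text). Below 6 standards a conservatively large, purely
    hypothetical factor is returned.
    """
    if n_standards >= 15:
        return 1.0
    if n_standards >= 6:
        return 1.0 + (15 - n_standards) / 9.0
    return 3.0  # placeholder; the actual factor is implementation-defined
```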
7.1. A Note on Internal Versus Systematic Uncertainties
 In considering uncertainties and their propagation it is important to distinguish between “random” and “systematic” sources of uncertainty. A systematic uncertainty is one that generates a bias in a data set; an obvious example in U-Th-Pb geochronology is the uncertainty in the known age of a reference standard. Such systematic uncertainties must be treated differently when generating a weighted average from a group of individual analyses (e.g., a group of unknown zircons from a single igneous rock sample). If the systematic component is propagated into each individual uncertainty prior to the generation of the weighted average, the result will have an unrealistically small estimated uncertainty. This is because the systematic uncertainty has been treated as though it were random, and will have been reduced in the weighted averaging process (by the square root of the number of analyses in the weighted average). Instead, such systematic errors must be kept separate from the weighted average calculation, then propagated into the uncertainty afterward. Differences in the U concentration or age of zircon standards and unknowns will also contribute a systematic uncertainty to analyses.
 The “excess uncertainty” generated using the pseudosecondary standard approach described above is unable to detect systematic uncertainties, which would bias the group of analyses without introducing any additional scatter to the population. Because the pseudosecondary standard approach only considers data scatter, and not accuracy, such biases will not affect the MSWD, and thus will not be incorporated into excess uncertainty calculations. As such, any excess uncertainty generated using the method must be random, and can therefore be propagated with other uncertainties (e.g., internal precision) prior to any weighted average calculation.
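The shrinking of a prematurely propagated systematic term can be demonstrated with a few lines of arithmetic; the ages and uncertainties below are arbitrary example values.

```python
import math

ages = [417.0, 419.5, 418.2, 416.8, 420.1]  # Ma, arbitrary example values
sig = [2.5] * 5       # random (internal) 2 SE per analysis (Ma)
sys_err = 1.5         # systematic term, e.g. standard-age uncertainty (Ma)

w = [1.0 / s**2 for s in sig]
wmean = sum(wi * a for wi, a in zip(w, ages)) / sum(w)
sig_wmean = math.sqrt(1.0 / sum(w))          # shrinks as 1/sqrt(N)

# Wrong: fold the systematic term into each analysis before averaging;
# it then shrinks by sqrt(N) as though it were random
w_bad = [1.0 / (s**2 + sys_err**2) for s in sig]
sig_wrong = math.sqrt(1.0 / sum(w_bad))

# Right: average using random errors only, then add the systematic
# term in quadrature afterward
sig_right = math.hypot(sig_wmean, sys_err)
```

With these numbers the correct uncertainty (sig_right) is substantially larger than the prematurely propagated one (sig_wrong), because the systematic component is not reduced by averaging.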
7.2. Correlation in Uncertainties
 In addition to the magnitude of uncertainties, the accurate estimation of “error correlation” is also of importance. Although equations exist for the calculation of error correlations based on the uncertainties of bulk analyses, which are often used to great effect in single zircon TIMS dating [see, e.g., Schmitz and Schoene, 2007], the availability of large numbers of individual integrations (“scans”) in laser ablation methods offers the opportunity to employ calculations based upon the raw data populations themselves. To calculate error correlations from the data, all individual time slices for the chosen interval of the sample are combined (for example, if 40 s of data were chosen for a spot analysis, with a data acquisition rate of 8 cycles per second, this would represent a population of 320 individual data points). When the two ratios of interest (e.g., corrected 206Pb/238U versus corrected 207Pb/235U) are plotted on an X-Y diagram, the degree of correlation in the data is identical to the error correlation between the ratios. When calculated in this way, error correlations between individual analyses of a single sample can vary significantly (e.g., error correlations of spot analyses in Figure 6e vary from 0.13 to 0.31, and from 0.09 to 0.29 in Figure 6f), and error correlations themselves vary dramatically with changes in U concentration and age.
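Calculating the error correlation directly from the time-slice population amounts to a Pearson correlation of the two corrected ratios; the snippet below demonstrates this on synthetic cycles with a shared noise source (all numbers invented for illustration).

```python
import math
import random

random.seed(1)

# Simulated per-cycle ratios for one 40 s spot at 8 cycles/s (320 cycles).
# A shared noise source (e.g., on the 206Pb signal) produces correlated
# scatter between 206Pb/238U and 207Pb/235U.
n = 320
shared = [random.gauss(0, 0.02) for _ in range(n)]
r68 = [0.0670 * (1 + s + random.gauss(0, 0.03)) for s in shared]
r75 = [0.5100 * (1 + s + random.gauss(0, 0.03)) for s in shared]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx)**2 for a in x))
    sy = math.sqrt(sum((b - my)**2 for b in y))
    return cov / (sx * sy)

rho = pearson(r68, r75)  # error correlation for the concordia ellipse
```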
 U-(Th)-Pb zircon geochronology by laser ablation ICPMS is both rapid and relatively inexpensive, and has already become an extremely popular and widespread method. At present, the single largest constraint on the accuracy and precision of the technique is the correction of downhole elemental fractionation. Patterns of fractionation with hole depth can vary dramatically both between laboratories, and with changes in operating conditions, and there is no a priori reason for them to be linear. In fact, nonlinear patterns of varying complexity have been observed in multiple laboratories.
 We suggest that as a general approach, users should attempt to develop an appropriate model of downhole fractionation based upon observations of their own data during each analytical session, instead of attempting to fit the data to a preconceived fractionation model. Employing this strategy, we have demonstrated that models of nonlinear fractionation, such as an exponential curve, or smoothed cubic splines, can be used to efficiently correct for the effects of complex downhole fractionation. These models are capable of producing high-quality ages, accurate to within 1% of accepted values, and can be used for demanding applications, such as depth profiling, without compromising data quality.
 Careful attention should be given to the propagation of analytical uncertainties, particularly in relation to downhole fractionation and instrumental drift correction. We provide a method that allows the estimation of analytical uncertainties, using only a single reference standard, in a manner that best reflects the actual uncertainties of individual spot analyses.
 The methodology employed here, including uncertainty propagation and error correlation, is incorporated into the U-(Th)-Pb dating module of the Iolite software package. Further information about Iolite, a freeware program designed for the reduction of time-resolved data, is available from http://iolite.earthsci.unimelb.edu.au/.
Appendix A: A Comparison of Linear and Exponential Models of Downhole Fractionation
 In Appendix A, we attempt to address in greater detail the potential age effects of using different models of downhole elemental fractionation on zircon data generated by LA-ICPMS.
 In theory, if identical sampling conditions are used for matrix-matched standards and unknowns, the use of a standard bracketing approach should always produce accurate ages, inherently corrected for all sources of bias, including downhole fractionation. There are several reasons why this is not the case in practice, but the most significant is that the user is often unable to sample exactly the same period of data for each analysis. Thus, any variations in corrected elemental ratios with ablation time (as a proxy for hole depth) have an impact on the final calculated age. Common reasons for wishing to use only a portion of the entire analysis include (1) contamination of the grain surface by common Pb; (2) insufficient grain thickness, meaning that the laser “drills” through the back of the zircon grain and begins sampling epoxy/other minerals in a thin section; and (3) penetrating through one zone of a grain into a region of different age.
 Here we compare linear and exponential fractionation models on a group of 10 Temora-2 grains, analyzed as a secondary standard during a normal analytical session. The downhole fractionation was modeled using 13 analyses of the 91500 zircon, grouped in pairs or triplets throughout the session which also included unknowns in addition to the Temora secondary standard. The normal data reduction methods of Iolite's U-Th-Pb DRS (described in Appendix B) were used, but in order to assess variability between analyses the propagation of excess uncertainties was avoided. As such, all uncertainties quoted are likely to be approximately half of their propagated values.
 When fitted to the average of the 91500 analyses, there are clear differences between the linear and exponential models of fractionation (Figures A1 and A2). This is well illustrated by the residuals to the fit, which are a proxy for the effect of the fractionation model on corrected ratios. The linear model produces strong biases in the residuals that vary with ablation time (i.e., hole depth), undercorrecting the early and late portions of each ablation, and overcorrecting the period between ∼10 and 40 s (an overcorrection will result in higher corrected ratios and older apparent ages, and vice versa). The bias in the correction also results in a nonnormal distribution in the scatter of data points within each analysis, as illustrated by the histogram in Figure A1, which is visibly skewed. In contrast, the residuals of the exponential fit (Figure A2) do not vary with ablation time and exhibit no visible bias, resulting in a normal “bell curve” distribution of the corrected data (histogram plot in Figure A2). The standard error of the exponential model fit (2.9) is also significantly lower than that of the linear fit (4.7), again suggesting that the exponential curve is a better model in this case.
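The linear-versus-exponential comparison can be reproduced on synthetic data; the curve shape, noise level, and simple grid-search fit below are illustrative choices, not the fitting method used by Iolite.

```python
import math
import random

random.seed(2)

# Synthetic downhole 206Pb/238U standard curve: exponential approach to a
# plateau with ablation time, plus Gaussian noise (numbers illustrative)
t = [i * 0.5 for i in range(120)]  # 60 s of data at 2 cycles/s
truth = [0.067 * (1.10 - 0.10 * math.exp(-0.05 * ti)) for ti in t]
obs = [v + random.gauss(0, 0.0004) for v in truth]

def lin_fit(x, y):
    """Ordinary least squares line; returns a callable model."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx)**2 for xi in x)
    a = my - b * mx
    return lambda xi: a + b * xi

def rms(model, x, y):
    return math.sqrt(sum((model(xi) - yi)**2 for xi, yi in zip(x, y)) / len(x))

linear = lin_fit(t, obs)

# Exponential y = a + b*exp(-c*t): grid-search c, since y is linear
# in exp(-c*t) once c is fixed
best = None
for c in (k * 0.005 for k in range(1, 40)):
    e = [math.exp(-c * ti) for ti in t]
    f = lin_fit(e, obs)
    model = lambda xi, c=c, f=f: f(math.exp(-c * xi))
    err = rms(model, t, obs)
    if best is None or err < best[0]:
        best = (err, model)

rms_lin, rms_exp = rms(linear, t, obs), best[0]
# The exponential model leaves smaller, unbiased residuals, mirroring
# the standard-error contrast described for Figures A1 and A2
```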
 To test the effects of each of these models on the calculated ages of samples, the 10 Temora-2 analyses were reduced using each method. For each spot analysis, three separate sections of the data were processed: (1) the entire spot analysis, (2) the analysis minus the first 10 s (to simulate surface contamination), and (3) the analysis minus the last 10 s (to simulate drilling through the entire grain, or into a region of different age).
 The results of entire spot analyses corrected using each model (Tables A1a and A1b) are on average different in age (418.1 Ma for exponential versus 419.0 Ma for linear), but the disparity is slight, and although the bias may be real (e.g., due to a skewed data distribution), it may also be due to slight differences in drift correction between the models. In contrast, the effect of excluding portions of each analysis is distinctly different for the two models, and cannot be related to drift correction. The average age of analyses reduced using the exponential model is not significantly affected by excluding portions of each analysis, suggesting that no bias exists in the model. However, ages generated from different portions of each analysis using the linear model exhibit significant differences, with the average of analyses where the first 10 s of data were excluded having an average age 1.8 Ma (0.43%) older than the average of entire analyses. Similarly, the ages of analyses where the last 10 s were excluded are on average 3.3 Ma (0.79%) older than the average of entire analyses. These results are in agreement with observations based on Figure A1, and illustrate that at least in this case, a linear model has the potential to introduce significant bias in calculated ages. Individual internal uncertainties in calculated ratios are also marginally (5 to 10%) larger for analyses reduced using the linear model, but this effect is less critical than the apparent impact on the accuracy of results.
Table A1a. Results of Entire Spot Analyses Corrected Using the Exponential Model

Final 207/235 ± 2 SE    Final 206/238 ± 2 SE    Concordia Age ± 2 SE (Ma)

Entire analyses
0.513591 ± 0.009398     0.066663 ± 0.000471     416.7 ± 2.7
0.519286 ± 0.008620     0.065604 ± 0.000480     411.8 ± 2.8
0.507642 ± 0.008727     0.068065 ± 0.000509     423.4 ± 3.0
0.504325 ± 0.008282     0.066641 ± 0.000561     415.6 ± 3.1
0.513888 ± 0.006671     0.067134 ± 0.000407     419.3 ± 2.3
0.513290 ± 0.008886     0.066550 ± 0.000550     416.4 ± 3.1
0.512468 ± 0.006811     0.067232 ± 0.000510     419.6 ± 2.7
0.507560 ± 0.004692     0.067265 ± 0.000399     418.8 ± 2.2
0.506765 ± 0.006547     0.066916 ± 0.000432     417.3 ± 2.5
0.510582 ± 0.006967     0.067696 ± 0.000512     421.4 ± 2.8
Weighted average age: 418.1 ± 2.2 Ma

Missing first 10 s
0.511669 ± 0.010734     0.066542 ± 0.000511     415.8 ± 3.0
0.514807 ± 0.009330     0.065609 ± 0.000513     411.4 ± 3.0
0.511851 ± 0.007646     0.067068 ± 0.000412     418.7 ± 2.3
0.514009 ± 0.010054     0.066705 ± 0.000579     417.1 ± 3.3
0.509900 ± 0.007867     0.067160 ± 0.000512     418.9 ± 2.7
0.507842 ± 0.009703     0.068311 ± 0.000550     424.8 ± 3.2
0.503275 ± 0.009325     0.066785 ± 0.000589     416.2 ± 3.3
0.507076 ± 0.007624     0.067724 ± 0.000538     421.0 ± 3.0
0.506491 ± 0.006971     0.066719 ± 0.000448     416.3 ± 2.5
0.506636 ± 0.004970     0.067047 ± 0.000402     417.7 ± 2.2
Weighted average age: 417.7 ± 2.4 Ma

Missing last 10 s
0.502767 ± 0.008693     0.066052 ± 0.000617     412.6 ± 3.4
0.507414 ± 0.009209     0.067583 ± 0.000542     420.8 ± 3.1
0.511756 ± 0.007248     0.067225 ± 0.000533     419.5 ± 2.9
0.511299 ± 0.009609     0.066357 ± 0.000608     415.2 ± 3.4
0.514664 ± 0.006741     0.067346 ± 0.000444     420.5 ± 2.5
0.521583 ± 0.009074     0.066222 ± 0.000535     415.4 ± 3.1
0.512666 ± 0.010084     0.066559 ± 0.000519     416.1 ± 3.0
0.514156 ± 0.007521     0.067859 ± 0.000574     422.7 ± 3.2
0.506302 ± 0.007125     0.066878 ± 0.000478     417.1 ± 2.7
0.510783 ± 0.004860     0.067281 ± 0.000452     419.5 ± 2.5
Weighted average age: 418.2 ± 2.1 Ma

Note: Concordia ages were calculated using the “Concordia” function of Isoplot. Associated uncertainties are extrapolated from the 206/238 uncertainty and are typically ∼10% larger than those calculated by Isoplot. Final 207/235 and final 206/238 ratios have been corrected for downhole fractionation using the relevant model, then corrected for instrumental drift using a smoothed cubic spline interpolated through the 91500 analyses. Weighted averages were calculated using Isoplot's “weighted average” routine, using the individual concordia ages and uncertainties displayed. Normal average values, calculated without uncertainties, are also provided for comparison. Rho is the correlation in the uncertainties of the 206/238 and 207/235 ratios, calculated for each analysis individually from scatter in the corrected ratios.
Table A1b. Results of Entire Spot Analyses Corrected Using the Linear Model

Final 207/235 ± 2 SE    Final 206/238 ± 2 SE    Concordia Age ± 2 SE (Ma)

Entire analyses
0.512038 ± 0.009414     0.066490 ± 0.000489     415.6 ± 2.8
0.519726 ± 0.008737     0.065955 ± 0.000538     414.0 ± 3.1
0.507400 ± 0.008748     0.068156 ± 0.000524     423.8 ± 3.0
0.508554 ± 0.008438     0.066742 ± 0.000561     416.7 ± 3.1
0.515752 ± 0.006680     0.067332 ± 0.000452     420.5 ± 2.5
0.511395 ± 0.009001     0.066611 ± 0.000569     416.4 ± 3.2
0.511368 ± 0.006974     0.067273 ± 0.000537     419.6 ± 2.9
0.510863 ± 0.005051     0.067523 ± 0.000434     420.6 ± 2.5
0.508843 ± 0.006743     0.067294 ± 0.000451     419.4 ± 2.6
0.511534 ± 0.007204     0.067740 ± 0.000560     421.7 ± 3.1
Weighted average age: 419.0 ± 2.1 Ma

Missing first 10 s
0.513802 ± 0.010797     0.066812 ± 0.000531     417.4 ± 3.1
0.516595 ± 0.009481     0.066239 ± 0.000575     415.0 ± 3.3
0.518087 ± 0.007719     0.067780 ± 0.000466     423.0 ± 2.6
0.514820 ± 0.010205     0.067044 ± 0.000594     418.9 ± 3.4
0.515292 ± 0.008191     0.067705 ± 0.000550     422.2 ± 3.0
0.512003 ± 0.009824     0.068763 ± 0.000551     427.6 ± 3.2
0.510237 ± 0.009340     0.067171 ± 0.000569     419.0 ± 3.2
0.511093 ± 0.007987     0.068218 ± 0.000590     423.9 ± 3.3
0.508725 ± 0.007172     0.067227 ± 0.000480     419.1 ± 2.7
0.510646 ± 0.005415     0.067617 ± 0.000443     421.0 ± 2.5
Weighted average age: 420.8 ± 2.5 Ma

Missing last 10 s
0.509008 ± 0.008852     0.066758 ± 0.000644     416.9 ± 3.5
0.510015 ± 0.009321     0.068175 ± 0.000584     423.9 ± 3.3
0.519463 ± 0.007400     0.068042 ± 0.000567     424.5 ± 3.1
0.515763 ± 0.009771     0.066944 ± 0.000635     418.6 ± 3.6
0.515861 ± 0.006800     0.068123 ± 0.000470     424.3 ± 2.6
0.524054 ± 0.009255     0.067042 ± 0.000568     420.0 ± 3.2
0.513554 ± 0.010226     0.066979 ± 0.000543     418.3 ± 3.1
0.520149 ± 0.007609     0.068552 ± 0.000607     426.8 ± 3.3
0.514107 ± 0.007352     0.067794 ± 0.000491     422.5 ± 2.8
0.518296 ± 0.005187     0.068141 ± 0.000480     424.6 ± 2.7
Weighted average age: 422.3 ± 2.2 Ma

Note: Concordia ages were calculated using the “Concordia” function of Isoplot. Associated uncertainties are extrapolated from the 206/238 uncertainty and are typically ∼10% larger than those calculated by Isoplot. Final 207/235 and final 206/238 ratios have been corrected for downhole fractionation using the relevant model, then corrected for instrumental drift using a smoothed cubic spline interpolated through the 91500 analyses. Weighted averages were calculated using Isoplot's “weighted average” routine, using the individual concordia ages and uncertainties displayed. Normal average values, calculated without uncertainties, are also provided for comparison. Rho is the correlation in the uncertainties of the 206/238 and 207/235 ratios, calculated for each analysis individually from scatter in the corrected ratios.
 It is important to stress that while the exponential model proved the most appropriate for this data set generated in our laboratory, other laser ablation systems and/or analytical sessions may exhibit differing downhole behavior. Iolite provides the ability to choose the most appropriate downhole fractionation model for any given analytical session, and indeed to quickly compare the results of imposing different models onto the same data set.
 In addition to choosing a downhole fractionation model that accurately fits the average of standard analyses, it is important that the fractionation pattern does not vary within a session. Figure A3 provides a view of uncorrected 206Pb/238U ratios for each analysis of the 91500 zircon throughout the period of this test (∼2 h), and demonstrates that the fractionation pattern did not change detectably with time. Figure A4 contains 206Pb/238U ratios corrected using the exponential curve fit, plotted in a manner similar to Figure A3. The corrected ratios of each analysis do not change with ablation time (x axis), and there is no change between analyses throughout the session, indicating that there has been no drift in downhole fractionation, and that the exponential model has appropriately corrected all analyses.
Appendix B: A Description of Iolite's U-Th-Pb Data Reduction Scheme
Appendix B has been written to provide the user with some insight into how Iolite's “U_Pb_Geochronology” data reduction scheme (DRS) functions. For tutorials covering the use of both Iolite in general and the U_Pb_Geochronology DRS, we refer the reader to the Iolite website (http://iolite.earthsci.unimelb.edu.au/).
 Application of the DRS consists of the following steps (in order): (1) selection of baselines; (2) calculation of baseline-subtracted beam intensities, raw elemental and isotopic ratios, and indicative raw ages; (3) selection of reference standard analyses; (4) interactive modeling of downhole elemental fractionation for each elemental ratio (e.g., 206Pb/238U); (5) calculation of downhole fractionation corrected ratios; (6) estimation of instrumental drift using reference standard analyses; (7) calculation of final drift-corrected elemental and isotopic ratios; (8) selection of optimal regions of the sample analyses for export; and (9) export of final values (this step includes propagation of uncertainties and calculation of error correlations). (Data are calculated progressively, with the results of each step easily available for viewing by the user at all stages.)
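The ordering of these stages can be summarized as a minimal pipeline; the stage names and trivial pass-through functions below are illustrative stand-ins for Iolite's interactive steps, showing only that each stage consumes the previous stage's output and that every intermediate remains available.

```python
# Illustrative DRS pipeline: each stage is a stub that records its name
# and passes the data through, mirroring the ordering described above.
log = []

def stage(name):
    def run(data):
        log.append(name)  # in Iolite, each intermediate is viewable
        return data
    return run

pipeline = [
    stage("baseline subtraction"),
    stage("raw ratios and indicative ages"),
    stage("downhole fractionation modeling and correction"),
    stage("drift estimation and correction"),
    stage("export with uncertainty propagation and error correlations"),
]

data = [1.0, 2.0, 3.0]  # placeholder time series
for step in pipeline:
    data = step(data)
```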
 Following this overview, Appendix B is subdivided into two more sections. In section B2, the above processes are described in more detail under the divisions of “general Iolite features,” which covers those parts of data reduction common to any DRS in Iolite, and the additional data reduction operations specific to the U_Pb_Geochronology DRS. Section B3 provides a broad description of the programming code that should allow the interested reader to better understand the functions used by the DRS.
 To summarize briefly, Iolite was developed specifically to treat laser ablation ICP-MS data, although it is also proving to be extremely useful for manipulation of conventional solution ICP-MS and TIMS data. The software is unique in being both powerful (a 5 h U-Th-Pb session could easily involve >160,000 data points for each mass measured) and extremely flexible. Although the Iolite platform itself is encrypted, all data reduction modules are open source, and wherever possible programming of the underlying platform has been conducted with flexibility in mind.
 Iolite allows the user to view all data relative to time, and this has several important implications. First, it is possible to work with many separate files, and with data that contains any mixture of baselines, standards and unknowns, without requiring fixed timing or sample spacing. Second, all stages of data reduction can be viewed against time, meaning that the user is free to use raw beam intensities, uncorrected ratios, final corrected ratios, or any mixture of these, when selecting sample/standard intervals. Third, any interpolation, for example of instrumental drift with time, takes into account the relative timing of analyses, and enables the use of complex splines to model these changes.
 It is also worth noting that Iolite has been designed to work with an entirely generic data format that will accommodate data from any instrument (or even two instruments simultaneously).
B2. Details of the Data Reduction Scheme
 The U_Pb_Geochronology DRS incorporates a number of discrete stages of data processing. Some of these stages are common to any DRS in Iolite, but others are unique to the U_Pb_Geochronology DRS. A description of those stages common to any Iolite DRS is provided first, followed by an in-depth treatment of the features unique to U-Th-Pb data reduction, namely, the modeling and correction of downhole elemental fractionation, the propagation of uncertainties, and the calculation of error correlations.
B2.1. General Iolite Features
 For a more thorough explanation of Iolite's features we refer the reader to the Iolite website (http://iolite.earthsci.unimelb.edu.au/), and to the Iolite manual that can be found there. The information provided below is current at the time of publication, but may become outdated as Iolite evolves.
B2.1.1. Baseline Subtraction
 As in any mass spectrometric analysis, it is important to ascertain the level of background noise in any signal. This baseline level can then be subtracted from the total signal to calculate a baseline-subtracted intensity that in this case, represents only the material sampled by the laser. It is common practice in laser ablation studies to analyze a “gas blank,” in which all gas flows and instrumental parameters are identical to those encountered during sampling, but the laser beam is either turned off, or is physically blocked by a “shutter.”
 In Iolite, raw signal intensities can be viewed individually or together, and scaled in such a way that the user can easily assess the background intensities of many beams simultaneously. The time scale can be adjusted to view large periods (e.g., to view drift in background levels throughout an entire session), down to very small details (e.g., to examine an individual period of baseline acquisition), or anywhere in between. Using this information, the user then selects periods of time containing suitable baseline data, which will then be used by Iolite in calculating baseline subtracted beam intensities. If at any stage the user wishes to add, remove, or modify baselines, this can be done quickly and easily, with any changes reflected in the recalculated data.
 In order to subtract baseline intensities from sample and standard analyses, values need to be interpolated between the periods of baseline data. In other data reduction methods, such interpolations often involve a simple linear interpolation of the baseline data immediately adjacent to each sample/standard analysis, or the averaging of a large group of baseline analyses. However, because Iolite works with the time of acquisition of each data point, it is capable of more powerful interpolative methods, such as smoothed cubic splines. These splines can be adjusted to fit more or less strictly through each baseline analysis, and can give varying degrees of weighting to individual baseline analyses, based on the calculated uncertainties of each time period. This means that a few seconds of baseline data will be given less weight than several minutes of high-quality baseline. The resulting interpolated spline can be viewed for each mass analyzed, and any changes to the period of an individual baseline, or to spline parameters (e.g., the degree of smoothing) are instantly displayed. In addition to smoothing splines, a number of different forms of interpolation are available to the user, including more conventional methods.
 An individual baseline spline, spanning the duration of all data being reduced, is calculated for every mass measured. Baseline-subtracted beam intensities are calculated by subtracting the baseline spline from the raw beam intensity at each data point.
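As a concrete sketch of this interpolation, the snippet below uses SciPy's UnivariateSpline as a stand-in for Iolite's smoothing splines; the baseline times, levels, uncertainties, and smoothing factor are invented for illustration.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Illustrative baseline periods: (midpoint time in s, mean cps, 1 SE of mean)
t_bl = np.array([10.0, 310.0, 620.0, 905.0, 1210.0, 1500.0])
y_bl = np.array([3.1, 2.8, 3.6, 3.0, 2.5, 2.9])
se = np.array([0.4, 0.3, 0.9, 0.3, 0.35, 0.3])  # short baselines: larger SE

# Weight each baseline by 1/SE so noisier (shorter) periods pull less;
# the smoothing factor 's' controls how strictly the spline passes
# through each point
spline = UnivariateSpline(t_bl, y_bl, w=1.0 / se, k=3, s=len(t_bl))

# Subtract the interpolated baseline from raw intensities at each time stamp
t_data = np.linspace(0.0, 1500.0, 8)
raw = np.full_like(t_data, 250.0)  # placeholder raw signal (cps)
net = raw - spline(t_data)         # baseline-subtracted intensities
```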
 Iolite has a number of options available to use in the calculation of statistics for baseline data, including various forms of outlier rejection and a choice between using “mean” or “median” based approaches. By default, baseline statistics are calculated as the mean of the data, with no outlier rejection. Although outlier rejection is often useful when processing laser ablation data, it can cause significant problems when applied to low-level baselines. This is because, for typical dwell times, a single count will translate into tens of counts per second (for example, a 30 ms dwell time will extrapolate a single count to 33 counts per second). For low background levels (e.g., several counts per second), this effect results in beam intensities consisting predominantly of 0 counts per second, punctuated rarely by counts in multiples of the minimum detection level (33, 67 or 100 counts per second in this case). Statistical calculations using outlier rejection often reject the rare higher values, producing an average lower than the true background level, and an extremely low calculated uncertainty. In contrast, a simple average of all points will (with increasing amounts of data) approach the true value, and should provide a more realistic estimate of uncertainty.
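The pitfall described above is easy to reproduce numerically; the 30 ms dwell time and few-cps background level below match the worked example in the text, while the simulation itself is ours.

```python
import random

random.seed(0)

DWELL = 0.03    # s; a single count registers as 1/0.03 ~ 33.3 cps
TRUE_CPS = 3.0  # true background level
N = 20000

# Simulate a low-level baseline: mostly zeros, occasional single counts
counts = [1 if random.random() < TRUE_CPS * DWELL else 0 for _ in range(N)]
cps = [c / DWELL for c in counts]

mean_all = sum(cps) / N  # simple mean: approaches the true level

# One pass of 2 SD outlier rejection, as sometimes applied to signals
m = mean_all
sd = (sum((v - m)**2 for v in cps) / (N - 1)) ** 0.5
kept = [v for v in cps if abs(v - m) <= 2 * sd]
mean_rej = sum(kept) / len(kept)
# The rare nonzero readings are all rejected as "outliers", so the
# rejected mean collapses toward zero, underestimating the background
```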
B2.1.2. Modeling of Long-Term Drift
 In Iolite, long-term drift in isotopic or elemental ratios is typically corrected by normalization to reference standards of known composition (i.e., sample standard bracketing). In the U-Th-Pb DRS, a reference zircon of known composition is ablated under the same conditions (e.g., spot size, laser repetition rate, preablation) as unknowns, and normalization is performed on all ratios after the correction of downhole fractionation.
 There is no requirement regarding the number of reference standard analyses or their spacing, but in the U-Th-Pb DRS “penalty” factors are propagated into the uncertainty of all analyses if fewer than 6 standard analyses are available for constructing the downhole calibration curves. As with baseline subtraction, Iolite can use a variety of splines to interpolate normalization factors from reference standard analyses through unknowns. These will be most effective if reference standards are interspersed regularly throughout the analytical session.
 By default, a mean with 2 standard deviation outlier rejection is used when calculating statistics for all analyses of reference standards and unknowns.
B2.1.3. Export of Data
 After data processing is complete, the user can export a table of results from Iolite. The results table is in tab-separated format, and can be directly imported into Microsoft Excel. By default, the U-Th-Pb DRS exports data in a format suitable for input into Isoplot using either “Normal” or “Inverse” U-Pb isochrons, although the content of the export table is fully customizable. In addition to a simple table of statistics, it is also possible to export a time series of calculated data, either at the original time spacing of the data, or after down-sampling.
B2.2. Modeling and Correction of Downhole Elemental Fractionation
 An integral feature of the U-Th-Pb DRS is its treatment of downhole elemental fractionation. This feature has been specifically developed for this data reduction scheme, and employs separate windows that allow interactive modeling of the pattern of downhole fractionation for each elemental ratio.
 The modeling and correction of downhole elemental fractionation occurs after baseline subtraction of beam intensities, but before the correction of instrumental drift. Because the data are not yet drift-corrected, some variability is often present between reference standard analyses. However, this variability appears as a parallel offset of the data, and has no effect on the pattern of downhole fractionation in each analysis, or on the modeling and correction of this effect. It is worth noting here that in treating downhole fractionation the number of seconds since the laser began firing for a particular spot analysis (referred to here as “ablation time”) is used as a proxy for hole depth, which cannot be measured directly. The beginning of ablation is detected using the rate of change in an “index” beam intensity (the 238U beam by default); this approach is highly reproducible, and does not require any extra information from the mass spectrometer or laser software.
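The detection of ablation start from the rate of change of the index beam can be sketched as follows. This is a simplified Python stand-in (Iolite's actual detection is internal to the platform); the threshold value and the -1 sentinel for pre-ablation points are illustrative assumptions:

```python
def beam_seconds(index_cps, dt, rate_threshold):
    """For each point, seconds elapsed since the most recent 'shutter open'
    event, detected where the index beam's rate of change exceeds a
    threshold (cps per second). -1 marks points before any ablation."""
    out = []
    last_open = None
    for i, y in enumerate(index_cps):
        rate = (y - index_cps[i - 1]) / dt if i else 0.0
        if rate > rate_threshold:
            last_open = i  # sharp rise: laser began firing here
        out.append((i - last_open) * dt if last_open is not None else -1.0)
    return out

# A sharp rise in the 238U beam marks the start of ablation.
signal = [0, 0, 0, 5000, 5100, 5050, 4990]
bs = beam_seconds(signal, 0.5, 1000)
```

The resulting "beam seconds" wave then serves as the depth proxy in all downhole calculations.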
 To model downhole fractionation within an analytical session, the data from individual analyses of the reference standard are combined to generate an average pattern of changes in the elemental ratio with ablation time. This averaging generates a more representative pattern, and reduces the effects of signal noise. There is no requirement for the selected segments of reference standard analyses to be the same length, or to be continuous. This means that the user is free to avoid sections of individual analyses that are clearly inaccurate (as may result from, for example, surface Pb contamination, cracks in the grain, or the laser drilling all the way through a thin area of the grain). For each time slice of the average pattern an uncertainty is also calculated; this uncertainty is then incorporated when calculating each model of downhole fractionation (with the exception of the running median).
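The averaging of multiple standard analyses into a single pattern might look like the following Python sketch (illustrative only; wave names and the time-slice representation are assumptions). Note the fallback where a slice contains a single point, mirroring the DRS behavior of substituting the average for an uncalculable uncertainty:

```python
import statistics

def average_downhole_pattern(analyses, dt):
    """Combine several analyses into one average downhole pattern.
    Each analysis is a list of (beam_seconds, ratio) pairs; analyses may
    span different time ranges and need not be continuous. For each time
    slice, return (slice_start, mean, standard_error), or None where no
    data exist."""
    t_max = max(t for a in analyses for t, _ in a)
    n_slices = int(t_max / dt) + 1
    pattern = []
    for i in range(n_slices):
        lo, hi = i * dt, (i + 1) * dt
        vals = [r for a in analyses for t, r in a if lo <= t < hi]
        if not vals:
            pattern.append((lo, None, None))
        elif len(vals) == 1:
            # Fallback: uncertainty cannot be calculated from one point,
            # so an uncertainty equal to the average is used.
            pattern.append((lo, vals[0], vals[0]))
        else:
            se = statistics.stdev(vals) / len(vals) ** 0.5
            pattern.append((lo, statistics.mean(vals), se))
    return pattern
```

Because each slice draws on whichever analyses happen to cover it, selected segments of different lengths are accommodated naturally.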
 In order to remain as flexible as possible, a number of different model types are provided: (1) linear, (2) exponential (illustrated in Figure B1), (3) double exponential, (4) combined linear and exponential, (5) smoothed cubic spline, and (6) running median. The DRS has been designed so that the user can freely switch between these different models, and quickly see the effect of each on downhole corrected data.
 Of the above models, the first four employ a simple mathematical equation to calculate drift in the relevant elemental ratio (y axis) relative to ablation time (x axis). In each case, the equation employed is provided at the top of the curve fit window (see Figure A1 for an example). The last two models employ Igor Pro's “smooth” and “interpolate2” functions, using a degree of smoothing controlled by the user.
 The running median calculates the median for a given data point by calculating the median of all points within a window “n” seconds wide, centered on that data point (where “n” is the smoothing parameter specified by the user).
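A minimal Python sketch of this windowed median (not the Igor Pro implementation; naive O(n²) for clarity):

```python
import statistics

def running_median(t, y, n_seconds):
    """Running median: for each point, the median of all points whose
    time lies within a window n_seconds wide centred on that point."""
    half = n_seconds / 2.0
    return [
        statistics.median(yj for tj, yj in zip(t, y) if abs(tj - ti) <= half)
        for ti in t
    ]

# A single spike at t=1 is suppressed by a 2 s window.
smoothed = running_median([0, 1, 2, 3, 4], [1, 100, 3, 4, 5], 2.0)
```

Because the window is defined in seconds rather than points, the result is insensitive to the time spacing of the underlying data.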
 The smoothed cubic spline uses the method of Reinsch to fit a smoothed cubic spline through all data points, taking into account the individual uncertainties of each. The smoothing factor specified by the user determines how tightly the curve is fitted through each data point: no smoothing results in a cubic spline fitted through all data points, and as the factor is increased the spline becomes smoother, eventually approaching a straight line.
 When the curve fit window first appears, the default settings (altered in the “Edit settings” window) are used to calculate a model of downhole fractionation for the ratio, based on the average of all selected analyses of the reference standard. The user is then free to alter the model using the controls on the right-hand side of the curve fit window (Figure B1).
 These controls operate in two separate ways, either by masking the beginning and/or end of the average, or by manually altering the parameters of the fit (note that the latter option is unavailable when using “Running median” or “Smoothed cubic spline”).
 Masking of the beginning or end of the average pattern (the gray regions in Figure B1 indicate masked areas) is particularly useful, as there will often be small sections at the start and end of the graph that are averages of only one or two waves, and are thus highly susceptible to the signal noise of those analyses. By masking these portions so that they are not included in model calculation, a significantly better fit to the data is often achieved.
 By clicking the “Manual” button, the user is able to deactivate automatic calculation and manually edit all of the model parameters (Note that this is not possible for the Running median or Smoothing spline methods). In this way, the user has complete control over the form of the fit, while still having the benefit of the measures of success of the model (the residuals plot and the “Quality of fit” section of the window).
 The “Quality of fit” window provides additional information to the user on the efficacy of the chosen model of fractionation. The standard error provides an indication of the scatter of the average values after correction using the model, and should decrease as the quality of the model increases. The “Bias of fit” provides an indication of whether the model biases the data toward a higher or lower ratio. If an appropriate model is chosen, this value should be near zero. Finally, all points of the average data are plotted as a histogram. Given that normal statistical methods are employed, it is important that the corrected data have a normal distribution, and the purpose of the histogram is to assess whether this assumption is valid. If data are badly skewed, bimodal, or otherwise differ from a normal “bell curve” distribution, it would suggest that the fractionation model is inappropriate. In addition, it would indicate that the user should consider using a more flexible method of statistics for their data (in some cases it may be sufficient to use Iolite's median-based statistical methods). Note that an ideal bell curve can be toggled on or off behind the histogram to assist in comparisons.
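The two summary measures can be computed as in the following Python sketch (illustrative; the exact definitions used by the DRS are not spelled out in the text, so "standard error of the mean of the residuals" and "mean residual" are assumptions for the scatter and bias measures respectively):

```python
import statistics

def fit_quality(average, model):
    """Residuals of the average downhole pattern against a fitted model,
    with a scatter measure (standard error) that should shrink as the
    model improves, and a bias measure that should be near zero for an
    appropriate model."""
    residuals = [a - m for a, m in zip(average, model)]
    bias = statistics.mean(residuals)
    std_err = statistics.stdev(residuals) / len(residuals) ** 0.5
    return residuals, bias, std_err
```

A histogram of `residuals` then serves the normality check described above.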
B2.3. Propagation of Uncertainties
 In addition to the internal precision of individual analyses a number of other sources of error exist, and the DRS attempts to account for these during export. Although a number of studies have attempted to quantify individual sources of uncertainty and propagate these appropriately, the use of complex downhole fractionation models in Iolite's U-Th-Pb DRS makes this approach difficult to implement. In addition, this approach requires that all sources of uncertainty are explicitly identified and it remains clear from many laser studies that this is not always the case.
 Iolite's U-Th-Pb DRS employs an approach that requires no a priori knowledge of the source of uncertainties, instead using analyses of the reference standard as “pseudosecondary standards” by removing them individually from the data set, and reprocessing the data. Although this approach is extremely computationally intensive, it can be used in combination with complex downhole fractionation models, and has the added benefit of inherently including unidentified sources of error. However, because the method estimates uncertainties based on the reference standard, it is important to understand that the relative contribution of different sources (such as baseline noise) to unknowns and the reference standard may differ with factors such as U concentration or age. It is therefore useful, as always, to use a reference standard of a similar composition and age to unknowns, and to use secondary standards to assess accuracy and precision. The procedure employed in propagating uncertainties is described in detail in the manuscript, and is not repeated here. For more detail on the underlying programming code, please refer to section B3.
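The leave-one-out idea can be sketched in Python as follows. This is a heavily simplified stand-in (the real DRS rebuilds the drift spline and reprocesses full time series; here a simple mean of the remaining standards stands in for the drift model, and all names are hypothetical):

```python
import statistics

def excess_uncertainty(standard_ratios, internal_se):
    """'Pseudosecondary standard' sketch: correct each reference standard
    analysis using a drift model built from the other analyses only, then
    take the scatter of the results beyond what internal precision
    explains as the excess uncertainty (removed in quadrature)."""
    corrected = []
    for i, r in enumerate(standard_ratios):
        others = standard_ratios[:i] + standard_ratios[i + 1:]
        drift = statistics.mean(others)   # stand-in for the drift spline
        corrected.append(r / drift)       # normalised to remaining standards
    total_se = statistics.stdev(corrected) / len(corrected) ** 0.5
    excess_sq = total_se ** 2 - internal_se ** 2
    return excess_sq ** 0.5 if excess_sq > 0 else 0.0
```

Because each standard is corrected without using itself, its scatter reflects all sources of uncertainty, identified or not.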
B2.4. Calculation of Error Correlations
 The calculation of error correlations in Iolite's U-Th-Pb DRS is simple, but is also arguably the most accurate method, as it employs all available information (in contrast, for example, to some arithmetic approximations). To calculate the correlation in the uncertainty of two ratios (e.g., 206Pb/238U versus 207Pb/235U), a built-in Igor Pro function called “StatsCorrelation” is employed. The function takes all data points within the relevant time period (e.g., an analysis of a sample unknown) and tests whether any correlation exists in the variation of the ratios. If the data points are visualized as an X-Y diagram of the two ratios of interest, the function is testing whether the distribution of the data is random (this would appear as a “shotgun” plot, with the data points scattered evenly in all directions), or whether the data cloud forms an ellipse (indicating correlation in the scatter of the two ratios). The ellipse will become more elongate as the degree of correlation between the ratios increases, and will slope diagonally upward if the correlation is positive (as is normally the case), or diagonally downward if the correlation is negative.
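The quantity involved is the linear (Pearson) correlation coefficient of the two ratio time series, sketched here in plain Python (an equivalent of, not a copy of, the Igor Pro function):

```python
def error_correlation(xs, ys):
    """Pearson correlation coefficient of two ratio time series within a
    single integration. Near 0 for a 'shotgun' cloud; approaches +1 or -1
    as the error ellipse elongates."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Perfectly correlated scatter in the two ratios gives +1;
# perfectly anticorrelated scatter gives -1.
```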
B3. Summary of the Programming Code
 The following description is intended to assist the avid reader in understanding the U-Th-Pb DRS. The entire programming code is easily accessible from within Iolite: simply select “Edit the active DRS” from the “Iolite” menu after selecting the U-Pb method in the main control window. Igor Pro colors the programming text to enhance readability; right-clicking on any function also allows the user to view associated help files (for built-in functions), or to skip to a user function. Note, however, that the underlying programming code of the Iolite platform is encrypted; as such, some of the functions used in the DRS are not available for viewing by the user.
 The description below refers to the current copy of the “U_Pb_Geochronology” DRS programming code. Please bear in mind that it is likely to evolve with time, so the attached notes should be used only as a guide. To make use of the page number references, simply copy and paste the programming code from Iolite into a blank Microsoft Word document.
B3.1. Page 1: Top
 The first half of this page contains features that are common to all Iolite DRSs, including definition and default settings of parameters to be editable within the “Edit settings” window, and a version number for the DRS.
B3.2. Page 1: “Function RunActiveDRS()”
 This is the beginning of the DRS. References are made to the global variables and strings defined in the header of the file. The curve fitting names provided in the user interface are shortened for convenience.
B3.2.1. Page 1: “//Do we have a baseline_1 spline…”
 The DRS checks if a baseline spline exists for the index channel (this can be set in the “Edit settings” window). If no baseline exists the execution of the DRS will halt, with a message to the user stating that no baseline spline was found.
 If a baseline is present, the DRS will reference global strings containing lists of inputs (this list is populated during import of data), intermediates (empty) and outputs (empty).
 The Index time wave is created. All intermediate and output waves will be interpolated onto this time wave, so that all data points can be compared sensibly (inputs may have different time spacing of data points, so cannot necessarily be compared directly). The number of points in this index wave is stored for use in the creation of all other waves from this point on in the DRS.
B3.2.2. Page 1: “//THIS DRS IS A SPECIAL CASE”
 In order to quickly reprocess data after curve fitting and during export, the DRS is divided into sections using an “If” statement: this allows early portions of the calculation (everything up to and including curve fitting) to be skipped if not required (i.e., if none of the parameters affecting them have changed). The global variable “OptionalPartialCrunch” is used as a flag to turn this feature on or off.
 If a full data calculation is selected (this is the default case), baseline subtraction and interpolation of input waves onto the index time wave is achieved using the “$InterpOntoIndexTimeAndBLSub(IndexChannel)” function. This is first performed separately on the index wave (selected in the “Edit settings” window), then on each of the available input waves using a loop that steps through each item in the list of inputs. In each case the generated intermediate wave is also added to the list of intermediate channels so that it can be viewed by the user.
 This is followed by the generation of a mask wave based on a threshold in CPS for the index channel. The mask is given a value of 1 where the index channel is above this threshold and NaN (not a number; the equivalent of an empty cell in a spreadsheet) if below. By multiplying waves by this mask, any time intervals where the index channel is below the set threshold are masked (this is particularly useful when viewing ratios that tend to become very noisy at low signal intensities).
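The mask wave mechanism translates directly into Python (illustrative names; list-based waves stand in for Igor waves):

```python
NAN = float("nan")

def make_mask(index_cps, threshold):
    """1 where the index channel exceeds the threshold, NaN elsewhere.
    Multiplying a ratio wave by this mask blanks low-signal intervals,
    since anything times NaN is NaN."""
    return [1.0 if v > threshold else NAN for v in index_cps]

def apply_mask(wave, mask):
    """Element-wise product of a data wave and a mask wave."""
    return [w * m for w, m in zip(wave, mask)]
```

Masked points propagate as NaN through later calculations, so they simply drop out of plots and statistics rather than contaminating them.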
 Before proceeding with any U-Th-Pb related calculations a check is performed to determine whether all required masses are present. This is achieved by referencing the required waves, then checking that the reference is valid. Because some mass spectrometers produce isotope formats (e.g., Pb206) and others produce a simple mass format (e.g., m206), an “If” statement is used to check whether channels are present in either format. In addition, a check on whether the optional 204Pb channel is present is conducted. If present, a flag is set that is used in the rest of the DRS to calculate 204Pb related ratios.
 The raw ratios are calculated, and multiplied by the mask wave described above. Note that 235U is calculated from the 238U beam using a ratio of 137.88. Simple age estimates are also generated based on each elemental ratio. Each of the intermediate waves is then added to the list of intermediates for viewing by the user. Finally, raw 204Pb related ratios are generated if 204Pb is present.
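The 235U inference and the simple age estimates follow standard relations, sketched here in Python (the function names are illustrative; the decay constants are the conventional values, in a⁻¹):

```python
import math

LAMBDA_238 = 1.55125e-10   # 238U decay constant, per year
LAMBDA_235 = 9.8485e-10    # 235U decay constant, per year
U238_U235 = 137.88         # natural 238U/235U ratio used by the DRS

def pb207_u235_ratio(pb207_cps, u238_cps):
    """The 235U beam is not measured directly; it is inferred from the
    238U beam via the 137.88 natural ratio."""
    return pb207_cps / (u238_cps / U238_U235)

def pb206_u238_age(ratio):
    """Simple age estimate (years): t = ln(R + 1) / lambda_238."""
    return math.log(ratio + 1.0) / LAMBDA_238

def pb207_u235_age(ratio):
    """Simple age estimate (years): t = ln(R + 1) / lambda_235."""
    return math.log(ratio + 1.0) / LAMBDA_235
```

The same logarithmic form appears later in the code walkthrough (e.g., the "DCAge207_235 = Ln((DC207…" line), applied there to downhole-corrected ratios.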
 A built-in function is used to determine each opening of the laser shutter, based on the rate of change of the index channel intensity. The function generates a wave that lists, for each point, the number of seconds since the last shutter open event. This wave is used in downhole related calculations.
B3.2.7. Page 2: “DRSAbortIfNotWave(ioliteDFpath…”
 It is checked whether any time periods of the reference standard have been selected. If not, the DRS does not proceed further.
B3.2.8. Page 3: “DownHoleCurveFit(“Raw_208_232”…”
 This uses a separate DRS function called “DownHoleCurveFit” to commence the process of interactive downhole fractionation modeling by the user. The function is described in detail below, and works with a single raw ratio, seen here in green.
 Because the DRS can skip segments of the data calculation, any waves generated beyond this point need to be added to the lists of intermediates and/or outputs prior to the following “else” statement. If relevant, 204Pb related ratios are also added.
B3.2.10. Page 3: “//THIS IS A BIG ELSE…”
 The “else” statement references waves so that they can be used later in the function. The remainder of this function after “endif” is executed whether a full or partial data crunch is used.
B3.2.11. Page 3: “OptionalPartialCrunch = 0”
 The optional partial crunch is set back to the default (a full calculation).
 Each of the waves that will contain downhole corrected values is created.
B3.2.14. Page 3: “strswitch(ShortCurveFitType)”
 This string switch references the waves generated in the “DownHoleCurveFit” function for each ratio in turn (their names are dependent on the type of fractionation model used). In each case, the downhole correction is also carried out, using an equation identical to that displayed in the fit window, or in the case of the running median and smoothed cubic spline, using the spline itself. All calculations are relative to “beam seconds,” which is a measure of the number of seconds since the laser shutter last opened, and is a proxy for hole depth.
B3.2.15. Page 4: “DCAge207_235 = Ln((DC207…”
 The calculated downhole fractionation corrected elemental ratios are used to generate downhole corrected age estimates.
 Waves for each of the Pb/Pb isotope ratios are generated. No downhole correction is conducted, so the waves are made equal to the raw ratios. If necessary, 204Pb related ratios are also calculated.
 Waves to hold drift-corrected ratios are created. After the creation of these waves the DRS function “DriftCorrectRatios()” (described below) is used to calculate drift corrected ratios based on integrations of the reference standard.
B3.3. Page 5: “Function DriftCorrectRatio(…”
 This function generates ratios corrected for instrumental drift. It is called by the DRS function, but is also called during export by the error propagation function. Note that the function is capable of working with a subset of the data, a feature that is used during error propagation.
B3.3.1. Page 5: “//the next 5 lines reference…”
 The global settings at the top of the file are referenced here so that they can be used in the function.
B3.3.2. Page 5: “Wave DC206_238= $ioliteDFpath(…”
 Waves generated previously by the DRS function are referenced so that they can be used in the function. If 204Pb waves are present these are also referenced.
B3.3.3. Page 5: “string ListOfSplinesForRecalc…”
 Before proceeding, it is necessary to make sure that all splines required by this function exist and are up to date. To do this a string containing a list of the required splines is created, and the “RecalculateIntegrations” function is passed this string to conduct the update.
 The splines, which may have anything from 2 to 200,000 points depending on the spline type used, are interpolated onto the index time wave. After doing this, each data point of the spline will directly correspond to the same data points in all other intermediate or output waves. The Iolite function “InterpSplineOntoIndexTime” is used to do this.
 Final corrected ratios are generated by normalizing the downhole corrected ratio to the reference standard using the reference standard spline for each ratio. This is followed by the calculation of indicative corrected ages for each elemental ratio. Finally, if 204Pb is present, drift corrected 204-related isotope ratios are generated in the same way.
B3.4. Page 6: “Function DownHoleCurveFit(”
 This function begins interactive modeling of downhole fractionation. It has the capability of staggering each window, using an optional window number. It is passed a specific raw ratio, in this case an elemental ratio such as 206Pb/238U.
B3.4.1. Page 6: “//the next 5 lines reference all of the…”
 As with other functions, these lines reference the global variables and strings at the header of the file, so that they can be used in the function.
B3.4.2. Page 6: “string ShortCurveFitType…”
 As in the main DRS function, the long descriptive names of fractionation models for the user interface are shortened.
B3.4.3. Page 6: “String WindowName…”
 A check is performed to see if the window already exists. If it does, it is deleted.
B3.4.4. Page 6: “Wave/z Index_Time…”
 The index time wave and index channel wave are referenced, and a check is conducted to see that they both exist. If either is missing an error is reported and DRS execution ends. Similarly, the reference standard matrix and beam seconds waves are referenced. A variable containing the number of points in the index time wave is created; this is used in the creation of any new waves.
B3.4.5. Page 6: “variable thisinteg=0…”
 Variables and strings required for a “do – while” loop are defined. The loop cycles through each integration of the reference standard, creating waves for each integration containing the relevant ratio and the associated beam seconds value (time since shutter open). Each of these ratio waves is smoothed slightly to remove outliers, which are replaced with a 9-point running median value.
B3.4.6. Page 7: “//special case of first waves…”
 To create the interactive window, the first waves created by the above loop need to be specifically referenced.
 Parameters for the size and position of the window are defined. A conversion between the different display coordinates of Mac and Windows operating systems is also performed.
B3.4.8. Page 7: “Display/W=(WindowLeft…”
 The curve fit window is created, and waves are appended to it.
B3.4.9. Page 7: “variable LowestBeamSec…”
 A loop runs through each integration of the standard to determine the smallest and largest beam seconds values recorded; these limits are used when creating an average wave to ensure that it spans the range of all integrations. The loop also appends each wave in turn to the window graph.
 The total required points and time spacing of the average wave are determined.
B3.4.11. Page 7: “string AverageName = …”
 A number of strings are created for use in the loop below. The loop cycles through each beam seconds interval of the average wave, and for each point uses an inner loop to create an average of all of the individual integrations that are present. The loop is flexible, so it can accommodate the (very common) case where the user selects slightly different lengths of each analysis of the reference standard, or avoids spurious sections of an analysis. The built-in Igor Pro function “Wavestats” is used to calculate the mean and standard deviation of each time interval. If the uncertainty fails to calculate and returns NaN, an uncertainty equal to the average is used.
 By the time the loop is finished, an average wave spanning all individual integrations of the reference standard has been populated with the average of all individual analyses for each point in beam seconds. It is this wave that is used in curve fitting, calculation of residuals, etc.
B3.4.12. Page 8: “AppendToGraph/W=$WindowName…”
 The average wave is appended to the graph.
B3.4.13. Page 8: “string HistogramWaveName…”
 Waves to be used in evaluation of the quality of the fit are created, with names specific to the ratio being fitted.
B3.4.14. Page 8: “string AutoOrManual…”
 The curve fit has been made to allow manual or automatic modeling of fractionation. By default this option is set to automatic, but it can be changed to manual by the user.
 A test is done to see if this is the first time the curve fit has been conducted, and if so, a number of variables containing measures of the quality of the fit and other parameters are created. Parameters for the position of controls in the window are also defined.
B3.4.15. Page 8: “strswitch(ShortCurveFitType)…”
 Up to this point, everything in the function has been universal, but from now on the action of the function depends on whether manual or automatic curve fitting is selected, and on the type of curve fitting chosen.
 A strswitch function based on the fractionation model chosen is used.
 For each case most features are identical and consist of the following:
 1. If the fit has not been done before, global variables holding the parameters of the fit are created (e.g., “variable/g $ioliteDFpath(”CurrentDRS“,”LEVarA_“+ratio) ”).
 2. The global variables are referenced.
 3. A separate DRS function called “FitToAverageWave(ratio, ShortCurveFitType, AutoOrManual),” which is described below, is called to conduct the actual curve fit.
 4. A number of additions and changes are made to the position and state of controls in the window. These are specific to each fit type.
 Additional controls common to all model types are added to the window; these include the evaluation of the quality of the fit in the bottom right of the window, and the masking controls.
 Following this, the “MaskStartOrEnd(ratio, WindowName, ”StartMask,“ Start_MaskForFit)” DRS function (described below) is called; it draws gray boxes over the masked portions of the data and recalculates the curve fit if changes are made by the user.
 The remainder of the function appends the residuals wave to the bottom left of the graph, and makes a number of changes to the appearance of the window.
B3.5. Page 11: “Function FitToAverageWave(ratio…”
 This function automatically fits a model to the average wave (of all integrations of the reference standard) using the fractionation model selected by the user.
 The function begins by creating appropriate names and references for a number of items based on the name of the ratio being used.
B3.5.1. Page 12: “StartPoint = BinarySearch…”
 The range of the average wave to be used in doing the modeling is bracketed by the start and end points here. These are calculated using the masking parameters set by the user.
B3.5.2. Page 12: “strswitch(ShortCurveFitType)…”
 This strswitch determines how the fitting is to be performed, based on the fractionation model chosen by the user. In each case, global variables holding the coefficients of the model are referenced. These globals are used as initial guesses for the model if possible, as well as being put into the window controls to display the values to the user, and finally they are used to correct downhole fractionation during data crunching.
B3.5.3. Page 12: “if(Variable_a == 0 …”
 This “if” statement tests whether previous values have been set for the global coefficient variables. If they have not then the model is fitted without using initial guesses. If they have been set (either by previous automatic modeling, or by manual adjustment by the user) then they are used as initial guesses for the curve fit. If the user is manually adjusting the model then no curve fit is done, but the coefficient variables are updated for use in the below step, and in correction of downhole fractionation.
B3.5.4. Page 12: “duplicate/O Average…”
 In order to display the fitted curve on the graph a duplicate of the average wave is made. The values of this wave are calculated using the coefficient values determined by either automatic or manual fitting. After the fitted curve is calculated, the residuals to the fit are calculated for each point by subtracting the fitted model from the average wave.
 Note that for the running median and smoothed cubic spline options coefficients are not used to calculate the fitted model. Instead, the original average wave is duplicated and smoothed/splined, the resulting wave is then used directly in correcting the effects of downhole fractionation. In addition, the smoothed cubic spline has been given the added functionality of being extended to higher and lower beam seconds values, meaning that it can potentially model downhole fractionation beyond the extent of the range of available reference standard data. Because all functions such as linear or exponential curve fitting have a simple equation, they can also be extrapolated to lower or higher values of beam seconds.
 The residuals wave is duplicated, and NaNs (NaN = not a number) are removed from the wave. The resulting wave can then be used to calculate the standard error and bias of the residuals to the fit.
B3.5.6. Page 14: “string HistogramWaveName…”
 Waves (with names specific to the ratio being fitted) are created for use in creating a histogram of the residuals. This histogram can then be used to visually assess whether the data are normally distributed.
 This control toggles whether the user wishes to see an “ideal” bell curve for the residuals. This bell curve is plotted behind the data, and can be useful in determining whether the data are normally distributed. If the curve is selected the histogram wave is duplicated, and a Gaussian curve is fitted to the data.
 This function is related to the “setvariable” controls that allow the user to choose how much data to mask at the start and end of the average wave. An “if” statement determines whether the control is related to the start or end mask and acts appropriately.
 In either case, the appropriate visible area on the graph to be masked is calculated, and a gray box is drawn (after first deleting any existing box already drawn).
 This function is related to the auto and manual buttons that allow the user to choose between automatic or manual fitting of the curve to the average wave. The function activates or deactivates controls as appropriate for the selection and changes the color of the auto and manual buttons to reflect the change. Finally, the “FitToAverageWave” function is called using the new settings.
 This hook function detects when the window is deactivated and checks whether any changes to the fractionation model have been made. If changes have been made the data is recalculated using a “partial data crunch.”
B3.10. Page 17: “Function ResetFitWindows()”
 This function is not available via the normal interface. If typed into the command line it resets the fractionation modeling (for the chosen model type only). This can be useful in diagnosing problems, or in an experiment that has become unstable.
 This function is called during data export; it propagates errors for each of the output waves using the “pseudostandard bracketing” approach described in the manuscript. In short, the function generates a wave containing calculated excess uncertainties for each of the output waves; these are then combined in quadrature with the internal uncertainty of each spot analysis in the export function (called “ExportFromActiveDRS”).
B3.11.1. Page 17: “string currentdatafolder = …”
 As in other functions, these lines reference the global variables and strings in the header of the DRS file for use in this function.
B3.11.2. Page 17: “string BackupMatrixName…”
 The following lines reference then duplicate the reference standard matrix after creating names for the duplicates. These duplicated copies are used in the remainder of the function as working copies of the matrix that can be altered during calculations. Note that the matrices are killed if they already exist; this avoids memory effects from previous executions of this function.
 The number of integration periods (rows) in the reference standard matrix is determined. This is used later in loops, and in making waves that require one point per integration period.
 The list of output channels is also referenced, and the number of output channels is determined. This information is used in the below loop to cycle through each of the outputs in turn.
 Following this, a wave is formed to hold the calculated 1 standard error uncertainty for each output channel. It is this wave that is used in the export function to combine internal and excess uncertainties (in quadrature).
B3.11.4. Page 18: “if(NoOfStdIntegrations<6)”
 This “If” statement allows the function to add an additional factor to the calculated excess uncertainty if a small number of standard analyses are used. If fewer than six standard analyses are found the normal estimation method is not used, and a conservative excess uncertainty is applied to each output. These values are also printed to the history area.
 If fewer than 15 standard analyses are found, excess uncertainties are calculated normally but multiplied by a factor, beginning at 2 (for six standard analyses) and decreasing to 1 (i.e., no change) for 15 or more standards.
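The small-sample penalty described above can be sketched as follows. This is an illustrative Python rendering, not the DRS code itself: the function name `small_n_penalty` is hypothetical, and the exact shape of the decrease between 6 and 15 standards is assumed here to be linear.

```python
def small_n_penalty(n_standards):
    """Multiplier applied to the calculated excess uncertainty when only a
    small number of reference standard analyses is available.

    Assumed behavior (a sketch, not the DRS implementation): the factor is
    2.0 at n = 6 and ramps down linearly to 1.0 (no change) at n >= 15.
    """
    if n_standards < 6:
        # Below six standards the DRS skips normal estimation entirely and
        # applies a fixed conservative excess uncertainty instead.
        raise ValueError("fewer than 6 standards: use the fixed "
                         "conservative excess uncertainty path")
    if n_standards >= 15:
        return 1.0
    # Linear interpolation between (6, 2.0) and (15, 1.0) -- an assumption.
    return 2.0 - (n_standards - 6) / (15 - 6)
```

The important point is only the endpoints (2 at six standards, 1 at fifteen or more); the interpolation in between is a guess at the implementation.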
B3.11.5. Page 18: “else//otherwise at least 6 standard…”
 Within this portion of the if-else-endif statement the excess uncertainty is estimated. Temporary waves are first made that will hold calculated values. Next, the index time wave is referenced so that the original data wave can be accessed. This is followed by the definition of variables required in the following loop.
 The loop cycles through each integration period of the reference standard matrix (i.e., each row of the matrix), removing each row in turn, and recalculating drift correction for only that time period using the remaining standards. At this point, no statistics are calculated, but the corrected ratio waves are altered for each integration period of the reference standard using the above independent drift correction.
 Note that baseline subtraction of beams does not need to be recalculated, as this is independent of the standards. Importantly, downhole fractionation also does not need to be recalculated, as it has been defined by the user, and cannot be automatically changed after the removal of a standard. Any variability in the accuracy of the downhole correction should be reflected by the corrected ratio of the standard, so this is acceptable.
 Having now populated the data points of each integration period with independently drift-corrected values (i.e., drift calculated without using that particular standard integration period), the Iolite function “RecalculateIntegrations” can be used to populate the “CalcdErrorsMatrix” with the statistics of each. This matrix now contains, in effect, a population of pseudosecondary standards that can be used to assess the degree of scatter that can be expected in the unknowns. To put it differently, the variability of this matrix includes all contributions of uncertainty to the analyses, and can thus be used to assess the total variability of the method. Given that the internal precision of each analysis is known, this information can then be used to calculate the excess uncertainty required to explain the total scatter of the data.
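The leave-one-out procedure described above can be sketched in Python. This is a minimal illustration, not the DRS code: the function names are hypothetical, and drift is modelled here as simple linear interpolation of the standard ratio through time (Iolite supports several drift models; which one applies depends on user settings).

```python
def _interp(t, pts):
    """Linear interpolation of (time, ratio) points at time t,
    clamped to the end values outside the range."""
    pts = sorted(pts)
    if t <= pts[0][0]:
        return pts[0][1]
    if t >= pts[-1][0]:
        return pts[-1][1]
    for (t0, r0), (t1, r1) in zip(pts, pts[1:]):
        if t0 <= t <= t1:
            return r0 + (r1 - r0) * (t - t0) / (t1 - t0)

def leave_one_out_corrected(std_times, std_ratios):
    """For each reference standard integration, recompute the drift model
    from all *other* standards and apply it to the omitted integration,
    yielding a population of 'pseudosecondary standards'.

    Sketch only: drift is assumed linear between bracketing standards.
    Corrected values should scatter about 1 by the total (internal plus
    excess) uncertainty of the method.
    """
    corrected = []
    for i, (t, r) in enumerate(zip(std_times, std_ratios)):
        others = [(tt, rr) for j, (tt, rr) in
                  enumerate(zip(std_times, std_ratios)) if j != i]
        drift = _interp(t, others)  # drift factor from remaining standards
        corrected.append(r / drift)
    return corrected
```

For a perfectly linear drift, interior standards correct to exactly 1; real data scatter about 1, and that scatter is what the excess-uncertainty estimation uses.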
 Temporary waves are created that will hold intermediate values for the estimation of external uncertainties. Variables are made for use in the below loops.
 The outer loop steps through each output channel in turn, populates the temporary waves with appropriate results from the above loop, then initializes variables for the inner loop. The average value of all integration periods and the minimum internal error of the group are also calculated.
 The inner loop calculates the MSWD for the group of “pseudosecondary standards,” then iteratively adds an excess uncertainty to the internal error (in quadrature, using 1 standard error in each case) until the MSWD is within 0.2% of 1. The excess uncertainty is then stored (after breaking the loop), after being normalized to the average value. Note that it is necessary to convert the excess uncertainty to a relative number, as it may be applied to ratios of very different magnitudes.
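The inner loop's MSWD iteration can be sketched as follows. This is an assumed rendering of the logic, not the DRS code: the step size, the use of the unweighted mean, and the function name are all illustrative choices.

```python
def relative_excess_uncertainty(values, internal_1se, tol=0.002):
    """Iteratively grow an excess uncertainty, combined in quadrature with
    each pseudosecondary standard's internal 1 s.e., until the group MSWD
    is within `tol` (0.2%) of 1.

    Sketch only: the step size and the unweighted mean are assumptions.
    The result is returned relative to the group mean so it can be applied
    to ratios of very different magnitudes.
    """
    n = len(values)
    mean = sum(values) / n
    step = min(internal_1se) * 0.01  # assumed increment
    excess = 0.0
    while True:
        # MSWD with total variance = internal^2 + excess^2 (quadrature)
        mswd = sum((v - mean) ** 2 / (s ** 2 + excess ** 2)
                   for v, s in zip(values, internal_1se)) / (n - 1)
        if mswd <= 1 + tol:
            break
        excess += step
    return excess / mean  # relative excess uncertainty (1 s.e.)
```

If the scatter is already explained by the internal errors (MSWD at or below 1), no excess uncertainty is added.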
 After execution of the inner loop, a “penalty factor” is applied if fewer than 15 standard integration periods were used, and the resulting calculated excess error is printed to the history window.
 Finally, a reference to the wave holding the results of this function is passed back to the calling function (the export function of the DRS).
 This function is provided two columns of the output table on which to perform error correlations; it calculates an error correlation for each row and places the result in the specified location, with the specified name. The function is general enough that it can easily be called for other error-correlation calculations. In the U-Th-Pb DRS it is used to calculate error correlations for the ratios most commonly used by Isoplot.
 The function first checks whether the provided columns exist in the output table, then defines variables for use in the below loop. It also inserts a new column in the output table, and gives it the label provided.
 The loop cycles through each row of the output table in turn. First, a check is performed to make sure that the rows both contain values. The start and end times are then extracted from the output table. These times are then converted into points in the relevant output wave, and the range bracketed by these start and end points is duplicated for each of the ratios to be correlated. These duplicated portions of the ratio waves, representing all data points within the integration period, are then used to calculate the correlation between the two ratios using the Igor Pro function “StatsCorrelation.” Because the ratios are assumed to be homogeneous over the integration period, any scatter in the data points is due to noise in the analysis. Thus, any correlation in this noise between the ratios indicates a correlation in their uncertainties.
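The calculation performed for each integration period is a Pearson correlation of the two ratio time-series, the role played by Igor Pro's “StatsCorrelation.” A minimal Python equivalent (the function name is illustrative):

```python
import math

def error_correlation(ratio_a, ratio_b):
    """Pearson correlation coefficient between two ratio time-series over
    one integration period.

    Assuming the true ratios are constant over the integration, the
    point-to-point scatter is analytical noise, so any correlation between
    the two series measures the correlation of their uncertainties.
    """
    n = len(ratio_a)
    mean_a = sum(ratio_a) / n
    mean_b = sum(ratio_b) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(ratio_a, ratio_b))
    var_a = sum((a - mean_a) ** 2 for a in ratio_a)
    var_b = sum((b - mean_b) ** 2 for b in ratio_b)
    return cov / math.sqrt(var_a * var_b)
```

Ratios sharing a numerator or denominator isotope (e.g., 207Pb/235U and 206Pb/238U via common beam noise) will typically show strongly correlated noise, which is why Isoplot needs these values for concordia ellipses.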
B3.12.1. Page 19: “Function ExportFromActiveDRS…”
 This function is common to all DRS routines. It intercepts the export of data by Iolite, giving the DRS an opportunity to alter portions of the export data table before it is saved. In the U-Th-Pb DRS it makes two alterations to the data. First, it calls the function to calculate excess uncertainties (“$Propagate_UPb_Errors()”), then propagates these with internal errors for each integration period. It then calls “ErrorCorrelation(…” to insert error correlations into the output data table. It also generates inverted ratios for use in Isoplot.
B3.12.2. Page 20: “wave ExcessErrorsWave = …”
 The function for estimation of excess errors is called. Following this, a number of variables are defined for use in the below loops.
 The outer loop cycles through each output channel in turn, and combines the internal errors with the calculated excess external errors in quadrature.
 The inner loop cycles through each integration period (row) in turn, calculating the errors in quadrature. This loop is required so that formatted text with more than five significant figures can be used. Note that 1 and not 2 standard errors are used when combining uncertainties in quadrature. These are expanded back to 2 standard errors when placed in the output table.
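The quadrature combination for a single integration period can be sketched as follows; the function name is illustrative, and the excess term is the relative 1 s.e. value produced by the error-propagation function.

```python
import math

def combined_2se(internal_2se, relative_excess_1se, value):
    """Combine an integration's internal uncertainty with the calculated
    excess (external) uncertainty in quadrature.

    Both terms are reduced to 1 s.e. for the combination, then expanded
    back to 2 s.e. for the output table. The excess term is relative, so
    it is scaled by the integration's value before combining.
    """
    internal_1se = internal_2se / 2.0
    excess_1se = relative_excess_1se * value
    return 2.0 * math.sqrt(internal_1se ** 2 + excess_1se ** 2)
```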
B3.12.3. Page 20: “variable ColumnBeforeInsert…”
 A column is inserted to hold the inverted 206/238 ratio for use in the inverse format of Isoplot. Because Igor Pro's num2str function limits output to five significant figures, a loop is used to invert each ratio and convert its uncertainty, then place these values back into the output table using “sprintf.”
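The inversion itself is straightforward: to first order, inverting a ratio leaves its relative uncertainty unchanged, so the absolute uncertainty is rescaled accordingly. A sketch (the function name is illustrative):

```python
def invert_ratio(ratio, abs_2se):
    """Invert a ratio (e.g., 206Pb/238U to 238U/206Pb) for Isoplot's
    inverse (Tera-Wasserburg) format.

    To first order the *relative* uncertainty is preserved under
    inversion, so the absolute 2 s.e. is rescaled to the inverse value.
    """
    inverse = 1.0 / ratio
    relative_2se = abs_2se / ratio      # unchanged by inversion
    return inverse, relative_2se * inverse
```

In the DRS this arithmetic is done point by point and written back with “sprintf” to retain full precision in the output table.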
B3.12.4. Page 20: “duplicate/O $ioliteDFpath…”
 To calculate the error correlation using the inverse ratio an appropriate output wave needs to be made. This is done here.
 This is a function that can be used in any Iolite DRS. It allows the user to click the orange buttons in the top left of the Traces Window. These buttons set up waves to be viewed and their scaling as specified in this function. This function is for the baselines button, and has been configured for viewing the most commonly required baselines for U-Th-Pb analyses, using scaling appropriate to the Varian. The user is free to change these default settings if desired.
 This function is nearly identical to the above, but is designed for viewing raw ratios of the 91500 zircon standard. Its default settings are also completely editable by the user.
 We gratefully acknowledge Balz Kamber and Sebastian Meffre for the provision of U-Pb data from their laboratories at Laurentian University and CODES (University of Tasmania), respectively. We thank George Gehrels, Jan Kosler, Jeff Vervoort, and an anonymous reviewer for their feedback, which greatly improved the manuscript. Thanks also to Roger Powell for initial advice regarding error propagation.