Magnetic hysteresis data are centrally important in pure and applied rock magnetism, but to date, no objective quantitative methods have been developed for assessment of data quality and of the uncertainty in parameters calculated from imperfect data. We propose several initial steps toward such assessment, using loop symmetry as an important key. With a few notable exceptions (e.g., related to field cooling and exchange bias), magnetic hysteresis loops possess a high degree of inversion symmetry (M(H) = −M(−H)). This property enables us to treat the upper and lower half-loops as replicate measurements for quantification of random noise, drift, and offsets. This, in turn, makes it possible to evaluate the statistical significance of nonlinearity, either in the high-field region (due to nonsaturation of the ferromagnetic moment) or over the complete range of applied fields (due to nonnegligible contribution of ferromagnetic phases to the total magnetic signal). It also allows us to quantify the significance of fitting errors for model loops constructed from analytical basis functions. When a statistically significant high-field nonlinearity is found, magnetic parameters must be calculated by approach-to-saturation fitting, e.g., by a model of the form M(H) = Ms + χHFH + αHβ. This nonlinear high-field inverse modeling problem is strongly ill conditioned, resulting in large and strongly covariant uncertainties in the fitted parameters, which we characterize through bootstrap analyses. For a variety of materials, including ferrihydrite and mid-ocean ridge basalts, measured in applied fields up to about 1.5 T, we find that the calculated value of the exponent β is extremely sensitive to small differences in the data or in the method of processing and that the overall uncertainty exceeds the range of physically reasonable values. 
The “unknowability” of β is accompanied by relatively large uncertainties in the other parameters, which can be characterized, if not rigorously quantified, through the bootstrapped distribution of best fit models. Nevertheless, approach-to-saturation fitting yields much more accurate estimates of important parameters like Ms than those obtained by linear M(H) fitting and should be used when maximum available fields are insufficient to reach saturation.
Curiously, however, little attention has been paid to quantifying the signal/noise ratio in hysteresis measurements and to the uncertainties in the parameters and the magnetization curves derived from hysteresis loops. In comparison, related magnetic unmixing approaches based on remanent magnetization curves [e.g., Dunlop, 1972; Robertson and France, 1994; Heslop and Dillon, 2007] have been subjected to much more rigorous scrutiny [e.g., Egli, 2003, 2004a, 2004b]. Published hysteresis data are most commonly reported only in terms of summary parameters (saturation magnetization MS, saturation remanent magnetization MRS, coercivity HC and remanent coercivity HCR) and ratios thereof, typically in graphical form on “Day plots” [Day et al., 1977; Dunlop, 2002a, 2002b] or “squareness-coercivity plots” [Nagata, 1961; Tauxe et al., 2002; Wang and Van der Voo, 2004], with at most a few full loops shown as representative examples. This makes it difficult to assess data quality and to fully appreciate the significance of data points located in any particular region of the Day plot.
The laboratory database of the Institute for Rock Magnetism contains more than 35,000 hysteresis loops, measured on two Princeton MicroMag vibrating sample magnetometers (VSMs) by a large number of investigators, on a wide variety of materials, and over a broad range of temperatures. Signal/noise ratios vary enormously, as do the proportions of ferromagnetic, ferrimagnetic, paramagnetic, diamagnetic and antiferromagnetic contributions to the total signal (Figure 1). Recurring issues that arise in the processing and interpretation of these loops have led us to define some simple quantitative measures of data quality, which we describe in this paper.
Conventional loop processing (as in the MicroMag software, for example) involves several basic operations: (1) fitting the high-field data (the portion of the loop with H/Hmax above a specified threshold, where the ferromagnetic (in a broad sense) moment is assumed to be saturated) with a line, whose intercept is taken as MS and whose slope is taken to represent the (field-independent) susceptibility χHF of paramagnetic, diamagnetic and/or antiferromagnetic materials; (2) subtraction of the linear contribution χHFH from the measured M(H) to obtain a slope-corrected loop, assumed to represent the hysteresis behavior of ferromagnetic material in the specimen; and (3) finding the intercepts of the slope-corrected loop on the vertical and horizontal axes (MRS and HC, respectively). An optional additional step in the MicroMag software is the computation of, and correction for, vertical and/or horizontal displacements of the loop, based on inequality of the positive and negative axis intercepts, to correct for sensor offsets. More advanced processing techniques include: (4) decomposition of the loop into “reversible” (or “induced hysteretic”) and “irreversible” (or “remanent hysteretic”) components Mih(H) and Mrh(H) (see Figures 1 and 6 for examples) [e.g., von Dobeneck, 1996; Fabian and von Dobeneck, 1997]; (5) decomposition into analytical basis functions and isolation of “partial loops” of distinct coercivity classes [von Dobeneck, 1996]; (6) “unmixing” in terms of canonical empirical and/or theoretical functions [Carter-Stiglitz et al., 2001; Dunlop and Carter-Stiglitz, 2006]; and (7) nonlinear high-field M(H) fitting to characterize the approach to saturation of hard magnetic phases and obtain improved estimates of MS and other parameters [e.g., Fabian, 2006]. 
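As a concrete illustration, steps (1)–(3) can be sketched for a single descending branch. This is a minimal sketch, not the MicroMag implementation: the 70% field threshold, the tanh test signal, and all function names are our own illustrative choices.

```python
import math

def linfit(x, y):
    """Ordinary least-squares line y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def interp(x, xs, ys):
    """Two-point linear interpolation of ys(xs) at x."""
    pts = sorted(zip(xs, ys))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 if x1 == x0 else y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside data range")

def process_branch(H, M, frac=0.7):
    """Steps (1)-(3) applied to the descending (upper) branch of a loop."""
    Hmax = max(abs(h) for h in H)
    hf = [(h, m) for h, m in zip(H, M) if h >= frac * Hmax]
    Ms, chi_hf = linfit([h for h, _ in hf], [m for _, m in hf])  # (1) line fit
    Mf = [m - chi_hf * h for h, m in zip(H, M)]                  # (2) slope correction
    Mrs = interp(0.0, H, Mf)        # (3) M-axis intercept
    Hc = -interp(0.0, Mf, H)        # upper branch crosses M = 0 at H = -Hc
    return Ms, chi_hf, Mrs, Hc

# synthetic branch: "ferromagnetic" tanh term plus a paramagnetic slope
H = [1.0 - 0.01 * i for i in range(201)]        # +1 T down to -1 T
M = [math.tanh((h + 0.1) / 0.05) + 0.5 * h for h in H]
Ms, chi_hf, Mrs, Hc = process_branch(H, M)
```

For the synthetic branch the fit recovers the input values (Ms = 1, χHF = 0.5, HC = 0.1) to within interpolation error; on real data the recovered parameters inherit the measurement noise, which is the subject of the sections that follow.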
An even wider variety of parameter calculations opens up when the loop is supplemented with additional data such as the initial magnetization curve starting from MRS (the Msi curve of Fabian and von Dobeneck [1997], called the “ZFORC” by Yu and Tauxe [2005]); derived quantities include the MBF(H) curve and its field axis intercept HCR [Fabian and von Dobeneck, 1997], transient energy dissipation (“TED”) [Fabian, 2003], and the “transient hysteresis” [Yu and Tauxe, 2005].
All of these calculations and the parameters derived from them have accompanying uncertainties related to inevitable experimental imperfections (noise, drift, etc.) and subtle mathematical complexities (ill-conditioned inversion). Our primary goal in this paper is to take some important initial steps toward a longer-term goal of developing rigorous estimates of uncertainty in all of the properties determined by processing of hysteresis data. We begin with the essential step of quantifying the uncertainty in the individual magnetic moment measurements in the loop, and defining a simple, objective, quantitative measure of the signal/noise ratio. Although this can be done most directly by measuring multiple loops and calculating a mean and standard deviation for each point, we focus here instead on developing a method that can be applied to the measurements in a single loop, because (1) there exists an enormous amount of valuable previously measured hysteresis data for which replicate measurements are unavailable but which can be reanalyzed using suitable methods and (2) in certain circumstances it may be impractical to make replicate measurements. The quantified signal/noise ratio provides a criterion that can be used in deciding whether filtering of the loop data is likely to result in more reliable estimates of fundamental properties. A second significant step that we take here is to define some simple statistical tests that use the individual measurement uncertainties to evaluate certain models by testing for lack of fit to the data set. In particular, we develop tests for linearity of M(H), in selected high-field intervals or for the whole loop, and for “unmixed” models using analytical basis functions. We apply these tests in addressing the important and difficult problem of robust quantification of parameters describing the nonlinear high-field M(H) behavior of unsaturated materials.
The organization of this paper essentially follows a general processing protocol for hysteresis loops, which inevitably contain both random noise and systematic errors related to instrument drift and sensor offsets. We begin with the concept of loop symmetry, which is essential to our development of self-contained single-loop noise quantification. We develop a robust method for quantifying loop shifts (offsets of the center of symmetry from the origin), and we discuss some critical limitations on correcting for them. Noise quantification follows directly from these considerations. Before we can address the question of high-field M(H) fitting, it is necessary for us to discuss prerequisites including drift correction and filtering of loop data. Finally we evaluate different methods of analyzing the approach to saturation [e.g., Fabian, 2006], and we show that robust estimation of related model parameters is far more difficult than is generally appreciated.
2. Inversion Symmetry
Ideal hysteresis loops possess perfect inversion symmetry: for each point (Hi, Mi) on the loop, a symmetrically equivalent point (−Hi, −Mi), obtained by inversion through the origin, also lies exactly on the loop. This statement, based on experience and intuitively appealing, can be rigorously justified in terms of the invariance of Maxwell's equations under time reversal [e.g., Jackson, 1975], as long as full saturation is achieved during the measurement. It is obvious that, with rare exceptions, real hysteresis loops possess this sort of symmetry, to such an extent that deviations from symmetry can in most cases be attributed entirely to experimental error (noise, drift, and/or sensor offsets).
There are, however, important exceptions. In general, inhomogeneous interacting materials can be far from saturation in the typical peak fields (1–2 T) used for VSM loop measurements, and are thus not guaranteed to yield loops that are symmetric about the origin, or even to have strong symmetry at all. For example, in exchange-coupled multiphase systems, loops may exhibit approximate symmetry about a point with a large positive or negative field coordinate, and furthermore they commonly exhibit a significant lack of symmetry [e.g., McEnroe et al., 2007; Harrison et al., 2007; Fabian et al., 2008; Lindgård, 2009]. Similarly, a center of (approximate) symmetry may be shifted vertically in multiphase systems after in-field cooling (FC), when one phase acquires a very hard FC remanence (MR,FC) and the others remain soft enough to approach saturation in the applied fields of the low-temperature loop (e.g., siderite and magnetite [Housen et al., 1996]). The superposition may yield a loop with inversion symmetry about a center at (0, MR,FC) [Housen et al., 1996], but perfect symmetry cannot necessarily be safely assumed in such a situation. In fact, symmetry failures of varying degrees are almost inevitable when multiple magnetic phases, with differing coercivity distributions, each have loops shifted by differing horizontal distances, as can easily be confirmed by simple forward models (not shown). Nevertheless, our experience overwhelmingly supports our intuition that symmetry is the rule in hysteresis loops measured on geological materials, especially at room temperature, and we proceed here to make use of it, without forgetting these caveats and the lack of universality.
The loop can be decomposed into remanent hysteretic and induced hysteretic components,

Mrh(H) = [M+(H) − M−(H)]/2 and Mih(H) = [M+(H) + M−(H)]/2,

respectively, where M+(H) and M−(H) are the upper and lower branches of the hysteresis loop. The Mrh curve, equivalent to the ΔM curve of Tauxe et al., has even symmetry (reflection symmetry across the M axis; Figure 1), and the Mih curve has odd symmetry (180° rotation or inversion through the origin; see, for example, Figure 6). The median fields of these curves, Hrh and Hih, characterize the separate coercivities of remanent and induced hysteretic magnetization processes [Fabian and von Dobeneck, 1997].
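In code, the decomposition is a two-line operation once both branches have been interpolated onto a common field grid (assumed here; the function name is ours):

```python
def decompose(M_upper, M_lower):
    """Split gridded branch moments, sampled at the same fields, into
    induced-hysteretic (odd) and remanent-hysteretic (even) parts."""
    Mih = [(u + l) / 2.0 for u, l in zip(M_upper, M_lower)]
    Mrh = [(u - l) / 2.0 for u, l in zip(M_upper, M_lower)]
    return Mih, Mrh

# tiny illustrative branches sampled at three common fields
Mih, Mrh = decompose([1.0, 0.6, 0.2], [0.8, 0.0, -0.2])
```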
The analysis in the remainder of this paper assumes that a measured hysteresis loop is the sum of a signal with inversion symmetry (whose center may be offset from the origin), random high-frequency noise and low-frequency drift. When these assumptions are valid, the upper and lower loop branches can be treated as replicate measurements for quantification of error and for statistical tests for (non)linearity. And even when the assumptions are not fully valid, some of the methods that we propose here should still provide useful ways to examine and characterize phenomena like exchange bias that result in deviations from origin-centered symmetry.
3. Center of Symmetry and Quantification of Noise
A significant displacement of the center of symmetry is immediately evident to the eye when a loop is compared with its symmetric equivalent, inverted through the origin (Figure 2). However, robust calculation of the center location turns out to be a nontrivial problem. Simple approaches often work well with high signal/noise ratios and/or with loop shifts parallel to one of the coordinate axes, but may fail with noisy data or with oblique shifts. Vertical and horizontal displacements can be difficult to separate: for example, a vertically shifted loop will generally have unequal horizontal axis intercepts (Hc+ ≠ −Hc−), which may be mistaken for a horizontal shift. Similarly, a horizontally shifted loop with nonnegligible paramagnetic slope will yield asymmetric intercepts (MR+ ≠ −MR−) from the high-field slope fitting, and this may be mistaken for a vertical shift. For a perfectly linear loop (e.g., for a paramagnet), it is impossible to distinguish the horizontal and vertical components of any offset. Given sufficiently clean data, iterative centering based on intercept asymmetry works well (and is the approach used in the MicroMag software), but this becomes problematic when noise levels are relatively high and uncertainties in the intercept values are consequently large.
A centering correction based on averaging symmetrically equivalent point pairs was described by von Dobeneck; this approach results in perfect symmetry for the corrected loop, but does not yield estimates of the offsets and may result in some distortion. The lower branch of the measured loop is inverted point by point through the origin, producing a set of symmetrically equivalent points that ideally (in the absence of noise, drift and offsets) duplicate those of the upper branch. Measurements are made at fields that are close to, but generally not exactly equal to, the fields specified for the loop; however, it is a simple matter to interpolate moments at fields exactly corresponding to the measured fields of the upper branch, for quantitative comparison. A few such symmetrically equivalent point pairs are indicated by tie lines in Figure 2a. If the original loop has a purely vertical shift, then the inverted loop has an equal and opposite vertical shift (and tie lines of constant length, independent of field), so symmetric averaging can be expected to remove the shift without introducing any distortion.
In contrast, if the measured loop has a horizontal offset (as in Figure 2a), symmetric averaging as proposed by von Dobeneck necessarily produces a loop that differs from the original in its shape. This can be understood by considering the Mrh and Mih curves of a horizontally shifted loop. The Mrh peak is also shifted horizontally, and the inverted loop will have an Mrh curve with oppositely directed offset. The average of the two loops will therefore have an Mrh distribution that is broader than the original (and bimodal if the shift is large enough). Similarly, imagine a loop with a simple Mih curve, resembling a horizontally shifted tanh curve, with a unique point of maximum slope at the center of symmetry (H0, M0). The Mih curve of the inverted loop has a slope maximum at H = −H0; the averaged loop consequently must have an Mih curve with a shape differing from that of the measured loop. Therefore, this centering approach is strictly valid only for vertically shifted loops, although it also effectively removes horizontal shifts (albeit without quantifying them), and does not appreciably distort the loop shape if the horizontal shift is small enough. For purely vertical shifts, a robust calculation of the displacement can be obtained even for quite noisy data. Additionally, this approach allows correction of closure errors due to instrument drift, as discussed below.
We have evaluated a number of alternative methods for robust calculation of both vertical and horizontal components of loop shifts (i.e., calculation of the center of symmetry (H0, M0)), and many of them work well under favorable conditions. For example, the Mrh curve is independent of vertical loop offsets, and therefore the center of symmetry or center of mass of Mrh(H) can be used to isolate the horizontal displacement of the loop [e.g., Fabian et al., 2008]. This approach fails, however, when the remanence ratio (MRS/MS) is low (and Mrh is consequently weak and noisy), or when drift is significant (and Mrh is consequently asymmetric). Our preferred approach is similar to that of von Dobeneck but adds an explicit horizontal offset Hoff (as an estimate of H0) in the construction of the symmetrically equivalent loop. The lower branch of the loop is inverted through (Hoff, 0); that is, each measured point with coordinates (H, M) is mapped into a new point at (−H −2Hoff, −M). M values are then interpolated for this inverted half-loop at field values equal to those of the upper branch of the uninverted loop for quantitative comparison (as in Figure 2a). When the trial value Hoff equals the true horizontal shift H0, the inverted loop has the same horizontal offset as the measured one, and the tie lines linking equivalent point pairs all have the same length. A plot of M+(H) versus Minv−(H, Hoff) is therefore linear when Hoff is an accurate measure of the horizontal loop shift, and is curved otherwise (Figure 2b). We find H0 by systematically varying Hoff to obtain the best linear relation of M+(H) and Minv−(H, Hoff), as quantified by the correlation coefficient R2. The intercept of the best fit line corresponds to 2M0. Because the function R2(Hoff) is in most cases a well-defined parabola (Figure 2b), an efficient algorithm for function minimization/maximization (e.g., Brent's algorithm [Press et al., 1986]) converges rapidly and very accurately to the maximum.
Because this method uses all of the data in the loop, it is much more robust than methods that use only the loop intercepts.
The strength of the correlation of M+(H) and Minv−(H, H0) is closely related to the signal/noise ratio for the loop: after the effects of horizontal offsets are removed from the data (Hoff = H0), remaining deviations from perfect correlation of symmetrically equivalent points are due solely to noise and drift (and possible inherent asymmetry due to exchange bias or other phenomena). From the standard definition of correlation for a least squares best fit line,

R2 = 1 − Σi(yi − ŷi)² / Σi(yi − ȳ)²,

where the yi are the interpolated values of Minv−(H, H0), and the circumflex and overbar denote best fit and mean values, respectively, we can take 1/(1 − R2) as a quantitative measure of signal/noise (s/n), and we take Q = log10(s/n) as a convenient measure of data quality. High Q values (e.g., 2 or greater) indicate that deviations from inversion symmetry due to all possible sources (noise, drift, and inherent asymmetry) are small. Conversely, low values indicate that at least one of these is significant. Strongly magnetic samples commonly yield loops with Q > 3 (s/n > 1000); when Q falls below 0.3 (s/n ∼ 2) it becomes difficult (or impossible) to compute meaningful parameters for the loop (see Figure 1 for examples).
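The centering search and the quality factor can be sketched as follows. This is an illustrative sketch, not the production algorithm: a simple grid scan stands in for Brent's method, the inversion is written explicitly as reflection through (Hoff, 0), and the synthetic shifted loop with deterministic pseudo-noise, along with all names and parameters, are our own choices.

```python
import math

def interp(x, xs, ys):
    """Two-point linear interpolation; returns None outside the data range."""
    pts = sorted(zip(xs, ys))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 if x1 == x0 else y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return None

def line_and_r2(x, y):
    """Least-squares line y = a + b*x and the squared correlation R2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    sxx = sum((u - mx) ** 2 for u in x)
    syy = sum((v - my) ** 2 for v in y)
    b = sxy / sxx
    return my - b * mx, b, sxy * sxy / (sxx * syy)

def center_of_symmetry(Hu, Mu, Hl, Ml, candidates):
    """Grid search over trial offsets Hoff; returns (H0, M0, Q)."""
    best = (-1.0, 0.0, 0.0)                      # (R2, Hoff, intercept)
    for Hoff in candidates:
        Hinv = [2.0 * Hoff - h for h in Hl]      # reflect lower branch
        Minv = [-m for m in Ml]                  # through (Hoff, 0)
        xs, ys = [], []
        for h, m in zip(Hu, Mu):
            mi = interp(h, Hinv, Minv)
            if mi is not None:                   # pair M+ with Minv-
                xs.append(mi)
                ys.append(m)
        a, b, r2 = line_and_r2(xs, ys)
        if r2 > best[0]:
            best = (r2, Hoff, a)
    r2, H0, a = best
    Q = math.log10(1.0 / (1.0 - r2))             # quality factor
    return H0, a / 2.0, Q                        # best-line intercept = 2*M0

# synthetic loop shifted by (H0, M0) = (0.05, 0.2), with deterministic noise
H = [1.0 - 0.01 * i for i in range(201)]
noise = lambda h: 1e-3 * math.sin(137.0 * h)
Mu = [math.tanh((h - 0.05 + 0.1) / 0.05) + 0.2 + noise(h) for h in H]
Ml = [math.tanh((h - 0.05 - 0.1) / 0.05) + 0.2 + noise(h + 3.0) for h in H]
H0, M0, Q = center_of_symmetry(H, Mu, H, Ml,
                               [-0.1 + 0.005 * i for i in range(41)])
```

For this synthetic loop the scan recovers the imposed shift to within the grid resolution, and the small noise amplitude yields a high Q.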
Once the center of symmetry is established, we construct a grid of regularly spaced field values spanning the range used in the loop measurements, at a spacing comparable to the average measurement field step, and we follow von Dobeneck in interpolating moment values at the grid fields. Rather than using a relatively wide moving window quadratic fit to smooth the data while gridding, we use simple two-point linear interpolation, in order to preserve as much as possible of the noise distribution for subsequent statistical tests. Horizontal and vertical offsets are removed from the gridded data. Noise and drift can be assessed by inspection of an error curve err(H) constructed by subtracting the inverted image of the lower branch from the upper branch (Figure 1):

err(H) = M+(H) − Minv−(H) = M+(H) + M−(−H).   (4)
Note that the interpolations required, first for centering and then for gridding, inevitably result in some smoothing (as can be clearly seen, e.g., in Figure 1d by comparing the loops before and after centering and gridding). Therefore, strictly speaking, we underestimate the noise variance to a small extent in calculating Q and in the statistical tests that we propose below. Nevertheless we find the data quality indicators Q and err(H) to be very useful in the decision tree for automated processing of loop data. Note also that high-field slope correction affects M+(H) and Minv−(H) equally, and therefore does not affect the noise quantification, but it may strongly affect the mean square signal strength, and therefore Q. For this reason we calculate a separate quality factor Qf for the ferromagnetic loop (obtained by correction for the linear high-field slope, as discussed below).
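The gridding and error-curve construction can be sketched as follows; the regular grid and two-point interpolation follow the text, while the function names and the noise-free tanh test loop are our own illustrative choices:

```python
import math

def interp(x, xs, ys):
    """Two-point linear interpolation (as in the text's gridding step)."""
    pts = sorted(zip(xs, ys))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 if x1 == x0 else y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("field outside measured range")

def grid_and_err(Hu, Mu, Hl, Ml, n=201):
    """Regrid both branches onto n evenly spaced fields and form
    err(H) = M+(H) - Minv-(H) = M+(H) + M-(-H)."""
    Hmax = min(max(Hu), max(Hl), -min(Hu), -min(Hl))
    grid = [-Hmax + 2.0 * Hmax * i / (n - 1) for i in range(n)]
    up = [interp(h, Hu, Mu) for h in grid]
    err = [u + interp(-h, Hl, Ml) for h, u in zip(grid, up)]
    return grid, up, err

# a noise-free symmetric loop should give err(H) = 0 at every grid field
H = [1.0 - 0.01 * i for i in range(201)]
Mu = [math.tanh((h + 0.1) / 0.05) for h in H]
Ml = [math.tanh((h - 0.1) / 0.05) for h in H]
grid, up, err = grid_and_err(H, Mu, H, Ml)
```

With noisy or drifting data, the same err curve becomes "flat but scattered" (noise) or systematically trending (drift), which is exactly the diagnostic used in section 4.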
4. Corrections for Drift and for Pole Piece Saturation
“Drift” denotes spurious changes in measured signal strength that occur on time scales comparable to that of the loop measurement. It is typically manifested by failure of loops to close, by lack of even symmetry in Mrh, and/or by failure of Mrh to decrease to zero in large (positive or negative) fields. It may be caused by various phenomena including temperature change in the specimen; physical displacement/reorientation of the entire specimen or of particles within it; or slow changes in the measuring instrument (e.g., vibration drive instability, electronic drift, sensor thermal variation). Depending on its cause, drift may be a linear or highly nonlinear function of time or of field. Similar observed behavior may also be produced in some cases by phenomena more pertinent to sample magnetic characteristics, including viscous magnetic relaxation and irreversible magnetization changes in fields around or above the maximum applied field.
For loops that fail to close, the initial measurement in the maximum positive field does not equal the final measurement in the same positive field, and we can define the closure error as Mce = M1 − MN = err(Hmax) − err(−Hmax). The simplest and most intuitive approach for drift removal is to treat the drift as linearly additive (i.e., to assume that Mce is the sum of incremental errors that accumulated at a constant rate), and to distribute the correction accordingly over the entire loop. A prorated correction ΔMi = Mce ((i − 1)/(N − 1) − 1/2) (where 1 ≤ i ≤ N) can be added to each measurement to ensure closure, and it averages to zero over the loop (producing no vertical shift). Unfortunately, Mce is not a robust parameter; it can have a very high relative uncertainty for noisy data sets. Moreover for reasons we will discuss below, this sort of correction often yields unreasonable results, e.g., with M+(H) < M−(H) and Mrh(H) < 0 for wide ranges of H (Figure 3c).
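The prorated correction can be written in a few lines (0-based indexing here; the sample moment values are arbitrary):

```python
def prorated_closure_correction(M):
    """Distribute the closure error Mce = M[0] - M[-1] linearly in time;
    the corrections sum to zero, so no net vertical shift is introduced."""
    N = len(M)
    Mce = M[0] - M[-1]
    return [m + Mce * (i / (N - 1) - 0.5) for i, m in enumerate(M)]

M_in = [5.0, 3.0, 1.0, 2.0, 4.6]
Mcorr = prorated_closure_correction(M_in)
```

After correction the loop closes exactly (first and last moments agree), and the mean moment is unchanged.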
The approach of von Dobeneck involves symmetric averaging of the upper and lower half loops, as described in section 3, and then vertically shifting the symmetrically averaged half-loops by half their intertip separation, guaranteeing exact closure (Figures 3b, 3c, and 3f). This approach is also intuitive, and somewhat more general than the simple prorated correction. It works well in many cases where Mce ∼ 0 but where the loop is distorted by drift effects in intermediate fields (Figures 3e and 3f). However, this method, like the prorated correction, fares poorly in most cases where Mce is large (Figures 3b and 3c).
Drift can be characterized and quantified most generally in terms of the trend of the error curve err(H) (Figure 3); random noise alone results in “flat” error curves (Figure 1). Note that any inherent loop asymmetry related to exchange bias or other phenomena will also result in structured (i.e., nonflat) error curves; here we will assume that a structured error curve indicates drift behavior, but the alternative possibility should always be borne in mind. Accurate correction for drift is complicated by uncertainty in how the correction should be distributed between the upper and lower loop branches and positive and negative fields. Recall that the error curve values for positive fields are computed by subtracting the inverted negative field lower branch values from the positive field upper branch values (equation (4)). A correction to remove the positive field error curve trend from the data could therefore be applied to the upper branch of the loop over positive fields, or to the lower branch over negative fields, or any equivalent linear combination of these. The same is true for the negative field error curve trend.
Thermal drift is common when loops are measured as a function of temperature, if the wait time is not long enough for sample temperature to equilibrate completely. Even at room temperature, thermal drift can be significant if chilled water is used to cool the electromagnet (as it is in our lab); the temperature in the pole gap is typically a few degrees lower than room temperature. Thermal effects are greatly amplified in slope-corrected loops for specimens with high ratios of paramagnetic/ferromagnetic magnetization (Mp/Mf) in strong fields (e.g., Figure 3a). For example, if Mp/Mf ∼20 in an applied field of 1T, and if absolute temperature varies by ∼2.5% (e.g., from 19.5 to 20 K) during the loop measurement, the paramagnetic closure error is equal to ∼50% of Mf. In our experience the most significant drift problems originate in this way.
In a simple model of this phenomenon we assume that a specimen, initially at temperature T0, is warming or cooling to the ambient temperature T1 in the magnetometer according to

T(t) = T1 + (T0 − T1) exp(−ct)
(“Newton's Law of Cooling”), and that the loop measurements are made at uniform time intervals (constant slew rate and uniform step size). The rate constant c depends mainly on specimen dimensions and on the thermal inertia of the specimen holder. The paramagnetic moment varies inversely with absolute temperature and proportionally with applied field. The drift effects resulting from the dependence M(H(t),T(t)) may take a variety of forms depending on the magnitudes and rates of change and on specimen properties. For the model in Figure 4, T0 = 19.5 K, T1 = 20 K, and 1/c equals three fourths of the total loop time; temperature does not fully equilibrate before the loop is completed. Taking Mp = 40Ms at (T1 = 20 K, μ0Happ = 1T), the resulting slope-corrected loop and error curve (Figure 4) strongly resemble those in Figure 3a. The prorated drift correction yields an extremely constricted loop (Figure 4, gray) with the problematic feature M−(H) > M+(H) over most of the field range. The true error due to varying paramagnetic susceptibility (red curve) necessarily vanishes in zero field; it forms a closed loop in negative fields; and it diverges strongly in positive fields. For this reason, the best drift correction that can be made based on err(H) is confined to positive fields: Mcorr+(H) = M+(H) − err(H) and Mcorr−(H) = M−(H) − err(−H) for H > 0, and no correction is applied to the negative field half of the loop. This yields a more “reasonable”-looking result (Figure 4, green, and Figure 3d) than the prorated drift correction (Figure 4, gray). In particular, it yields relatively accurate estimates of MR, HC, and the Mrh curve, all of which are badly erroneous after distributing a drift correction over the entire loop. 
In a similar model (not shown) in which T decreases from 20.5 to 20 K during the measurements, the prorated correction again yields a strongly inaccurate picture of the true loop shape (in this case showing a much broader Mrh curve), whereas a correction based on err(H) and confined to positive field again produces an acceptable approximation of the true loop.
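The thermal-drift scenario of Figure 4 can be reproduced schematically in a few lines. This is a sketch under simplifying assumptions: the Curie-law constant, the normalized time base, and the uniform field sweep are our own choices, tuned to mimic the Mp = 40Ms example in the text.

```python
import math

def thermal_drift_loop(T0=19.5, T1=20.0, c=4.0 / 3.0, n=101, Bmax=1.0, C=800.0):
    """Paramagnetic moment Mp = C*B/T recorded while the specimen relaxes
    toward ambient temperature following T(t) = T1 + (T0 - T1)*exp(-c*t),
    with t normalized so the full loop spans t = 0..1 (so 1/c = 3/4 of the
    loop time, as in the Figure 4 model)."""
    down = [Bmax - 2.0 * Bmax * i / (n - 1) for i in range(n)]  # +Bmax -> -Bmax
    B = down + down[::-1][1:]                                   # ... -> +Bmax
    M = []
    for k, b in enumerate(B):
        T = T1 + (T0 - T1) * math.exp(-c * k / (len(B) - 1))
        M.append(C * b / T)
    return B, M

B, M = thermal_drift_loop()
Mce = M[0] - M[-1]   # closure error, in units where Ms = 1
```

With C chosen so that Mp = 40Ms at (20 K, 1 T), the incomplete thermal equilibration leaves a closure error of roughly three quarters of Ms, comparable to the strongly drifted loops of Figure 3a.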
Unfortunately, however, this approach cannot be applied universally. For cases like that of Figure 3e, with little or no closure error but with significant drift (of unknown origin) in intermediate fields, a correction applied entirely to the positive field half of the loop (not shown) increases the distortion rather than removing it. A systematic method for drift correction that works well in almost all cases is to smooth the error curve using a moving window quadratic fit (to separate the longer-period drift from the high-frequency noise) and determine the field at which the maximum absolute value occurs. If the largest errors occur in strong fields (e.g., >0.75Hmax), as in Figure 3a, the closure error should be removed by subtracting the smoothed positive field half of the error curve from the positive field upper branch of the loop, and subtracting the smoothed negative field half of the error curve (reflected to positive fields) from the positive field lower branch of the loop. In cases where the largest errors occur in smaller fields (as in Figure 3e), i.e., where drift has produced distorted loops without large closure errors, better results are obtained by an approach similar to that of von Dobeneck, distributing the corrections uniformly across positive and negative applied fields.
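The smooth-and-locate step can be sketched with a moving-window quadratic (Savitzky-Golay-style) least-squares fit; the window half-width, the 3×3 solver, and the quadratic test curve below are illustrative choices of ours:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= f * M[col][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def smooth_quadratic(x, y, hw=5):
    """Moving-window quadratic least-squares smoother; returns the fitted
    value at each point (windows shrink one-sidedly at the ends)."""
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - hw), min(len(x), i + hw + 1)
        u = [x[j] - x[i] for j in range(lo, hi)]   # center on the target point
        v = [y[j] for j in range(lo, hi)]
        s = [sum(t ** k for t in u) for k in range(5)]
        b = [sum(w * t ** k for t, w in zip(u, v)) for k in range(3)]
        c = solve3([s[0:3], s[1:4], s[2:5]], b)    # normal equations
        out.append(c[0])                           # fitted value at window center
    return out

x = [0.1 * i for i in range(41)]
y = [2.0 - 3.0 * t + t * t for t in x]     # an exactly quadratic "error curve"
ys = smooth_quadratic(x, y)
i_max = max(range(len(ys)), key=lambda i: abs(ys[i]))
```

A quadratic input is reproduced exactly by the quadratic windows, and the located field of maximum |smoothed error| then decides which branch the correction is applied to.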
Another experimental artifact of particular importance for the characterization of high-field behavior is the approach to saturation of the electromagnet pole pieces, which changes the instrument sensitivity slightly during the high-field measurements, an effect which can be considered as a particular form of drift. In VSMs of the “Mallinson design” [Mallinson, 1966; Hoon, 1983], two sensing coils are mounted on each pole face, in series opposition, oriented with their normals parallel to the field and to the induced moment and perpendicular to the direction of sample vibration; this is the geometry of the Princeton VSMs. The specimen flux interacts with the permeable pole pieces, such that the flux coupled to the sensing coils is greater than it would be (for the same specimen moment) without the pole pieces. The effect is equivalent to magnetic “images” of the sample in the pole pieces [Hoon and Willcock, 1988]. As long as the pole piece permeability remains constant, the moment calibration is independent of field; but as the pole pieces approach saturation and their permeability drops, the images in effect fade away, and the apparent magnetic moment of the sample decreases (Figure 5). The effect is generally significant only in applied fields greater than 1 T. Numerous repeated calibration runs using Ni standards show that the system nonlinearity in high fields is very reproducible, and (surprisingly) nearly independent of the size of the air gap. Accurate compensation for this effect is critically important for evaluating the nonlinearity of sample magnetization in high fields. We find that the data of Figure 5 for fields exceeding 1 T can be fit very well by a 4th-degree polynomial:
f(B) = a0 + a1B + a2B² + a3B³ + a4B⁴,
where a0 = 0.7957, a1 = 0.6957, a2 = −0.8908, a3 = 0.5091, a4 = −0.1097, and the even coefficients change sign for negative fields. The exact form of this function is likely to differ for other instruments, but once it is established, a significant improvement in data reliability results from recalibration of all data measured in applied fields Bapp (=μ0Happ) greater than 1 T.
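As a concrete illustration, the polynomial can be evaluated and divided out of high-field data. Our reading (an assumption, not a statement from the source) is that the fitted quantity is the apparent moment normalized to its value at 1 T; note that the coefficients sum to exactly 1.0000 at B = 1 T, which supports that interpretation.

```python
A_COEF = (0.7957, 0.6957, -0.8908, 0.5091, -0.1097)  # a0..a4 from the text

def rel_sensitivity(B):
    """Assumed apparent/true moment ratio vs field B (tesla). The fit is
    stated to hold only above 1 T, so we return 1.0 below that."""
    b = abs(B)  # odd symmetry of the apparent moment (even coefficients
                # flip sign for B < 0) makes the ratio even in B,
                # so evaluating at |B| suffices
    if b <= 1.0:
        return 1.0
    return sum(a * b ** k for k, a in enumerate(A_COEF))

def recalibrate(B, M):
    # divide out the pole-piece saturation effect for |B| > 1 T
    return M / rel_sensitivity(B)
```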
Simple numerical filtering techniques such as moving window polynomial fitting can be quite effective at reducing the high-frequency noise in hysteresis data, but they generally fail to counteract longer-period noise, and they often fail to eliminate physically unreasonable features from noisy data. With few exceptions, “well-behaved” hysteresis loops exhibit certain mathematical properties, including the following:
1. There is monotonic change in (ferromagnetic) magnetization during each half-loop field sweep, i.e., dM/dH ≥ 0 everywhere. This property can be proved [Brown, 1978, section 4.2] for systems obeying micromagnetic equations. Theoretical counterexamples are known, e.g., for anisotropic Stoner-Wohlfarth ensembles aligned at a high angle to the field [see, e.g., Newell, 2005, Figure 4]. However, in natural materials with distributions of particle size and orientation, these exceptions are rare.
2. There are no inflections in the first and third loop quadrants as the magnetization relaxes from saturation in diminishing fields, with M″(H) ≤ 0 (concave-down curvature) in the first quadrant, and M″(H) ≥ 0 (concave-up curvature) in the third. Exceptions to this are seen in single-particle micromagnetic models [e.g., Yu and Tauxe, 2005, Figures 6 and 7]; however, for measurements on real specimens with distributions of particle size, shape and orientation, experience shows such exceptions to be rare. Deviations from this rule also arise in connection with metamagnetism, e.g., in low-T, high-field (up to 7 T) measurements on siderite and vivianite [Frederichs et al., 2003], and magnetic cooling field memory effects [Smirnov and Tarduno, 2002; Smirnov, 2006], but these too are rare exceptions to a generally valid rule. In our experience, deviations are usually a good reason to look carefully for experimental problems.
3. There is at least one inflection in the magnetizing (second and fourth) loop quadrants, in which the magnetization is driven from saturation remanence in one polarity to saturation magnetization in the opposite. This is not the case for non-remanence-carrying (e.g., superparamagnetic) ensembles.
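Properties like these can serve as automated sanity checks on measured or filtered loops. A minimal sketch of checks 1 and 2 for one branch, using finite differences; the tolerance argument, which absorbs measurement noise, is our addition:

```python
import math

def diffs(y):
    return [b - a for a, b in zip(y, y[1:])]

def check_halfloop(H, M, tol=0.0):
    """Check properties 1 and 2 for one branch reordered so H increases.
    Returns (monotonic, concave_down_in_first_quadrant)."""
    dM = diffs(M)
    monotonic = all(d >= -tol for d in dM)              # dM/dH >= 0
    d2M = diffs(dM)                                     # ~ curvature sign
    first_quad = [d2 for h, d2 in zip(H[1:], d2M) if h > 0]
    concave_down = all(d2 <= tol for d2 in first_quad)  # M'' <= 0 for H > 0
    return monotonic, concave_down

# demo on an ideal (tanh-shaped) branch
H = [i / 2.0 for i in range(-6, 7)]
M = [math.tanh(h) for h in H]
mono, concave = check_halfloop(H, M)
```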
Well-designed whole-loop fitting techniques (e.g., the hyperbolic function approach of von Dobeneck [1996]) are preferable to moving window smoothing (and to whole-loop Fourier filtering [e.g., Jackson et al., 1990]) because they are better at helping to ensure that these physically reasonable mathematical properties hold. They also provide much better (less noise-sensitive) estimates of the loop intercept parameters MR and HC than can be obtained by local polynomial fitting.
However, there are limitations to the accuracy with which a predefined set of analytical basis functions can reproduce real hysteresis behavior. For example, the hyperbolic basis function set used by von Dobeneck [1996] contains only singly inflected tanh curves with which to model the induced hysteretic magnetization. In our experience these are sufficient in most cases, but they fail badly for relatively square loops, for which Mih(H) inevitably has several inflection points (Figure 6). Similarly, Mrh curves with the plateau-and-shoulder form that is characteristic of single-domain demagnetization curves [e.g., Dunlop and Özdemir, 1997, section 11.2] cannot be modeled accurately as a sum of sech curves with nonnegative coefficients. The potential need for additional sets of basis functions was foreseen by von Dobeneck [1996], who discussed the advantages (improved fitting accuracy) and disadvantages (increased nonuniqueness and complexity of interpretation) of adding more functions to the unmixing problem. On the one hand, if we wish to obtain meaningful values for the function coefficients, representing the contributions of different coercivity classes [von Dobeneck, 1996], the simplicity of a single set of functions is a decisive advantage. On the other hand, if we wish to use the fitting approach primarily as a filter, to obtain the best possible noise-free representation of the loop and its component curves Mrh(H) and Mih(H) (from which we can calculate reasonable values for the standard parameters MR, MS, χHF, HC, Hrh, etc.), then it is necessary to expand the basis function set to include curves better able to match the behavior of SD grains.
For this purpose we use “double-logistic” curves. The sigmoid logistic function, used, e.g., for modeling population growth, is closely related to the hyperbolic tangent:
L(x) = 1/(1 + e^(−x)) = (1 + tanh(x/2))/2
A generalized form with horizontal and vertical scaling and offset parameters [Richards, 1959] is
f(x) = A + (C − A)/(1 + e^(−b(x − x0)))
where A and C are the lower and upper asymptotes, b is a scaling parameter that controls “growth rate,” and the maximum slope occurs at x = x0. Double-logistic curves can be constructed by defining sigmoidal half-curves for x ≤ 0 and then using odd or even symmetry to generate values for x > 0 (Figure 7a). For example, A = −1 and C = 0 can represent the negative field half of the odd functions used to model Mih(H). A family of double-logistic basis functions is created by field scaling (through suitable choices of b and x0) as was done by von Dobeneck [1996] for the hyperbolic basis functions (Figure 7b). We find that a total of 60 basis functions (30 hyperbolic and 30 double-logistic), with suitable field scaling, is generally adequate. We also add a linear function to the basis set to represent para/diamagnetic behavior. The least squares fitting problem consists of calculating optimum weights for the functions, i.e., calculating the set of linear coefficients [von Dobeneck, 1996]. Like the approach-to-saturation fitting discussed in section 7 below, this is an ill-conditioned inverse problem with strongly correlated basis functions. Moreover we require nonnegative coefficients. We use an iterative algebraic approach similar in essence to that of von Dobeneck [1996] to solve the nonnegative least squares problem, and for the relatively square loops obtained for samples containing materials like fine-grained hematite or SD magnetite, we obtain much better fits using the combined set of basis functions (Figure 8).
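A sketch of the double-logistic construction for the odd (Mih) case. The shift that forces the curve through zero at the origin, needed for a continuous odd extension, is our choice; the published construction may handle this differently:

```python
import math

def richards(x, A, C, b, x0):
    # generalized logistic: asymptotes A (lower) and C (upper),
    # growth-rate parameter b, maximum slope at x = x0
    return A + (C - A) / (1.0 + math.exp(-b * (x - x0)))

def double_logistic(x, b, x0):
    """Odd double-logistic basis function: a sigmoid half-curve
    (A = -1, C = 0) for x <= 0, mirrored by odd symmetry for x > 0.
    x0 should be <= 0 so the steep part lies in the negative half."""
    f0 = richards(0.0, -1.0, 0.0, b, x0)   # shift so the curve passes
                                           # through zero at x = 0
    def half(u):
        return richards(u, -1.0, 0.0, b, x0) - f0
    return half(x) if x <= 0.0 else -half(-x)
```

Field scaling of b and x0 then generates a family of such functions with different effective coercivities, analogous to the scaled tanh set.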
However, it is important to recognize that fitting errors are commonly significant, even with the additional basis functions. We can evaluate the misfit in terms of an F test, based on a noise variance estimate obtained through the assumption of loop symmetry (section 3) and subject to the same caveats.
6. F Test for Lack of Fit
We evaluate the significance of lack of fit through analysis of variance (ANOVA), which involves partitioning of the total misfit into components due to random noise and to systematic deviations related to inaccurate model predictions. We assume that inversion symmetry holds and that we have successfully centered the hysteresis loop. Then an inversion of the lower branch yields a replicate set of measurements for the upper branch. In effect, aside from small differences in the field values, we have two repeat measurements at each field, which provide a model-free estimate of the measurement error. This makes it possible to apply a “lack-of-fit” test to the data based on an analysis of variance [e.g., Davis, 1973]. Such a test determines whether variations from a given model are significant or mainly due to measurement errors.
The ANOVA calculations begin with the total sum of squares for the dependent variable
SST = Σ(Mi − Mmean)²  (9)
where the summation is over the upper branches of the measured and inverted loops, thus comprising all N measured values. Mmean is the average of all the loop moments, and the sum of squares due to regression is
SSR = Σ(Mfit,i − Mmean)²  (10)
where Mfit,i is the value obtained by summing the least squares best fit weighted basis functions. SSR quantifies the variation in the data set that is “explained” by the model dependence of M on H. The remaining “unexplained” variation (noise and/or lack of fit) is
SSD = SST − SSR = Σ(Mi − Mfit,i)²  (11)
and the goodness of fit is given by
Qf = log10(SSR/SSD)  (12)
The total misfit SSD can be partitioned into variation due to “pure error” (random noise) and variation due to lack of fit (model inaccuracy), by treating the upper branch and the inverted lower branch measurements as replicates, whose differences (after quantification of and correction for any loop shifts and drift) are due to noise alone:
SSPE = Σ(M+,i − Minv−,i)²/2  (13)
The variation due to lack of fit is then determined as
SSLF = SSD − SSPE  (14)
For N measurements in the complete loop we have N/2 replicates, and one degree of freedom for each replicated point. The mean square pure error (MSPE) is SSPE divided by the number of degrees of freedom:
MSPE = SSPE/(N/2)  (15)
The mean square error due to lack of fit (MSLF) has N/2 − P degrees of freedom (where P is the number of model parameters),
MSLF = SSLF/(N/2 − P)  (16)
and an F test for lack of fit is based on the variance ratio MSLF/MSPE. When this ratio is small, the misfit is mainly due to random noise (MSLF ≪ MSPE), and we have no reason to reject the null hypothesis that the weighted sum of analytical basis functions accurately describes the M(H) dependence. When the ratio is large (MSLF ≫ MSPE), the total misfit is due mainly to lack of fit (i.e., model inaccuracy), and we reject the null hypothesis. The critical value is given by the F distribution with (N/2 − P, N/2) degrees of freedom; for typical loop data sets with hundreds of measurements, a best fit 60-parameter model can be rejected with 95% confidence when F exceeds a value of about 1.5. For the loop shown in Figures 6 and 8, the lack of fit is very significant for the hyperbolic basis functions alone (FLF = 855, Figure 6) and even for the combined basis function set (FLF = 35, Figure 8), because the data quality is so high and MSPE is so small.
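Under the replicate interpretation, the whole test reduces to a few sums. A sketch, assuming the upper-branch and inverted-lower-branch moments have already been interpolated to shared fields; when the model prediction equals the replicate mean at every field, the lack-of-fit term vanishes and F goes to zero:

```python
def lack_of_fit_F(M_up, M_lo_inv, M_fit, P):
    """ANOVA lack-of-fit F statistic. M_up and M_lo_inv are the two
    'replicates' at each field, M_fit the model prediction there, and
    P the number of model parameters. Returns (F, MSLF, MSPE)."""
    n_pairs = len(M_up)              # = N/2 replicate pairs
    # pure error: (u - l)^2 / 2 equals the squared deviations of the
    # two replicates about their mean
    sspe = sum((u - l) ** 2 / 2.0 for u, l in zip(M_up, M_lo_inv))
    ssd = sum((u - f) ** 2 + (l - f) ** 2
              for u, l, f in zip(M_up, M_lo_inv, M_fit))
    sslf = ssd - sspe
    mspe = sspe / n_pairs            # N/2 degrees of freedom
    mslf = sslf / (n_pairs - P)      # N/2 - P degrees of freedom
    return mslf / mspe, mslf, mspe

# demo: model predictions equal to the replicate means -> pure noise only
F, mslf, mspe = lack_of_fit_F([1.0, 2.1], [1.2, 1.9], [1.1, 2.0], P=1)
```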
Nevertheless many loops for weakly magnetic materials have signal/noise ratios that are too low to obtain meaningful parameter estimates without whole-loop filtering. For the data set in Figure 9, Qf = 0.37, and windowed polynomial fits near the axis crossings yield rather problematic parameter estimates; in particular the calculated MR is negative. In the IRM database, over 1000 loops (∼3% of all measured) have low enough signal/noise ratios that they give physically unreasonable parameter values (MR < 0, MS < 0, MS < MR, and/or HC < 0) when calculations are based on local polynomial fits. Yet for the vast majority of these there is a discernible signal, obscured but not completely lost in the noise, for which reasonable parameter estimates can be obtained by whole-loop filtering (e.g., Figures 9c and 9d). When the lack of fit is not statistically significant (FLF ≤ 1.5), parameters are much more reliably obtained from the filtered loop.
7. Linearity Tests and Nonlinear Fitting
For many natural materials, hysteresis loops are very well approximated by straight lines, due to low concentrations of ferromagnetic minerals and resultant dominance of the field-dependent magnetization by paramagnetic (or even diamagnetic) minerals. The excellent sensitivity and stability of the MicroMag instruments allow very weak ferromagnetic loops to be isolated from a much stronger linear background, with good precision and accuracy. However, there are obviously limits, and we find it useful to quantify the significance of any nonlinearity through ANOVA.
We take linearity as a null hypothesis, and fit a line to the entire loop by least squares, yielding a best fit slope a and intercept b, and as in section 6 we calculate sums of squares SST, SSR (=Σ(Mfit,i − Mmean)², where Mfit,i = aHi + b), and SSD. The total misfit SSD can again be partitioned into variation due to “pure error” (random noise) and variation due to lack of fit (in this case nonlinearity of M(H)), by equations (13)–(16). The F test for nonlinearity is again based on the variance ratio MSLF/MSPE. When this ratio is small, the misfit is mainly due to random noise (MSLF ≪ MSPE), and we cannot reject the hypothesis of purely linear M(H) dependence. When the ratio is large (MSLF ≫ MSPE), the misfit is due mainly to lack of fit, and we reject the null hypothesis. The critical value is given by an F distribution with (N/2 − 2, N/2) degrees of freedom; for typical loop data sets with dozens to hundreds of measurements, linearity can be rejected with 95% confidence when F exceeds a value of about 1.25.
The same sort of F test can be applied to the high-field portion of a hysteresis loop to evaluate the likelihood that the ferromagnetic signal has reached saturation. We test for high-field linearity/saturation using different threshold fields (60%, 70% and 80% of the peak field). The null hypothesis is again that magnetization is a linear function of field. Large values of FNL70% indicate that the loop contains significant reproducible deviations from linearity (e.g., curvature) over the interval 0.7Hmax < ∣H∣ < Hmax, and that the hypothesis of saturation must therefore be rejected. If the FNL60%, FNL70% and FNL80% tests rule out linearity for those segments, we apply a nonlinear approach-to-saturation fit [e.g., von Dobeneck, 1996; Fabian, 2006]; otherwise we use the saturated segment to calculate linear high-field slope (χHF) and saturation magnetization (MS). Figure 10 shows several examples of loops with statistically significant nonlinearity in the high-field range.
Nonlinear fitting of the high-field magnetization is described by
M(H) = MS + χHFH + a−1H⁻¹ + a−2H⁻²  (17)
or
M(H) = MS + χHFH + αH^β  (18)
[Fabian, 2006], where the coefficients α, β, a−1 and a−2 must all be negative to describe the approach to saturation. In (17) the magnetization is a linear function of the model parameters, which can therefore be computed by least squares methods; (18) requires alternative approaches [Fabian, 2006], to which we will return after a brief discussion of (17).
For a set of model parameters P including χHF, MS, a−1 and a−2, and a set of fields H1, H2, …HN, a synthetic or predicted data set is calculated by
Mi = χHFHi + MS + a−1Hi⁻¹ + a−2Hi⁻²,  i = 1, 2, …, N  (19)
or M = HP. The inverse problem of calculating P from an M(H) data set can be considered as an unmixing problem, with the columns of H as the basis functions and the model parameters as the weighting factors. The least squares best fit solution for the parameter set is P = (HᵀH)⁻¹HᵀM. The matrix HᵀH quantifies the variance and covariance of the basis functions. In ideal inverse problems, the basis functions are orthogonal (covariances are zero), and model parameters can be determined robustly. The nonlinear high-field parameter determination is a very ill conditioned problem, because the functions involved are strongly covariant over the range of interest: for example −H⁻¹ and H have a correlation of R² = 0.991 for the typical fitting range of μ0H between 0.7 and 1 T. This results in large and strongly correlated uncertainties in the values obtained for the parameters of the nonlinear model, and the parameter estimates are much more noise-sensitive than they are for a linear fit. Moreover, incomplete or inaccurate compensation for drift and for pole piece saturation may greatly affect the results of the nonlinear fitting.
To fit the parameter set of (18), we take trial values of β between −2 and −1, at increments of 0.1, and for each fixed value βi we solve the linear inverse problem to obtain χHF(βi), MS(βi) and α(βi) by least squares. The value of β that yields the best overall fit to the data can be taken as the optimal estimate. Fabian [2006] notes that physically meaningful values of β may extend beyond this range: expected values for samples with high defect densities are −1 < β < 0. However, as β approaches 0, the nonlinear and constant basis functions become indistinguishable and the matrix HᵀH becomes singular. Therefore we restrict the range of trial values to −2 ≤ β ≤ −1, although in many cases it is clear that better fits could be obtained for higher or lower values (e.g., in Figure 11, R² increases monotonically from β = −2 to β = −1). In general, however, the goodness of fit depends very weakly on β. The strong covariance of the basis functions ensures that a variety of different models can fit the data almost equally well (e.g., in Figure 11b, R² ranges from 0.9999820 to 0.9999905). It is conceivable that when a well-defined maximum R² (minimum SSD) is located within the search range, confidence intervals could be defined in terms of χ², but in practice we find that most often the “best” solution lies outside the constrained range.
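The constrained grid search can be sketched as follows. The small normal-equations solver stands in for a library least-squares routine; the synthetic demo data are our own, constructed to follow the form of (18) exactly with β = −1.5:

```python
def solve3(A, y):
    # least squares via normal equations for a 3-column design matrix A
    G = [[sum(A[i][p] * A[i][q] for i in range(len(A))) for q in range(3)]
         for p in range(3)]
    b = [sum(A[i][p] * y[i] for i in range(len(A))) for p in range(3)]
    for c in range(3):                       # Gaussian elimination
        piv = max(range(c, 3), key=lambda r: abs(G[r][c]))
        G[c], G[piv] = G[piv], G[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, 3):
            f = G[r][c] / G[c][c]
            for k in range(c, 3):
                G[r][k] -= f * G[c][k]
            b[r] -= f * b[c]
    x = [0.0] * 3
    for r in (2, 1, 0):                      # back substitution
        x[r] = (b[r] - sum(G[r][k] * x[k] for k in range(r + 1, 3))) / G[r][r]
    return x

def fit_beta_grid(H, M):
    """Grid search beta in [-2, -1] at 0.1 steps; for each trial, solve
    the linear subproblem M = MS + chi*H + alpha*H**beta by least squares
    and keep the (ssd, beta, MS, chi, alpha) with smallest misfit."""
    best = None
    for i in range(11):
        beta = -2.0 + 0.1 * i
        MS, chi, alpha = solve3([[1.0, h, h ** beta] for h in H], M)
        ssd = sum((MS + chi * h + alpha * h ** beta - m) ** 2
                  for h, m in zip(H, M))
        if best is None or ssd < best[0]:
            best = (ssd, beta, MS, chi, alpha)
    return best

# demo: noise-free synthetic data with MS = 1.0, chi = 0.2, alpha = -0.05
H = [0.70 + 0.02 * i for i in range(16)]
M = [1.0 + 0.2 * h - 0.05 * h ** -1.5 for h in H]
ssd, beta, MS, chi, alpha = fit_beta_grid(H, M)
```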
Strong sensitivity of best fit parameters to small changes in the data is the hallmark of ill-conditioned inverse problems. This is very clearly shown by a “bootstrap” analysis of the uncertainty in the model parameters [Efron, 1981; Tauxe, 1998]. We randomly resample the high-field data and repeat the best fit calculations, as described in the previous paragraph, 1000 times. The resampling is performed “with replacement,” so that some data points are selected more than once and others are not selected, and the number of points included in the calculation is the same each time. For most data sets, various random resamplings produce best fit β values covering the entire range −2 ≤ β ≤ −1. In Figure 11c we show the distribution of best fit models in the four-dimensional parameter space, projected onto a plane with α on the horizontal axis and values of the remaining parameters (χHF, MS and β) on the vertical axis, normalized to their respective means for the 1000 best fit solutions. For the high-quality (Q∼2.9) data set of Figure 11a, 1000 random draws yield a mean best fit value of βmean ∼−1.3, with very strong concentrations at β = −1 (β/βmean∼0.75) and β = −2 (β/βmean∼1.5), due to our truncated search range. For the large population of best fit parameter sets with β = −1, the strong covariance of the other parameters is evident: solutions with relatively high (i.e., less negative) estimates for α also have high estimates for χHF and low values for MS. The set of solutions with β = −2 likewise has corresponding values of α, χHF and MS that lie along clearly defined linear trends. Overall best fit MS estimates vary by roughly ±50% from the mean. All best fit α values are negative, consistent with the F test results ruling out high-field linearity (for which α would equal zero). Extrapolation of the trends to α = 0 gives χHF and MS values equal to those obtained by linear fitting, as expected.
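The resampling scheme itself is independent of the particular fitting routine and can be sketched generically. The line-slope estimator below is only a stand-in for the four-parameter high-field fit, chosen so the sketch stays self-contained:

```python
import random

def bootstrap(H, M, estimator, n_draws=1000, seed=0):
    """Resample (H, M) pairs with replacement, keeping the sample size
    fixed, and re-run the estimator on each draw; some points are picked
    more than once and others not at all."""
    rng = random.Random(seed)
    n = len(H)
    estimates = []
    for _ in range(n_draws):
        idx = [rng.randrange(n) for _ in range(n)]
        estimates.append(estimator([H[i] for i in idx],
                                   [M[i] for i in idx]))
    return estimates

def slope(H, M):
    # stand-in estimator: ordinary least squares slope
    n = len(H)
    hm, mm = sum(H) / n, sum(M) / n
    return (sum((h - hm) * (m - mm) for h, m in zip(H, M))
            / sum((h - hm) ** 2 for h in H))

# demo: exactly linear data, so every resample recovers the same slope
H = [0.1 * i for i in range(12)]
M = [2.0 * h + 1.0 for h in H]
dist = bootstrap(H, M, slope, n_draws=200, seed=1)
```

For noisy data the spread of `dist` (e.g., its percentiles) characterizes the parameter uncertainty, exactly as in the β, α, χHF and MS distributions of Figure 11c.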
Our fitting approach above differs from that of Fabian [2006], who linearizes the problem by taking the second derivative of (18) to eliminate the linear and constant terms, and then taking logarithms to get
log∣M″(H)∣ = log∣αβ(β − 1)∣ + (β − 2)logH  (20)
A graph of log∣M″(H)∣ as a function of logH thus yields a line with slope β − 2 and an intercept from which α may be determined. Because differentiation amplifies any noise in the data set, the second derivative is extremely noise-sensitive, and Fabian [2006] recommends beginning with a smooth interpolating function. Contrary to expectations, he finds mostly positive values for β in a set of MORB samples, averaging ∼+0.3 and reaching nearly to +1.0. (The nonlinear term in equation (18) vanishes as H → ∞ only if β < 0, so these fits cannot represent approach to saturation, as noted by Fabian [2006]. Note also that for β = +1 the nonlinear term in (18) is indistinguishable from the linear term.)
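A sketch of this log-log linearization; plain central differences stand in here for the smooth interpolating function, so this version behaves only on nearly noise-free data (the demo data are synthetic, constructed with β = −1.5):

```python
import math

def beta_from_loglog(H, M):
    """Estimate beta from the slope of log|M''| vs log H (slope = beta - 2).
    Assumes uniform field spacing; uses central differences for M''."""
    dH = H[1] - H[0]
    d2 = [(M[i - 1] - 2.0 * M[i] + M[i + 1]) / dH ** 2
          for i in range(1, len(M) - 1)]
    x = [math.log(h) for h in H[1:-1]]
    y = [math.log(abs(v)) for v in d2]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    slope = (sum((a - xm) * (b - ym) for a, b in zip(x, y))
             / sum((a - xm) ** 2 for a in x))
    return slope + 2.0

# demo: noise-free data following M = MS + chi*H + alpha*H**beta
H = [0.5 + 0.05 * i for i in range(21)]          # 0.5 .. 1.5 T
M = [1.0 + 0.2 * h - 0.05 * h ** -1.5 for h in H]
beta_est = beta_from_loglog(H, M)
```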
We find for a similar sample (Figure 12) that smoothing using moving window linear or quadratic fits generally fails to eliminate enough noise to allow accurate determination of the local second derivative in the field region where the approach to saturation takes place (Figure 12a). Above about 0.6 T, the loop appears to be fully closed and M″ does not appear to differ significantly from zero, taking on local positive and negative values due to random noise. However, a close inspection of M(H) over this field range appears to show a slight but consistent downward curvature, and the F tests conclusively rule out linearity. Further, a linear M(H) high-field fit yields a very low value for MS (∼1.6MR), strongly suggesting nonsaturation. The second derivative of the best fit hyperbolic function set changes sign several times in relatively low fields (Figure 12a). Each individual analytical basis function has its own points of maximum slope and maximum curvature, and when the weighted functions are summed to approximate the measured loop, the second derivative alternates between positive and negative values as the successive basis functions pass through these critical points. M″(H) then decreases monotonically in fields above about 0.25 T and reaches values indistinguishable from zero by about 1.0 T (Figure 12a).
Equation (20) very significantly overestimates the consistent high-field curvature (Figure 12b) when local positive and negative values of M″ fail to average out due to the use of absolute values. For the data of Figure 12a, smoothing using a nine-point running average before differentiating gives a noisy but apparently linear logM″(logH) trend over the interval 0.1 < μ0H < 1.6 T, yielding β ∼ +0.95 (Figure 12b). Increasing the smoothing window width to 21 gives β ∼ +0.22, which is still clearly erroneously high. Whole-loop filtering might be expected to be better than running average smoothing for nonlinear high-field fitting by (20), but, unfortunately, we find that even this is not a robust way to determine α and β. In this example we obtain an estimate for β of −2.75 (Figure 12b), which is almost certainly erroneously low. The high-field curvature involved in the approach to saturation is so subtle that, even if a hyperbolic function model provides an excellent fit to M(H), it still may not accurately represent M″ (H). Overall, equation (20) yields estimates of β for this data set ranging all the way from −2.75 to +0.95, depending on exactly how we do the required smoothing. Relatively minor differences in the smoothed data lead to very large differences in estimates of β.
Bootstrapping 1000 least squares fits for the interval 1 < μ0H < 1.6 T (Figure 12c) gives a constrained mean estimate of β ∼ −1.3, which is still rather poorly defined: a large majority of the random samples gave fits with β at either end of the permitted range. When we repeat the procedure for the field range 0.6 < μ0H < 1.0 T, where the curvature (second derivative) is a bit more significant, we find β ∼ −1.6, similar to the value (−1.8) found by Fabian [2006] using measurements in applied fields up to 7 T. Because we have constrained the range of permissible β values, it is not clear that the mean of the 1000 individual best fit βs accurately represents the central limit value of the distribution, but under the circumstances it is the best estimate that we are able to obtain. The definitive result of the bootstrap analysis is that the data are entirely consistent with β values spanning a wide range, from less than −2 to greater than −1, and this data set does not allow us to determine β any more precisely than that.
The value of β is of interest because it is related to characteristics of the magnetic grains including dislocation density and internal stresses [Fabian, 2006]. The large uncertainty in its value is therefore unsatisfying. But even more problematic is the related uncertainty in the other fit parameters including MS, which cannot be properly calculated via equation (18) when the values of α and/or β calculated from (20) are positive. The bootstrapped least squares solutions (Figure 12c) give a constrained mean MS estimate that is ∼15% higher than that obtained by linear M(H) fitting, thereby reducing the apparent MR/MS ratio from 0.60 to 0.53. The range of solutions obtained by bootstrapping includes MS values greater and lower than the mean by about 10%–15% (Figure 12c), allowing a range of MR/MS ratios down to about 0.43.
The relatively large uncertainties in the parameters, as we have emphasized, are due to the fact that a wide variety of models fit the data almost equally well. Figure 12d compares the high-field measured moments M+(H) and Minv−(H) with values predicted by the best fit linear (M(H) = 0.162 + 0.190μ0H) and nonlinear (M(H) = 0.193 + 0.176μ0H − 0.0172(μ0H)^−1.25) models. Both fits look quite satisfactory, but the misfits for the nonlinear model are clearly smaller and “flatter” than for the linear model, which has maximum misfits near the center of the fitting range. Nevertheless it is important to ask whether the improvement in fit is significant. This can be assessed by the statistic
Fnl,lin = [(SSD,lin − SSD,nl)/(pnl − plin)]/[SSD,nl/(N − pnl)]  (21)
where pnl and plin are the number of parameters of the nonlinear and linear models (4 and 2, respectively), N is the number of measured values used in the fitting, and SSD,nl and SSD,lin are the summed squared residuals of the two models. High values indicate a statistically significant improvement of fit resulting from the addition of the nonlinear term; the critical value is given by an F distribution with (2, N – 4) degrees of freedom, approximately 3–3.5 for typical values of N. For this data set Fnl,lin = 76.6, and the nonlinear fit is very significantly better, even if the differences between the models appear to be slight. Unfortunately, however, we cannot use the same sort of test to choose among the various nonlinear models that provide excellent fits.
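In code the comparison is a one-liner, with degrees of freedom as stated in the text; the demo values are arbitrary, chosen only to exercise the arithmetic:

```python
def f_nl_lin(ssd_lin, ssd_nl, N, p_lin=2, p_nl=4):
    """F statistic for the improvement of the nonlinear (p_nl-parameter)
    fit over the linear (p_lin-parameter) fit on N measured values;
    compare against F with (p_nl - p_lin, N - p_nl) degrees of freedom."""
    return (((ssd_lin - ssd_nl) / (p_nl - p_lin))
            / (ssd_nl / (N - p_nl)))

# demo: the nonlinear model cuts the residual sum of squares
# from 10.0 to 2.0 on 104 points
F = f_nl_lin(ssd_lin=10.0, ssd_nl=2.0, N=104)
```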
The best means of determining MS is, of course, to measure in fields strong enough to saturate the magnetization [e.g., Fabian, 2006]. High values of Fnl,lin show definitively that a model of the form (18) provides a better fit to the data and better estimates of MS and χHF than can be obtained from the linear M(H) approximation. However, even these improved estimates may be inaccurate if the range of applied fields is not fully in the approach-to-saturation region [Fabian, 2006]. In Figure 13 we compare hysteresis data measured to 1 T on a Princeton VSM with data measured to 5 T on a Quantum Design MPMS, for a synthetic basalt sample [Bowles et al., 2009] (Figure 13a) and a sample of partially maghemitized pseudo-single-domain magnetite, the Wright 3006 [see, e.g., Yu et al., 2002] (Figure 13b). Although the 1 T loops both appear saturated (linear M(H)) above 0.7 T (Figure 13, insets), the F tests decisively reject the hypothesis of saturation in both cases (FNL70% = 181 and Fnl,lin = 1730 for the basalt; FNL70% = 273 and Fnl,lin = 7540 for the magnetite powder). Slope-corrected loops calculated using the mean and range of the bootstrapped fits to (18) generally still underestimate MS for the basalt, but do reach values close to that from the 5 T loop (Figure 13). This underestimation appears to indicate that the approach-to-saturation region does not extend down to the 0.7–1 T interval (i.e., that the magnetization does not follow (18) over the full range between 0.7 T and complete saturation in 3–5 T), in agreement with the conclusion of Fabian [2006] for his set of MORB samples. For the magnetite powder, the MPMS data show that the slope dM/dH continues to change systematically in fields up to about 2.5 T, where saturation is finally reached; a linear M(H) fit over the 3–5 T range yields MS = 92.1 Am2/kg, essentially identical to the value expected for stoichiometric magnetite [e.g., Hunt et al., 1995].
However, some surface oxidation is indicated by a slight depression of the Verwey transition temperature observed in low-T remanence experiments (not shown) to about 108 K, and we attribute the incomplete saturation below 1 T to this partial oxidation. A linear M(H) fit to the 0.7–1 T data yields an MS estimate of 87.8 Am2/kg, nearly 5% lower than the value from the higher-field MPMS measurements. Extrapolation of the bootstrapped 0.7–1 T approach-to-saturation fits reproduces the slope-corrected MPMS measurements quite accurately (Figure 13b), with MS estimates ranging from 91.4 to 95.4 Am2/kg, and an average of 92.5 Am2/kg.
Hysteresis measurements are essential tools in fundamental rock magnetism [Dunlop and Özdemir, 1997], and are widely used in environmental magnetism [Evans and Heller, 2003] and other geologically oriented applications. Our aim in this study has been to begin addressing some basic questions about the limitations on what we can determine from typical hysteresis measurements, and to explore some important but commonly unrecognized sources of error and how they affect the physical property values obtained by conventional processing. Our statistical and mathematical approaches are simple but significant first steps for calculating objective measures of data quality and for assessing the suitability of mathematical models of hysteresis. We hope that the community at large will take up the challenge of improving on our approach and expanding it ultimately to enable rigorous statistical estimates of the uncertainties in fundamental properties derived from hysteresis data.
A key problem is accurate determination of parameters describing the approach to saturation (MS, β, and χHF). This is important for its own sake [Fabian, 2006] but also because other loop parameters, particularly HC and Hih, can be strongly affected by the slope correction derived from high-field fitting, and therefore their uncertainties are compounded by those in χHF. Although we have not addressed that issue in this paper, it would be relatively straightforward in future work to include those critical field parameters into a bootstrap or Monte Carlo analysis to explore their sensitivity to changes in the high-field fitting, by recalculating the slope-corrected loop parameters for each resampled or noise-modified data set. A fully numerical treatment may in fact be an efficient and effective approach for estimating uncertainty in all derived curves and parameters, if a rigorous analytical approach proves intractable.
A fundamental principle in data processing can be stated as “Primum non nocere” or “First, do no harm.” This principle essentially dictates the order of steps in routine processing, to avoid degrading or distorting the M(H) signal by inappropriate treatments including forced symmetry and unnecessary filtering. The approach outlined in this paper assumes symmetry, but does not impose it, nor do we advocate symmetric averaging (as in the work by von Dobeneck [1996]), even though for the vast majority of loops measured in our lab it would be beneficial. We begin with calculation of a center of symmetry, enabling bulk displacement of the loop without any change in its shape, other than some minimal and inevitable smoothing due to the required interpolation. Treating the upper and (inverted) lower loop branches as replicates enables us to quantify the statistics of deviations from perfect symmetry, due mainly to noise and drift but in some cases due to inherent asymmetry related to phenomena such as exchange bias. Of these three, drift is the only one that we recommend correcting routinely and automatically, because uncorrected drift errors have strong and detrimental effects on the subsequent tests for nonlinearity. Admittedly there is some danger here of altering an inherent asymmetry, especially in cases like that in Figure 3e where the origin of the drift is not well understood. The best recourse is of course to inspect processed data and raw data together, as in Figure 3, so that the effects of the processing can be evaluated and interpreted. And when a physically meaningful asymmetry is thought to be present, we recommend that (1) drift corrections not be applied and (2) replicate measurements of the entire loop be made to enable future statistical analysis. Similarly we do not advocate automatically filtering for noise reduction (although it can be routinely applied as an analytical tool for obtaining the hyperbolic spectra [von Dobeneck, 1996]).
The Q factor provides an objective criterion for deciding when filtering should be applied, and the lack-of-fit F test provides a quantitative means of determining when the results should be accepted. Similarly, high-field M(H) linearity can be assessed quantitatively to determine when approach-to-saturation fitting is necessary, and an F test can be used to evaluate whether the improvement in fit warrants the increase in model complexity.
9. Summary and Conclusions
A major goal of this paper is to open a discussion in the community at large by proposing some simple first steps in evaluating the quality of hysteresis data and in assessing uncertainties in fundamental properties like MS and χHF. Hysteresis-related quantities, and their dependence on temperature, orientation, and other variables, are widely used in pure and applied rock magnetism, for studying underlying physical phenomena, or for characterizing magnetic mineralogy and grain size in order to gain insight into geological processes. In most cases, either the signal/noise ratio is clearly adequate and interpretations can be made reliably, or the data quality is so poor that no attempt is made to use them. However, there is always a gray zone between these limits, where it is particularly important to have quantitative, objective measures of data quality, and estimates of uncertainty in parameters derived from hysteresis loops. And even with high signal/noise ratios, there may be relatively large parameter uncertainties related to subtle mathematical aspects of the inverse problem. We contend that, traditional practice notwithstanding, it is unacceptable to omit quantitative estimates of data quality or parameter uncertainty from summary presentations of hysteresis results.
With important caveats, we can begin quantitative assessment of data quality for individual hysteresis loops by exploiting their symmetry. Hysteresis loop symmetry enables us to treat the upper and lower measured branches as replicates for quantifying noise (or other deviations from perfect symmetry), and to use ANOVA techniques to test for statistically significant departures from various models including whole-loop linearity, high-field linearity, and whole-loop superposition of sets of analytical basis functions. The approach is very similar to that used in evaluating anisotropy of susceptibility [Jelinek, 1976; Tauxe, 1998], where replicate symmetrically equivalent (antiparallel) measurements allow partitioning of fitting error into pure random measurement error and error due to model inadequacy (for the null hypothesis of isotropy). When the lack of fit outweighs the pure error, the model must be rejected, and the anisotropy can be considered to be statistically significant. As long as the assumption of symmetrically equivalent upper and lower hysteresis loop branches is valid, we can formulate F tests for significance of nonlinearity of the whole loop or of the high-field portions, and tests for significance of fitting errors in loop models using analytical basis functions.
We recommend that some measures of data quality and/or parameter uncertainty be included in the MagIC database and in tabular or graphical (e.g., Day plot) presentations of hysteresis data in future publications. Simplest is the quality parameter Q, based on the mean square mismatch between symmetrically equivalent moments. F statistics for high-field nonlinearity and for improvement of fit due to addition of nonlinear terms are also relatively straightforward. These statistics provide objective and quantitative tools for assessment of overall data quality and of the degree of saturation attained, which are conventionally evaluated mainly by subjective and qualitative means. We further recommend that, to the greatest extent possible, raw loop (and FORC [e.g., Roberts et al., 2000]) data should be preserved in MagIC to document the processed summary data shown in publications.
Necessary prerequisites for the application of these tests include correction for loop offsets and drift. On the assumption of inherent symmetry, we have devised a robust method for quantifying and removing loop shifts due to factors including sensor offsets, very hard field-cooled remanences, and exchange bias (although the latter two phenomena may also result in significant departures from symmetry). Drift corrections are somewhat more problematic, and differing approaches are required depending on the causal mechanism. An effect that can be considered as a particular type of instrument drift is saturation of the electromagnet pole pieces and the resultant field-dependent changes in magnetic moment calibration. Failure to correct for this effect leads to exaggerated estimates of high-field loop curvature and nonsaturation of the ferromagnetic carriers in the sample.
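An offset correction of this kind can be illustrated as follows (a hedged sketch: the Nelder-Mead minimization and linear interpolation are our choices for illustration, not the authors' robust method):

```python
import numpy as np
from scipy.optimize import minimize

def center_of_symmetry(H_up, M_up, H_lo, M_lo):
    """Estimate the loop center (H0, M0), assuming inversion symmetry
    about that point: M(H0 + h) - M0 = -(M(H0 - h) - M0)."""
    def mismatch(c):
        H0, M0 = c
        # Invert the lower branch through the trial center
        H_inv = 2.0 * H0 - H_lo[::-1]
        M_inv = 2.0 * M0 - M_lo[::-1]
        order = np.argsort(H_inv)
        M_inv_up = np.interp(H_up, H_inv[order], M_inv[order])
        return np.mean((M_up - M_inv_up) ** 2)
    res = minimize(mismatch, x0=[0.0, 0.0], method="Nelder-Mead")
    return res.x  # best fit (H0, M0)
```

Subtracting (H0, M0) recenters the loop without otherwise changing its shape, apart from the minor smoothing introduced by interpolation.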
Even when all of these possible data problems are recognized and corrected, the estimation of parameters describing high-field magnetic behavior is accompanied by much larger uncertainties than are commonly recognized. The fitting problem is mathematically ill conditioned, involving strongly intercorrelated basis functions, whose partial interchangeability reduces our ability to determine unique best-fit parameter values from experimental data sets that inevitably contain at least some random noise. MS and χHF standard errors may be quantified through standard linear regression techniques, but in many cases the uncertainty due to incomplete saturation is much larger. Inclusion of nonlinear terms in high-field M(H) fitting results in an increase, rather than a decrease, in the statistical estimates of parameter uncertainty. The best way that we have found to characterize the range of acceptable solutions is by bootstrap analysis, involving repeated application of nonlinear least squares methods on randomly drawn samples of data from the selected high-field range. We expect that Monte Carlo tests based on adding random noise to the data (as done, e.g., by von Dobeneck) would yield similar results.
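A bootstrap of this kind can be sketched as follows (a minimal Python illustration of the general technique; scipy's Levenberg-Marquardt fitting and the starting values are our choices, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

def m_high_field(H, Ms, chi, alpha, beta):
    # Approach-to-saturation model M(H) = Ms + chi*H + alpha*H**beta
    return Ms + chi * H + alpha * H ** beta

def bootstrap_fit(H, M, n_boot=200, seed=0):
    """Refit the ill-conditioned nonlinear model to bootstrap
    resamples of the high-field data; the spread of the returned
    parameter sets characterizes their (covariant) uncertainty."""
    rng = np.random.default_rng(seed)
    p0 = [M.max(), 0.0, -0.1, -1.0]  # rough starting guesses (ours)
    fits = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(H), len(H))  # resample with replacement
        try:
            popt, _ = curve_fit(m_high_field, H[idx], M[idx],
                                p0=p0, maxfev=10000)
            fits.append(popt)
        except RuntimeError:
            continue  # skip non-convergent resamples
    return np.array(fits)
```

Percentiles of the columns of the returned array give bootstrap confidence intervals for Ms, chi, alpha, and beta; for ill-conditioned data the interval for beta is typically very wide, as the results above illustrate.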
In the MORB case that we illustrate, with a relatively high-quality data set and with good reason to suspect significant nonlinearity in the high-field M(H), we find that the error bars for the exponent β exceed the range that we allowed as “reasonable,” and therefore that β is essentially unknowable with precision greater than that (at least in the range of fields that we analyzed). This, in turn, may prevent accurate estimation of MS and χHF and their uncertainties without further measurements in stronger fields, as concluded by Fabian, and as shown by our synthetic basalt data. These parameters can be accurately determined by nonlinear M(H) fitting only when the field range of the measurements extends well into the approach-to-saturation region, as for our oxidized magnetite powder.
We thank Subir Banerjee for inspiration, support, and the opportunity to pursue research in a wide range of interesting areas. This paper was significantly improved by the comments and suggestions of Julie Bowles, who also contributed one of the data sets. We gratefully acknowledge the perceptive and constructive critiques of Karl Fabian, Pavel Doubrovine, Lisa Tauxe, and an anonymous reviewer, as well as the Editor, John Tarduno. All of them contributed important ideas for improvement. This is publication 0910 of the Institute for Rock Magnetism, which is supported by grants from the Instruments and Facilities Program, Division of Earth Science, National Science Foundation.