Second-Order Calibration and Higher
Published Online: 15 SEP 2006
Copyright © 2000 John Wiley & Sons, Ltd. All rights reserved.
Encyclopedia of Analytical Chemistry
How to Cite: Fleming, C. M. and Kowalski, B. R. 2006. Second-Order Calibration and Higher. Encyclopedia of Analytical Chemistry.
Most analytical chemists recognize calibration as the quantitation of analytes by relating analyte signal to analyte concentration, although it is also possible to obtain information about other characteristics of a sample through calibration. In the calibration process, a number of samples with known values of the characteristic to be determined, e.g. concentration, are measured. These are reference measurements, and the samples are known as calibration standards. Traditionally, each measurement has consisted of a single number (a univariate or scalar measurement); these are plotted against the known concentrations to generate a “calibration curve”. (An example of a scalar measurement is a pH value or the area under a chromatographic peak.) When unknown (test) samples are measured under the same experimental conditions as the standards, their concentrations may be determined by inverting the calibration curve at their measured signals.

However, a significant disadvantage of univariate calibration is that the sensor must be fully selective for the analyte of interest, since any interfering species that may be present cannot be detected when measurements are scalar. This limitation led to the development of “first-order” calibration methods. These techniques use the relationship between an array of measurements (such as a chromatogram or spectrum) and the analyte concentration to develop a calibration model from which analyte concentrations in test samples can be determined. The array of measurements is a vector, known as a first-order tensor, and the calibration model is the analog of the calibration curve. Since first-order calibration uses many measurements per sample to generate a model, rather than a single measurement as in the univariate case, it has a number of advantages.
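As a simple illustration of the univariate case, a calibration curve can be fitted to standards and then inverted to predict an unknown concentration. The concentrations and signals below are made-up numbers for illustration, not data from this article:

```python
import numpy as np

# Hypothetical calibration standards: known concentrations (e.g. mg/L)
# and their measured scalar signals (e.g. chromatographic peak areas)
conc   = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
signal = np.array([0.02, 0.51, 1.01, 1.99, 4.03])

# Fit the calibration curve: here a straight line, signal = a*conc + b
a, b = np.polyfit(conc, signal, 1)

# A test sample measured under the same conditions is quantitated by
# inverting the curve at its measured signal
unknown_signal = 1.50
unknown_conc = (unknown_signal - b) / a
print(f"slope = {a:.3f}, predicted concentration = {unknown_conc:.2f}")
```

Note that this inverse prediction is only trustworthy if the sensor responds to the analyte alone; an undetected interferent would bias `unknown_signal`, and hence the predicted concentration, with no warning.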
For example, multianalyte analysis becomes possible as long as a standard is available for each analyte present in a sample, and outlier detection is also feasible. These methods have become popular in recent years, and include partial least squares (PLS) and principal component regression (PCR). Intuitively, when more data are available per sample, more information may be extracted, and this is the source of the success of first-order methods.

It therefore follows that, if an analyte signal consists of a matrix of data (also known as a second-order tensor), even more advantages exist. [Examples of instruments that produce a matrix of data for each sample analyzed include liquid chromatography/ultraviolet detection (LC/UV) and gas chromatography/mass spectrometry (GC/MS).] This situation allows powerful calibration techniques, known as second-order methods, to be used. Their primary advantage, known as the “second-order advantage”, is that analytes can be quantitated even in the presence of unmodeled interferents: the calibration standards do not have to contain any information about the interfering species in the sample. This is of particular value when complex samples are analyzed, as they need not be fully characterized in order to quantitate a single analyte. The disadvantage of these methods, however, is their complexity: there are often decisions to be made, such as the values to assign to various parameters, and each method performs better in some analysis situations than in others. The most common second-order analysis methods are alternating least-squares (ALS) methods, such as parallel factor analysis (PARAFAC) and multivariate curve resolution (MCR), and eigenvalue–eigenvector-based methods, such as the generalized rank annihilation method (GRAM) and direct trilinear decomposition (DTD).
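To make the second-order advantage concrete, the following is a minimal GRAM-style sketch on simulated bilinear data. All profiles, concentrations, and the rank-2 truncation are invented for illustration, and real GRAM implementations differ in detail; the point is that a single standard containing only the analyte suffices to quantitate it in a test matrix that also contains an uncalibrated interferent:

```python
import numpy as np
from scipy.linalg import svd, eig

t = np.linspace(0, 1, 50)   # e.g. elution-time axis
w = np.linspace(0, 1, 40)   # e.g. wavelength axis

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

# Hypothetical bilinear component profiles (simulated, not instrument data)
x_a, y_a = gauss(t, 0.35, 0.06), gauss(w, 0.30, 0.08)   # analyte
x_i, y_i = gauss(t, 0.55, 0.06), gauss(w, 0.65, 0.08)   # interferent

c_std  = 1.0    # analyte concentration in the standard
c_test = 0.42   # "unknown" analyte concentration in the test sample
c_int  = 0.80   # interferent present in the test sample only

M1 = c_std * np.outer(x_a, y_a)                                # standard
M2 = c_test * np.outer(x_a, y_a) + c_int * np.outer(x_i, y_i)  # test sample

# Truncated SVD of the summed matrices gives a common rank-2 subspace
U, s, Vt = svd(M1 + M2)
U, V = U[:, :2], Vt[:2].T

# Project both data matrices into that 2-D subspace
T1, T2 = U.T @ M1 @ V, U.T @ M2 @ V

# Generalized eigenvalues of (T1, T2) are the per-component concentration
# ratios c_std/c_test; the interferent, absent from the standard, yields an
# eigenvalue of zero, so the largest eigenvalue belongs to the analyte
lam = eig(T1, T2)[0].real
c_est = c_std / lam.max()
print(f"estimated analyte concentration: {c_est:.4f}")
```

The estimate matches `c_test` even though the interferent never appears in the calibration standard; with scalar or first-order data and this same single standard, the interferent's contribution could not be separated out.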