## INTRODUCTION

BONE MINERAL DENSITY (BMD) is the primary determinant of skeletal fragility and, as such, plays a central role in the diagnosis of osteoporosis. Clinicians, however, still cannot use BMD measurements as readily as would be desirable. There are several reasons for this difficulty, but primary among them is the systematic difference in reported BMD among the manufacturers of densitometers. Although the causes of these discrepancies are many, the goal of this paper is not to discuss the biological or technical contributors to the problem,1–3 but rather to introduce an appropriate algorithm for converting measurements from different machines to a universal standard scale on which measurements of the same subject on different machines are comparable.

The first attempt at universal standardization of BMD was made on dual-energy X-ray absorptiometry (DXA) measurements. The International DXA Standardization Committee (IDSC) sponsored a cross-calibration study which measured 100 women on three DXA scanners made by three different manufacturers.1,4 The data showed that the measurements on the three machines were highly correlated and linearly related to one another; hence, simple linear regression equations were derived for converting BMD measurements on any one machine to another. To avoid designating any of the machines as the “gold standard,” the IDSC study then went on to derive a universal standardized measurement called standardized bone mineral density (sBMD). The aim was to convert each manufacturer's BMD to sBMD using a formula such that the sBMD would give “approximately the same value when scanning one patient on all machines” and to “peg” the values to the “true” density of a reference phantom.1 Since no standard statistical procedure was readily available for deriving the universal standard, the investigators developed an ad hoc method, which, unfortunately, had several problems. In particular, systematic differences remained between the same patient's sBMD on different machines. In this paper, we evaluate the extent of this bias in the original cross-calibration data and go on to show that the problem may be negligible for standardizing measurements made on machines other than those used in the cross-calibration study.
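The pairwise conversion step can be illustrated with a short NumPy sketch: fit an ordinary least-squares line to paired readings of the same subjects on two scanners, then use it to map one machine's scale onto the other's. The data below are synthetic stand-ins (the IDSC data are not reproduced here), and the machine labels are hypothetical.

```python
import numpy as np

# Hypothetical paired spine-BMD readings (g/cm^2) for the same 80 subjects
# on two scanners, "A" and "B"; slopes, offsets, and noise are invented.
rng = np.random.default_rng(1)
true_bmd = rng.normal(1.0, 0.15, 80)
bmd_a = 0.95 * true_bmd + 0.02 + rng.normal(0, 0.01, 80)
bmd_b = 1.05 * true_bmd - 0.03 + rng.normal(0, 0.01, 80)

# Readings should be highly correlated if a linear conversion is sensible.
r = np.corrcoef(bmd_a, bmd_b)[0, 1]

# Ordinary least-squares conversion line: bmd_b ~ slope * bmd_a + intercept.
slope, intercept = np.polyfit(bmd_a, bmd_b, 1)
converted = slope * bmd_a + intercept  # machine-A readings on machine-B scale
```

Note that each ordered pair of machines gets its own equation; converting A to B and B to A with separately fitted lines are not exact inverses of one another, which is one motivation for a single standardized scale.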

The IDSC conversion equations from spine BMD to sBMD have now been implemented in new DXA scanners.5 The machine-generated sBMDs are intended primarily for clinical use worldwide. These conversions, no matter how good, were optimized only for the three specific machines used in the original cross-calibration study. Although we will show that clinical application of the IDSC conversions is appropriate, researchers who wish to standardize multiple machines in their own laboratories for research studies should derive their own conversions that are optimized for their own machines. To this end, we propose a new conversion algorithm that improves upon the IDSC algorithm by minimizing the differences in sBMD on the same subjects and removing all residual biases.

Our proposed algorithm is to be used only *after* standard regression analysis has established linearity between BMD measured on all possible pairs of machines, as was done in the IDSC study. The major steps of the proposed algorithm are: (1) subtract the mean BMD from the individual BMD measured on each machine; (2) multiply the mean-adjusted BMD by a factor specific to each machine such that the total squared difference among machines is minimized; (3) add a common constant to each multiple of mean-adjusted BMD to obtain sBMD such that the mean sBMD of the “pegging” phantom from all machines equals its theoretical “true” density.
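The three steps above can be sketched in NumPy under some stated assumptions: the data are synthetic, the phantom readings and its "true" density are invented, and the scale factors in step 2 are normalized to have mean 1 so that the minimization has a nontrivial solution (the paper's actual normalization may differ). With that constraint, minimizing the total squared difference among machines is a quadratic program with a closed-form solution.

```python
import numpy as np

# Hypothetical cross-calibration data: rows = subjects, columns = machines.
rng = np.random.default_rng(0)
true_bmd = rng.normal(1.0, 0.15, size=50)
bmd = np.column_stack([
    0.95 * true_bmd + 0.02 + rng.normal(0, 0.01, 50),  # machine 1
    1.05 * true_bmd - 0.03 + rng.normal(0, 0.01, 50),  # machine 2
    1.00 * true_bmd + 0.00 + rng.normal(0, 0.01, 50),  # machine 3
])
phantom_true_density = 1.000                   # assumed "true" phantom density
phantom_bmd = np.array([0.95, 1.02, 1.00])     # phantom reading on each machine

n_subj, n_mach = bmd.shape

# Step 1: subtract each machine's mean BMD.
means = bmd.mean(axis=0)
centered = bmd - means

# Step 2: choose per-machine factors b minimizing the total squared
# difference  sum_{i<j} ||b_i x_i - b_j x_j||^2 = b' M b,  where
# M_ii = (K-1) x_i'x_i and M_ij = -x_i'x_j, subject to mean(b) = 1.
S = centered.T @ centered
M = np.diag((n_mach - 1) * np.diag(S)) - (S - np.diag(np.diag(S)))
ones = np.ones(n_mach)
b = np.linalg.solve(M, ones)        # direction of the constrained minimizer
b *= n_mach / (ones @ b)            # rescale so mean(b) = 1
scaled = centered * b

# Step 3: add a common constant so the mean phantom sBMD across machines
# equals the phantom's theoretical "true" density.
c = phantom_true_density - np.mean(b * (phantom_bmd - means))
sbmd = scaled + c
```

Because step 1 centers each machine and step 3 adds the same constant everywhere, every machine's mean sBMD is identical by construction; step 2 then shrinks the subject-by-subject disagreement between machines down toward measurement noise.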

We will compare the performance of the different algorithms by applying them to the data from the original IDSC study and to an external data set.