See also Shermock KM, Streiff MB, Pinto BL, Kraus P, Pronovost PJ. Novel analysis of clinically relevant diagnostic errors in point-of-care devices. J Thromb Haemost 2011; 9: 1769–75; and Holland L. Novel analysis of clinically relevant diagnostic errors in point-of-care devices: a rebuttal. This issue, pp 321–2.

We thank Dr Holland for his interest in our work and his thought-provoking comments [1]. It is imperative for analysts to prioritize the needs of end-users (i.e. the clinicians and their patients). In line with clinicians’ expectations, Shermock’s method [2,3] aspires to the top of ‘The Stockholm Conference Hierarchy’ by framing the assessment of analytic performance in terms of its effect on clinical decision-making [4–7]. Therefore, although we agree that our novel method of analysis is not ‘complete’, it should become a key part of the analytic strategy to assess the quality of clinical measurements, because its frame and interpretation are aligned with clinical needs and expectations.

Our method led to the conclusion that something was amiss with our point-of-care (POC) devices, a conclusion that Dr Holland agrees with, whereas traditional analysis did not [3]. One of our chief concerns with traditional methods of assessing analytic performance is that they appear to be designed both by and for clinical chemists, not clinicians. We do not intend to discredit traditional analytic performance assessments, which typically focus on linear regression, correlation, and average bias. However, our point is that the clinical meaning of these parameters is not self-evident, and it is unclear how to coax clinical meaning out of them. Too often, this leads to the unfortunately easy step of taking the correlation coefficient to be the single criterion by which POC devices are judged. Our method reports explicitly clinical parameters, and is therefore more relevant and easier to interpret for end-users.

In our case, it was simultaneously true that a POC device had a correlation of 0.9 with our clinical laboratory (good!), never reported seven commonly encountered International Normalized Ratio (INR) values (what!?), and led to the wrong clinical decision an estimated 30% of the time (bad!). Only our method provided clinicians with directly meaningful information about the performance of the devices that they were using to dose potentially lethal medications. Armed with this new information, the clinicians emphatically revolted.

We acknowledge that both POC and clinical laboratory measures are associated with imprecision. This imprecision will result in different estimates of agreement between the POC and the laboratory upon repeated assessments. However, the normal distribution of random error in both measures means that agreement status changes would occur in both directions (i.e. from agreement to disagreement and vice versa) and largely cancel each other out upon repeated assessments. We are formally assessing the impact of random analytic error empirically and through a modeling exercise, and look forward to reporting our findings. We stand by our assessment of clinical disagreement as being the best possible estimate that can be derived from these data.
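The cancellation argument can be sketched in a small modeling exercise. All quantities below are assumptions for illustration (hypothetical true values, assumed imprecision for each method, an assumed 2.0–3.0 band); the point is only that re-measuring with fresh random error flips some pairs from agreement to disagreement and others back, so the overall disagreement rate is stable across replicates:

```python
import random

# Hedged sketch of the cancellation argument, with hypothetical numbers.
random.seed(1)
n = 2000
true_inr = [random.gauss(2.5, 0.7) for _ in range(n)]   # assumed true values

def in_range(x, low=2.0, high=3.0):
    return low <= x <= high

def disagreement_rate(sd_lab=0.15, sd_poc=0.25):
    """One replicate: add fresh random error to both measures, then count
    pairs classified on opposite sides of the therapeutic band."""
    flips = 0
    for t in true_inr:
        lab = t + random.gauss(0, sd_lab)
        poc = t + random.gauss(0, sd_poc)
        if in_range(lab) != in_range(poc):
            flips += 1
    return flips / n

rates = [disagreement_rate() for _ in range(20)]
print(f"disagreement rate across 20 replicates: "
      f"{min(rates):.3f}-{max(rates):.3f}")
```

Because the flips occur in both directions, the replicate-to-replicate range of the disagreement rate is narrow, consistent with the claim that random error largely cancels out upon repeated assessments.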

The example presented in which a sample INR is (magically) known to be 1.8 is not directly analogous to our study, where the true measures of both values are unknown. However, if, somehow, an INR value is known to be 1.8, then reported results greater than 1.8 should absolutely be regarded as diagnostic error, because such measures would probably lead to the wrong clinical decision. To excuse a device on the basis of its own inherent variability/imprecision seems circular, and is certainly not consistent with how clinicians think or what they want to know about the performance of their POC devices.

Of course, the true value of the INR is never known in real practice. The laboratory measure is commonly regarded as the clinically accepted standard measure. Often, this is reflected in institutional policies, which call for suspect INR values to be verified by the laboratory. Even Dr Holland acknowledges this when he suggests that ‘extreme values and those near critical decision points be confirmed by laboratory analysis’. As the laboratory measure is considered to be an accepted standard, we believe that it is important to know the extent to which a surrogate measure will lead to a different decision.

Dr Holland suggests that, perhaps, the value in POC INR measurement may ‘lie in distinguishing between patients who likely need no therapeutic change (laboratory INR will be 1.9–3.3) vs. those who likely do (laboratory INR will be < 1.9 or > 3.3)’. This is exactly the frame of our analysis, and we demonstrated that the POC INR device in our clinics was poor at making this distinction. Among the patients for whom the laboratory INR was < 1.9, over 50% were misclassified as needing no therapeutic change [3]. This is the main point of our analysis, and it is largely why the POC device in question is no longer used in our clinics.
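This classification frame reduces to a simple conditional tally. The sketch below uses hypothetical values throughout (the bias and noise are assumptions, not the clinic data) to show how one would estimate the quantity reported above: among patients whose laboratory INR is below 1.9, the fraction the POC device classifies as needing no therapeutic change:

```python
import random

# Sketch of the classification frame (hypothetical numbers, not the
# clinic data): "needs change" means laboratory INR < 1.9 or > 3.3.
random.seed(2)
n = 4000
lab = [random.gauss(2.4, 0.9) for _ in range(n)]     # hypothetical lab INRs
poc = [x + random.gauss(0.5, 0.4) for x in lab]      # assumed POC bias + noise

def needs_change(inr, low=1.9, high=3.3):
    return inr < low or inr > high

# Among patients with lab INR < 1.9, how often does the POC value fall
# inside the 1.9-3.3 band, i.e. get misclassified as "no change"?
subtherapeutic = [(a, b) for a, b in zip(lab, poc) if a < 1.9]
missed = sum(not needs_change(b) for _, b in subtherapeutic) / len(subtherapeutic)
print(f"lab INR < 1.9 but POC says 'no change': {missed:.0%}")
```

Under these assumed error characteristics, a large share of subtherapeutic patients are missed, mirroring the kind of misclassification rate that motivated the removal of the device from the clinics.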

Disclosure of Conflict of Interests

The authors state that they have no conflict of interest.

References
