It is the capacity to generate thrombin, and the enzymatic work that thrombin does, that determines blood coagulability. Therefore, measurement of the enzymatic work potential of thrombin provides a method for quantifying the composite effect of the multiple factors that determine coagulation capacity. The application of measurement of thrombin generation to clinical decision making has been hampered by numerous technical difficulties and pitfalls, many of which have now been overcome. Technical advances now permit rapid, reproducible measurement. A review of clinical studies performed to date indicates the need to appreciate the precise methodology used in each case and the need to consider standardisation in future studies. Applying thrombin generation measurement to clinical decision making will require up-to-date estimates of risks relating to the disorder and the intended therapeutic intervention. Ultimately management studies will be required if the clinical utility of measurement of thrombin generation is to be proven.
The laboratory analysis of coagulation has developed along a traditional scientific reductionist path, in the sense that blood coagulation is understood by dissection of the coagulation system into its individual components. The translation of this scientific method to clinical medicine has resulted in a practice whereby, depending on the clinical context, the amount and activity of a few potentially relevant factors (procoagulant or anticoagulant) are measured and from these measurements a prediction of coagulation potential is made. It might be argued that the clinical laboratory has come as far as it can with this reductionist approach; all that can be achieved is increasing accuracy and decreasing imprecision. A different approach is required if the clinical utility of the coagulation laboratory is to be significantly advanced; clinical utility being the likelihood that a test result will lead to an improved health outcome. A more direct global measurement of coagulation potential might be such an advance. Whilst progress in molecular biology allows an ever increasing number of genetic factors that influence haemostasis to be identified, the measurement of the coagulation phenotype, rather than the genotype, might provide a more clinically relevant laboratory assessment (Hemker & Beguin, 2000; Brummel-Ziedins et al, 2004; Mann et al, 2004; Regnault et al, 2004).
Why measure thrombin?
It is the localisation and rapid amplification of proteases responsible for thrombin generation at sites of vascular injury, and protease-inhibition elsewhere in the circulation, that limits bleeding and prevents thrombosis. It is the capacity to generate thrombin, and the enzymatic work that thrombin does, that determines blood coagulability. Therefore, measurement of thrombin generation, and in particular the enzymatic work potential of thrombin, provides a method for quantifying the composite effect of the multiple factors that determine coagulation capacity and the influence of the environment on these factors. It is an acknowledgement of the importance of the ‘work potential’ of thrombin that Hemker named his iteration of a thrombin generation assay the endogenous thrombin potential (ETP) (Hemker et al, 1993).
From a clinical perspective it would be extremely useful to have a laboratory test that accurately predicts an individual's coagulation potential. Such a test ideally should detect prothrombotic states and correlate with clinical outcomes, correlate with clinically observed bleeding in congenital and acquired states of reduced haemostatic capacity and indicate quantitatively the effect of drugs on this capacity (procoagulant and anticoagulant), regardless of the class of drug, i.e. whether it be a heparin derivative or a direct thrombin inhibitor, a vitamin K antagonist or an antiplatelet drug (Hemker & Beguin, 2000; Mannucci, 2002; Mann et al, 2003).
Normal haemostasis is characterised by a dynamic equilibrium between the procoagulant and anticoagulant components of the coagulation network. This equilibrium, and the potential of this equilibrium to shift in response to a stimulus, is currently assessed by measuring one, or sometimes a few factors, and then predicting what the effect of those factor levels or activities will be on the potential of the whole system to generate thrombin. Furthermore, a dichotomous testing strategy is often employed, whereby it is simply determined if a factor is either normal or abnormal, for example whether the factor V Leiden mutation is present or absent. It seems improbable that a dichotomous testing strategy will predict clinical outcome with any useful accuracy when one considers the multiple genetic factors influencing thrombin generation and the effect of gene–environment interaction. In fact, a low clinical utility for testing for factor V Leiden has now been proven; testing does not predict likelihood of recurrence of venous thrombosis (Rintelen et al, 1996; Eichinger et al, 1997) and there is a low absolute predictive value for thrombosis in asymptomatic carriers, such that case-finding by family study is not considered to be generally useful (Middeldorp et al, 2001; Simioni et al, 2002).
Different individuals have different thrombin generating potentials and hence different coagulation phenotypes. It is likely that an individual's phenotype changes with time and circumstance, for example with advancing age, pregnancy, use of oestrogen-containing hormonal preparations and drug therapy. Consequently, it is likely that error will result from attempts to predict the phenotype from ‘bit-part measurement and speculative reconstruction’. Direct measurement of the thrombin potential is required if the relationship between an individual's ability to generate thrombin and their coagulation phenotype is to be explored.
How is thrombin generated?
The classical view of blood coagulation with separate extrinsic and intrinsic pathways initiated by either tissue factor or contact with an anionic surface does not model physiological coagulation. It is now appreciated that coagulation does not occur as a consequence of linear sequential enzyme activation pathways but rather via a network of simultaneous interactions with regulation and modulation of these interactions during the thrombin generation process itself. The demonstration of activation of factor IX by tissue factor and activated factor VII (factor VIIa) (Josso & Prou-Wartelle, 1965; Osterud & Rapaport, 1977) and factor XI by thrombin in a factor XII-independent manner (Gailani & Broze, 1991) has led to the current model in which blood coagulation is initiated by transient exposure of tissue factor by damaged endothelium, resulting in subnanomolar amounts of thrombin (Brummel et al, 2002). Furthermore, generation of thrombin does not occur in the fluid phase but on phospholipid surfaces. Thrombin is the product of an enzyme amplification network in which the inactive zymogen forms of proteases and cofactors are activated by proteolytic cleavage. Thrombin amplifies its own production by activating the cofactors (factors V and VIII) and also activating cells that provide the phospholipid surface required for assembly of macromolecular enzymatic complexes (tenase and prothrombinase complexes) (Bevers et al, 1991). In vivo, the initiation pathway of the network generates nanomolar concentrations of thrombin via factor VIIa-driven factor Xa formation (extrinsic-tenase). This initial thrombin activity is necessary to prime the system for a full thrombin explosion. 
This pathway is rapidly shut down by Tissue Factor Pathway Inhibitor (TFPI) (Rao & Rapaport, 1987; Broze et al, 1988) and the full thrombin explosion is then dependent on factors IXa- and XIa-driven factor Xa formation (intrinsic tenase) (Hoffman et al, 1995; Hoffman & Monroe, 2001; Brummel et al, 2002).
Using a whole blood thrombin generation assay it can be shown that, after initiation of thrombin generation by a low concentration of Tissue Factor (TF), there is a lag time to thrombin generation of at least 4 min and then an exponential rise in thrombin concentration as the thrombin explosion develops (Rand et al, 1996). Conversion of fibrinogen to fibrin and formation of clot occurs when less than 5% of the total thrombin has been generated. Clot formation is the endpoint in standard clot-based assays, such as the prothrombin time (PT) and activated partial thromboplastin time (APTT), indicating that the PT and the APTT are influenced by less than 5% of the thrombin generated in a sample. Using a synthetic model of coagulation Mann and colleagues have illustrated how blood coagulation is dependent not only on the components of the haemostatic network but equally the connectivity between the components and the dynamics of these connections (Butenas et al, 1999; Brummel et al, 2002). Only extreme changes in levels of individual factors will affect clot-based assays, but even modest changes significantly affect the thrombin-generating capacity to an extent that might be expected to have clinical consequences. They have suggested that there is likely to be a distribution of thrombin generating capacity in the population that is clinically relevant without this being appreciable in conventional clotting tests.
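The course just described (a lag of several minutes, an explosive rise, and clot formation before 5% of the total thrombin has been generated) can be made concrete with a toy curve. The gamma-variate shape and every parameter below are invented for illustration; they are not fitted to the whole blood data cited above.

```python
import numpy as np

# Illustrative (not experimental) thrombin generation curve: a gamma-variate
# shape with a lag phase, qualitatively matching the lag / explosion / decay
# profile described for whole blood assays. All parameters are assumptions.
LAG_MIN = 4.0          # lag time before measurable thrombin (min)
SHAPE, SCALE = 3.0, 1.5

def thrombin_nM(t):
    """Free thrombin concentration (nM) at time t (min); zero during the lag."""
    s = np.clip(t - LAG_MIN, 0.0, None)
    return 300.0 * (s / (SHAPE * SCALE)) ** SHAPE * np.exp(SHAPE - s / SCALE)

t = np.linspace(0.0, 30.0, 3001)
curve = thrombin_nM(t)
cumulative = np.cumsum(curve) * (t[1] - t[0])   # running thrombin-time integral
etp = cumulative[-1]                            # total area = 'work potential'

# Time at which 5% of the total thrombin-time integral has been produced:
# clot formation (the PT/APTT endpoint) occurs before this point, i.e. the
# clot-based assays see only the earliest fraction of thrombin generation.
t_5pct = t[np.searchsorted(cumulative, 0.05 * etp)]
print(f"peak thrombin ~{curve.max():.0f} nM at {t[np.argmax(curve)]:.1f} min")
print(f"5% of the thrombin-time integral reached by {t_5pct:.1f} min")
```

On these made-up parameters the 5% point falls roughly two and a half minutes before the thrombin peak, which is the sense in which the PT and APTT are blind to most of the thrombin explosion.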
How can thrombin generation be measured?
There is a fundamental difference between measuring the capacity of the coagulation network to generate thrombin in vitro in a test tube and identifying thrombin generation that has taken place in vivo. Measuring the capacity to generate thrombin in a test tube in response to a pre-determined activation stimulus is quantitative and will indicate an individual's thrombin generating potential and, possibly, their likelihood of developing either spontaneous or provoked bleeding or thrombosis. By comparison, measurements to identify in vivo thrombin generation, for example fibrin-degradation products, such as d-dimer, activation peptides, such as prothrombin fragment F1.2, or enzyme-inhibitor complexes, such as thrombin-antithrombin, are influenced by factors other than thrombin generation, such as product clearance times and fibrinolytic activity (Boisclair et al, 1990). These measurements are therefore unlikely to resolve individual degrees of coagulability with any accuracy. Even in patients with a prothrombotic tendency there will be prolonged periods, probably the majority of the time, when hypercoagulability is a potential rather than an actual event (Lee et al, 1994). Furthermore, intermittent increased thrombin generation is a normal physiological event. The absence of hypercoagulability, as determined by d-dimer measurement, following cessation of oral anticoagulant therapy after a first episode of venous thromboembolism may have a useful negative predictive value (Palareti et al, 2002, 2003; Eichinger et al, 2003; Le Gal & Bounameaux, 2004) and this is now being assessed in a management study (PROLONG). However, the presence of hypercoagulability determined from d-dimer measurement has a low positive predictive value and so it is still not currently possible to target long-term anticoagulant therapy at those patients in whom it would be most beneficial.
This may relate to the difficulty of distinguishing a pathological, as opposed to a physiological, thrombin response at any particular time point, and consequently a high false-positive rate. In contrast, measurement of the thrombin generating potential offers the opportunity to identify potential pathological hypercoagulability at any time point. As the method is quantitative, the normal response to a pre-determined activation stimulus can be determined and hence a pathological response defined, reducing the likelihood of a false positive result. It remains to be determined whether biomarkers of thrombin generation and measurement of thrombin generating potential will be complementary. Clinical utility may derive from combined measurement.
Measuring thrombin generation in the clinical laboratory
The original concept of a thrombin generation test was described by two groups in 1953, one at the Radcliffe Infirmary in Oxford (Macfarlane & Biggs, 1953) and one at the Hammersmith Hospital, London (Pitney & Dacie, 1953). In both assays thrombin generation was triggered in the primary reaction tube, containing plasma with or without platelets, and subsamples were taken at regular intervals into secondary indicator tubes containing a fibrinogen solution. The clotting times of the fibrinogen solution were used to estimate thrombin activity after calibration against a thrombin solution of known concentration. This was a two-stage assay that was difficult to perform and imprecise. Clot formation in the primary reaction tube necessitated removing the clot with a wooden stick, which resulted in volume imprecision because of obstruction of the pipette tip with fragments of clot and imprecision of the subsampling time because of the need to remove the clot during the procedure. Substrate exhaustion at high thrombin concentration and difficulty of end-point detection (clot formation) at low thrombin concentration are further major sources of error when using a fibrinogen solution to register thrombin activity.
In the mid-1980s Hemker and colleagues revived the thrombin generation assay by introducing several modifications that made it easier to perform and reduced imprecision (Hemker et al, 1986). The fibrinogen solution was replaced by a chromogenic substrate and the primary plasma sample was defibrinated. This reduced error caused by substrate exhaustion and endpoint detection, as well as error related to the timing and volume of subsampling. The addition of a time-recording pipette linked to computerised data capture further minimised imprecision related to sampling time. However, the method was still a two-stage assay that was difficult to perform without specialised equipment, and the calculation of thrombin activity was complicated by the use of a chromogenic substrate, which gives an erroneous measurement of the thrombin-decay process (Hemker et al, 1986). Thrombin is neutralised predominantly by antithrombin and α2-macroglobulin (Fischer et al, 1981). Antithrombin is an active-site inhibitor whereas α2-macroglobulin neutralises thrombin activity via an exosite interaction that prevents association with macromolecular substrates. Chromogenic substrates used to detect thrombin activity are small molecules that access the active site of thrombin bound to α2-macroglobulin. Consequently, there is continued cleavage of substrate by α2-macroglobulin-complexed thrombin. In contrast, active-site-inhibited thrombin is incapable of cleaving the chromogenic substrate (Fig 1). Thus the thrombin generation curve derived from a typical small synthetic substrate is a composite of the thrombin-time integrals produced by free thrombin and the complex of thrombin with α2-macroglobulin (Fig 2). In order to calculate the area under the free thrombin curve, which is the physiological enzymatic work potential, an algorithm was used to derive the free thrombin-time integral from the observed thrombin generation curve.
This enabled calculation of the area under the free thrombin–time curve which Hemker designated the ETP (Hemker et al, 1986).
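The correction step just described can be sketched in a few lines. The model, the rate constant and the synthetic curves below are illustrative assumptions, not Hemker's published algorithm or data; they only show the idea of stripping the α2-macroglobulin contribution before integrating.

```python
import numpy as np

# Sketch of a correction that removes the alpha2-macroglobulin-thrombin
# contribution from an observed amidolytic-activity curve, leaving the
# free-thrombin curve whose area is the ETP. K_A2M and the synthetic
# 'observed' curve are invented values for illustration only.
K_A2M = 0.2   # per min: assumed rate of alpha2M capture of free thrombin

def free_thrombin(observed, dt, k=K_A2M):
    """Recover free thrombin from total substrate-converting activity.

    Model: observed(t) = free(t) + a2m(t), with d(a2m)/dt = k * free(t),
    because alpha2M-bound thrombin still cleaves the small substrate but
    does no further clotting work.
    """
    free = np.empty_like(observed)
    a2m = 0.0
    for i, obs in enumerate(observed):
        free[i] = max(obs - a2m, 0.0)
        a2m += k * free[i] * dt        # alpha2M-thrombin accumulates
    return free

# Build a synthetic observed curve from a known free-thrombin curve ...
dt = 0.01
t = np.arange(0.0, 30.0, dt)
true_free = 250.0 * np.exp(-((t - 8.0) ** 2) / 8.0)   # toy peak at 8 min (nM)
a2m_true = K_A2M * dt * np.cumsum(true_free)          # its alpha2M 'shadow'
observed = true_free + a2m_true

# ... and check that the correction recovers its area (the ETP):
etp_est = free_thrombin(observed, dt).sum() * dt
etp_true = true_free.sum() * dt
print(f"true ETP {etp_true:.0f}, recovered {etp_est:.0f} nM*min")
```

The point of the exercise is visible in the end-signal: the observed curve never returns to zero (the α2M-thrombin plateau persists), whereas the recovered free-thrombin curve does, and only the latter's area is the enzymatic work potential.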
A further development of measurement of the ETP was the use of a slow reacting substrate, which permitted continuous registration of thrombin activity in the primary reaction tube. This converted the method to a much simpler one-stage assay (Hemker et al, 1993). The method still required defibrination of the plasma sample and it is now realised that removal of fibrinogen has a profound effect on the thrombin generation curve with a lower thrombin peak and a higher end-signal from the complex of thrombin with α2-macroglobulin (Hemker et al, 2003). Thrombin generation measured in defibrinated plasma with a chromogenic substrate is up to 50% higher than in plasma from which fibrinogen/fibrin has not been removed.
The replacement of the chromogenic substrate with a slow reacting fluorogenic substrate enabled continuous measurement of thrombin generation without the need for defibrination, as the signal from the fluorophore is not quenched by turbidity. Consequently, it is also possible to measure thrombin generation in platelet rich plasma and to determine the effect of other cellular elements. In contrast to the chromogenic method, in fluorescence experiments significantly more thrombin activity is found in the presence of fibrin than in its absence (Hemker et al, 2003). This is probably because, as the sample is not defibrinated, fibrin-bound thrombin is measured as well as free thrombin. When fibrinogen is added back to the defibrinated sample in the chromogenic method the ETP actually falls. This is because the increasing turbidity attenuates the signal from a chromophore, including that related to fibrin-bound thrombin. Such attenuation does not occur with a fluorophore. The compartmentalisation of thrombin, between fibrin-bound and fluid-phase, is significant as fibrin-bound thrombin is protected from inhibition by antithrombin (Weitz et al, 1990) and this may be particularly important both in the initiation of coagulation and in priming the network in preparation for the full thrombin explosion (Kumar et al, 1994). The role of fibrinogen, the effect of fibrin polymerisation and the potential influence of fibrinogen concentration on prothrombotic tendency (Marchetti et al, 2003) emphasise the potential importance of measuring thrombin generation in the presence of fibrinogen. A synergistic role between fibrinogen/fibrin and platelets can be demonstrated when thrombin generation is platelet-dependent (Kumar et al, 1995; Beguin et al, 1999).
A new problem encountered with a fluorogenic substrate is the absence of a direct linear relation between thrombin activity and the fluorescent signal. This is because of substrate consumption and the non-linearity of fluorescence intensity with increasing concentration of fluorescent molecules (the ‘inner filter effect’). In calibrated automated thrombography (CAT) this problem is overcome by monitoring the splitting of a fluorogenic substrate and comparing it to a constant known thrombin activity in a parallel non-clotting sample (Hemker et al, 2003). CAT provides a reproducible method that could be incorporated into clinical laboratory practice.
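The calibration idea can be sketched numerically. The saturating fluorescence response and all concentrations below are invented stand-ins for substrate consumption and the inner filter effect; this is not the published CAT algorithm, only an illustration of why comparing slopes at equal fluorescence levels undoes a level-dependent distortion.

```python
import numpy as np

# Known, constant thrombin-like activity assumed for the calibrator well (nM).
CAL_NM = 100.0

def to_thrombin(t_s, f_s, t_c, f_c, cal_nM=CAL_NM):
    """Convert a sample fluorescence trace to thrombin concentration (nM).

    The sample's dF/dt is divided by the calibrator's dF/dt *at the same
    fluorescence level*, because the distortion (substrate consumption,
    inner filter effect) depends on the signal level already reached.
    """
    rate_s = np.gradient(f_s, t_s)              # sample slope vs time
    rate_c = np.gradient(f_c, t_c)              # calibrator slope vs time
    rate_c_at = np.interp(f_s, f_c, rate_c)     # calibrator slope at same level
    return cal_nM * rate_s / rate_c_at

def fluor(q, fmax=1000.0, qcap=300.0):
    """Toy saturating signal: fluorescence vs amount of substrate converted."""
    return fmax * (1.0 - np.exp(-q / qcap))

dt = 0.01
t = np.arange(0.0, 20.0, dt)
true_T = 250.0 * np.exp(-((t - 6.0) ** 2) / 4.0)   # toy thrombin burst (nM)
q_sample = np.cumsum(true_T) * dt                  # substrate converted so far
q_cal = CAL_NM * t                                 # constant-activity calibrator

recovered = to_thrombin(t, fluor(q_sample), t, fluor(q_cal))
peak_err = abs(recovered.max() - true_T.max()) / true_T.max()
print(f"recovered peak {recovered.max():.0f} nM (error {100 * peak_err:.1f}%)")
```

Despite the strongly non-linear signal, the level-matched slope ratio recovers the toy thrombin curve, which is the essence of running the calibrator in a parallel non-clotting well.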
Mann and colleagues have combined numerical models, a synthetic plasma-platelet model and a minimally altered whole blood assay to study the heterogeneity of the coagulation phenotype on an individual basis (Mann et al, 2004). Thrombin generation is measured in whole blood with a two-stage assay using enzyme-linked immunosorbent assay (ELISA) measurement of thrombin-antithrombin (T-AT) (Rand et al, 1996). The method does not require anticoagulation of the sample with sodium citrate, and so recalcification is unnecessary; anticoagulation is achieved instead by contact factor inhibition with corn trypsin inhibitor (CTI). The original technique would not be applicable to clinical laboratory practice but the method has been modified to a one-point measurement (Brummel-Ziedins et al, 2004). Thrombin is thus measured not only in the presence of fibrinogen but in whole blood.
Defining the thrombin potential
The stimulus-response coupling of thrombin generation predetermines the influence of the coagulation network on the total amount of thrombin generated. In the PT a very high tissue factor concentration (>0·2 μmol/l) is used, which overwhelms the TFPI natural anticoagulant mechanism. Consequently, almost all prothrombin is converted into thrombin directly by extrinsic tenase (TF–VIIa), within 15 s for a normal sample. If the tissue factor concentration is reduced by four to five orders of magnitude (equivalent to about a 50 000-fold dilution of a typical thromboplastin reagent), increasingly the various components of the coagulation network and the connectivity between these components and the dynamics of the connections determine how much and how fast thrombin is generated. Thus a very dilute tissue factor-triggered thrombin generation assay is sensitive to the procoagulant compartment of the thrombin-generating network. How much tissue factor should be used is a question that will have to be determined from translational studies (see Clinical utility). If connectivity and dynamics of the network are to be measured it is necessary to reduce the TF to a concentration that confers dependence on phospholipid-dependent tenase activity (intrinsic tenase). In the one-point whole blood ELISA assay, the TF concentration used in the original description was 5 pmol/l (Brummel-Ziedins et al, 2004). In the CAT assay typical concentrations of TF are 6 pmol/l for platelet poor plasma and 3 pmol/l for platelet rich plasma (Hemker et al, 2003). When using such low TF concentrations it is necessary to relipidate TF with an excess of phospholipid or to add additional phospholipids to the reaction so that the phospholipid concentration does not become rate-limiting. When using platelet rich plasma additional phospholipid can be deliberately omitted to make thrombin generation platelet phospholipid-dependent. 
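The dilution arithmetic in this paragraph is worth checking explicitly, using the concentrations stated above (a PT reagent at >0·2 μmol/l TF versus a low-TF trigger of about 5 pmol/l):

```python
import math

# Back-of-envelope check of the dilution argument in the text; both
# concentrations are taken from the passage above, not new data.
pt_tf_molar = 0.2e-6    # mol/l TF in a PT thromboplastin reagent (>0.2 umol/l)
low_tf_molar = 5e-12    # mol/l TF triggering a low-TF assay (~5 pmol/l)

fold = pt_tf_molar / low_tf_molar
orders = math.log10(fold)
print(f"{fold:,.0f}-fold dilution = {orders:.1f} orders of magnitude")
```

A 40 000-fold dilution (4·6 orders of magnitude) is indeed "four to five orders of magnitude", consistent with the approximate 50 000-fold dilution of a typical thromboplastin reagent quoted in the text.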
An additional potential problem with low TF-triggered thrombin generation assays is error, because of contact factor activation. When the TF trigger concentration is below 15 pmol/l, factor XIIa-driven thrombin generation can equal or exceed that due to TF. As contact factor activation is an unpredictable pre-analytical variable, this produces inaccuracy and imprecision (Luddington & Baglin, 2004). This problem is avoided in the whole blood ELISA method, as factor XIIa is neutralised by taking the blood into samples containing CTI (Rand et al, 1996; Brummel-Ziedins et al, 2004). Similarly, the problem can be avoided in the CAT assay by taking samples directly into tubes containing CTI (Luddington & Baglin, 2004).
Thrombin is both a procoagulant and, via its interaction with thrombomodulin, an anticoagulant, a phenomenon which has been described as the ‘thrombin paradox’ (Griffin, 1995). If the natural anticoagulant property of thrombin is to be captured then a source of thrombomodulin in the reaction tube is required for protein C activation. How much thrombomodulin is required, if any, to produce an assay with clinical utility is again a question that will have to be determined from translational studies (see Clinical utility). Similarly, it will have to be determined if the presence of endothelial protein C receptor or glycosaminoglycans improve the clinical predictive value of an assay.
Normal values and variability
Imprecision of the ETP measurement is less with non-defibrinated plasma (fluorogenic assay) than with defibrinated samples (chromogenic assay) (Hemker et al, 2003). The intra-assay coefficient of variation (CV) with CAT (fluorogenic) is less than 5%. Data are required on the inter-assay CV but this is likely to be less than 10%, as the intra-individual CV from nine consecutive weekly measurements in four individuals was 6–11% (Hemker et al, 2003). The interindividual variability determined from these measurements was 17·5%. These results suggest that in an individual the ETP is relatively constant, at least over a time period of 9 weeks, but that there are significant differences in ETP between ‘normal’ individuals. Before the clinical utility of ETP measurement by CAT is investigated it will be necessary to determine the intra- and inter-assay CVs within and between laboratories and to determine the individual biological variability of the assay over a more prolonged period of time. It remains to be determined whether the inter-laboratory imprecision will be less with CAT than with the chromogenic-based assay and whether standardisation can achieve acceptable levels of imprecision, as has been suggested for chromogenic methods (Lawrie et al, 2003).
A relatively stable thrombin potential in healthy subjects, but with significant interindividual differences, is also supported by the one-point ELISA method developed by Mann and colleagues (Brummel-Ziedins et al, 2004). Over a 6-month time period, 13 healthy male subjects had blood samples taken at approximately monthly intervals. The assay had an analytical variance of 7% and the individual variance between the monthly samples was 11·6%, indicating there was little variation in the thrombin generating potential in subjects. This study also demonstrated significant differences between ‘normal’ individuals with an inter-individual CV of 25% (Brummel-Ziedins et al, 2004).
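The variance figures quoted in these two paragraphs are coefficients of variation (CV = standard deviation/mean), computed within each subject's repeated samples (intra-individual) and across subject means (inter-individual). The ETP values below are synthetic numbers chosen only to illustrate the calculation, not the published data:

```python
import numpy as np

def cv(x):
    """Coefficient of variation: sample SD divided by the mean."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

# Synthetic ETP values (nM*min): rows = subjects, columns = repeat
# measurements over time. Invented for illustration only.
etp = np.array([
    [1850, 1920, 1790, 1880],
    [1400, 1350, 1480, 1430],
    [2250, 2180, 2320, 2290],
])

intra = [cv(row) for row in etp]    # one CV per subject (biological stability)
inter = cv(etp.mean(axis=1))        # CV of subject means (between-subject spread)
print("intra-individual CVs:", [f"{c:.1%}" for c in intra])
print(f"inter-individual CV:  {inter:.1%}")
```

With these invented numbers the intra-individual CVs are a few per cent while the inter-individual CV exceeds 20%, the same qualitative pattern reported above: a stable individual thrombin potential with wide differences between 'normal' individuals.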
Proof of principle
The one-point whole blood T-AT assay was only reported in 2004 (Brummel-Ziedins et al, 2004) and studies utilising this technology by clinical laboratories other than the index laboratory are yet to be reported. Initial studies with a chromogenic ETP assay suggested that hypercoagulability and hypocoagulability can be detected (Wielders et al, 1997). However, the validity of the chromogenic assay is questionable, given the absence of fibrinogen and the effect of this on thrombin measurement. Studies using CAT have now demonstrated that thrombin generation is significantly elevated in some patients with thrombophilia (Chantarangkul et al, 2003; Andresen et al, 2004; Luddington & Baglin, 2004; Regnault et al, 2004) and is reduced and appears to correlate to some degree with severity of factor deficiency in patients with haemophilia (Luddington & Baglin, 2004; Dargaud et al, 2005). Only one study has compared CAT with and without contact factor inhibition and significant differences (up to twofold) in thrombin generation were evident at TF concentrations of 5 pmol/l and less (Luddington & Baglin, 2004).
A review of clinical studies performed to date indicates the need to appreciate the precise methodology used in each case and the need to consider standardisation in future studies. Thrombin generation results depend on the concentration of tissue factor used and whether anticoagulant pathways are activated, for example with thrombomodulin or glycosaminoglycans. Results with samples taken into CTI are lower than when contact factor inhibition is not employed. The phospholipid concentration must be chosen depending on whether or not phospholipids are to be rate limiting. If a thrombin calibrator is not used, results can only be expressed in fluorescent units and such measurements cannot be directly compared with those from CAT, as each fluorescence level requires a different calibration factor to convert fluorescence units to molar concentration of thrombin (see above).
Turecek et al (2003) used a fluorophore without calibration to assess the effect of adding FEIBA™ or recombinant factor VIIa (rVIIa) to plasma samples from haemophilia patients with factor VIII inhibitors. A dose-dependent increase in thrombin generation was observed, with mean normal thrombin generation achieved at a FEIBA™ concentration of 1 U/ml and an rVIIa concentration of 150 μg/ml. The same group performed a further study in which thrombin generation was measured in samples taken from three patients after FEIBA™ administration (Varadi et al, 2003). The rate and peak of thrombin generation were dose-dependent and the thrombin generating capacity decayed with a half-life of 4–7 h. The concentration of TF used in these studies was 2 pmol/l. Siegemund et al (2003) used a fluorophore without calibration and a diluted thromboplastin reagent to study the effect of platelets in patients with haemophilia. There was a linear relationship between thrombin generation and platelet count up to 100 × 10⁹/l and the pattern of thrombin generation appeared to differ between patients with factor VIII deficiency and those with factor IX deficiency. Interindividual differences in thrombin generation in patients with haemophilia could also be inferred from the results of Chantarangkul et al (2003), although uncalibrated thrombography was used. Two studies have now reported results with CAT in patients with haemophilia (Luddington & Baglin, 2004; Dargaud et al, 2005). The study by Luddington and Baglin (2004) used 2 pmol/l TF and samples were taken into CTI, hence normal values for thrombin generation were lower than in the study by Dargaud et al (2005), which did not use CTI, despite the use of 1 pmol/l TF in that study. Luddington and Baglin (2004) studied 10 patients with haemophilia and found interindividual differences in the thrombin-generating capacity relative to the level of deficient factor at the time of sampling.
Dargaud et al (2005) studied 46 patients and found a correlation between ETP and level of deficient factor 24 h after infusion of factor concentrate. Patients with the same residual factor VIII concentration had different thrombin-generating capacities. All these results are in keeping with the hypothesis that the coagulation phenotype in patients with haemophilia is determined only in part by the genetic defect responsible for factor deficiency. However, assay imprecision at very low factor levels requires further evaluation before conclusions are drawn with regard to clinically significant interindividual differences in the thrombin-generating capacity in the untreated state. A variable phenotype in patients with severe haemophilia has also been suggested from a study using measurement of fibrin polymerisation kinetics (Shima et al, 2002).
Two studies have used CAT to investigate the thrombin potential in patients with thrombophilia, one with CTI (Luddington & Baglin, 2004) and one without CTI (Andresen et al, 2004). Luddington and Baglin (2004) studied 20 patients after completion of anticoagulant therapy after a single episode of idiopathic venous thromboembolism. Seven of 20 patients had laboratory evidence of thrombophilia. Five of 20 had increased ETPs. Andresen et al (2004) studied 24 patients with laboratory evidence of thrombophilia but only seven had suffered from thrombosis. Eight of 24 had increased ETPs.
Clinical utility
The ability of a test to improve health outcome (clinical utility) depends on the performance characteristics of the test and the clinical characteristics of the condition to which the test is applied. A thrombin generation test must give reproducible results (low imprecision) and be able to detect a variable degree of coagulability (accuracy). The condition being investigated must be associated with a detectable difference in coagulability from normal (inter-individual variance and disease-attributable variance) and there must be a relationship between degree of coagulability and clinical outcome. The inherent individual variation in coagulability over time in both the pathological and normal state (intra-individual biological variance) must be known in order to interpret test results. Demonstrating clinical utility therefore requires appropriately designed studies that define these parameters and confirm the ability of the test to differentiate the pathological condition from normal. The test must not only identify a potential disorder but have a measurable predictive value for clinical outcome. Only if the test is shown to have predictive ability can it be expected to improve health outcomes. Therefore, the final step will be to demonstrate improved outcome in a management study. Designing such studies will be difficult. For example, in prothrombotic states, should the thrombin potential detect thrombophilia or should it detect a likelihood of thrombosis? The answer is evident from studies that have addressed the clinical utility of testing for thrombophilia. Testing for laboratory evidence of thrombophilia has little, if any, clinical utility (Greaves & Baglin, 2000; Baglin & Greaves, 2002; Baglin et al, 2003).
Therefore, it will not be sufficient for the measurement of thrombin generation to detect a degree of hypercoagulability associated with a thrombophilic defect; it will have to detect a degree of hypercoagulability that predicts the likelihood of a thrombotic event. Thus, it will not be sufficient to correlate thrombin generation with laboratory evidence of thrombophilia in a case–control study; it will be necessary to correlate thrombin generation with thrombotic events in a prospective cohort outcome study. This will require large, prospective, blinded studies conducted over several years. Otherwise, the test will be, at best, a screening test for thrombophilia. Similarly, it will be difficult to demonstrate clinical utility in haemophilia patients. In severe haemophilia, the clinical phenotype is altered radically by factor replacement therapy, so it will not be possible to correlate thrombin generation with the natural history of the disease over a prolonged period of time. Measuring thrombin generation in infants will require definition of assay performance and disease characteristics specifically in this age group. Studies in older patients already established on replacement programmes will have to rely on surrogate markers, such as the factor use required to minimise bleeding, or on pharmacokinetic studies correlating trough thrombin generation with bleeding events. Monitoring drug therapy and relating thrombin generation to clinical events, thrombosis or bleeding, may ultimately be easier to perform, but again large, prospective, blinded studies conducted over several years will be required, and the variation in coagulability attributable to the drug will be an additional parameter that will have to be determined.
Indirect methods of thrombin measurement might also be used to measure the coagulation phenotype; examples include thromboelastography (O'Donnell et al, 2004), the Overall Haemostasis Potential (He et al, 1999), the Thrombin Generation Time employing platelet contractile force (Carr et al, 2003) and Clot Waveform Analysis utilising fibrin polymerisation rates (Tejidor et al, 2001). With all assays, including direct thrombin measurement, predictive value cannot be absolute. The effect of the vessel wall, flow and pressure, as well as the optimal concentration and location of the multiple procoagulant and anticoagulant factors, cannot be mirrored in a test-tube. However, it is possible that a global measure of what can be tested in vitro, rather than measurement of one or more factors in isolation, will provide a degree of prediction that, whilst not absolute, is clinically useful. For example, a degree of hypercoagulability that predicted a 30% likelihood of recurrence within a year of stopping anticoagulation after a first episode of venous thromboembolism might be considered of sufficient predictive value to justify long-term oral anticoagulation with warfarin. Any such clinical management decision would be influenced by the case-fatality rate of recurrent venous thromboembolism, the morbidity of recurrence and the mortality and morbidity risk of bleeding attributable to anticoagulation. Therefore, applying thrombin generation measurement to clinical decision making will require up-to-date estimates of risks relating to the disorder and the intended therapeutic intervention. Ultimately, management studies will be required if clinical utility is to be proven.
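The trade-off described above can be sketched as a simple expected-risk comparison. Every rate in the example below is a hypothetical placeholder chosen only to show the arithmetic; none is a clinical estimate, and a real decision would draw on the up-to-date risk figures the text calls for.

```python
# Hypothetical worked example of the risk trade-off discussed above.
# All rates are illustrative placeholders, not clinical estimates.
p_recurrence_off = 0.30        # predicted 1-year recurrence risk off anticoagulation
recurrence_case_fatality = 0.05
rrr_warfarin = 0.90            # assumed relative risk reduction on warfarin
p_major_bleed_on = 0.02        # assumed annual major-bleed rate on warfarin
bleed_case_fatality = 0.10

# Expected fatal events per patient-year without anticoagulation
risk_off = p_recurrence_off * recurrence_case_fatality

# Expected fatal events per patient-year on anticoagulation:
# residual recurrences plus treatment-attributable fatal bleeding
risk_on = (p_recurrence_off * (1 - rrr_warfarin) * recurrence_case_fatality
           + p_major_bleed_on * bleed_case_fatality)

print(f"Fatal-event risk off anticoagulation: {risk_off:.4f}")
print(f"Fatal-event risk on anticoagulation:  {risk_on:.4f}")
```

Under these assumed rates, continued anticoagulation lowers the expected fatal-event risk, but the conclusion reverses if the bleeding rate or its case fatality is higher, which is exactly why the decision threshold depends on current estimates of both disorder and treatment risk.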
Conflict of interest statement
Dr T Baglin is not aware of any conflict of interest. Specifically, he has never received funding of any form from Cambridge Biosciences, Thermo Labsystems or Synapse BV.