Models for thrombin generation and risk of disease

Authors

  • K. Brummel-Ziedins (corresponding author)

    Colchester Research Facility, University of Vermont, 208 South Park Drive, Room 235B, Colchester, VT 05446, USA.

    Tel.: +1 802-656-9599; fax: +1 802-656-2256. E-mail: kbrummel@uvm.edu


Summary

Computational models can offer an integrated view of blood clotting dynamics and may ultimately be instructive regarding an individual's risk of bleeding or clotting. Appropriately developed and validated models could allow clinicians to simulate the outcomes of therapeutics and estimate risk of disease. Computational models that describe the dynamics of thrombin generation have been developed and have been used in combination with empirical studies to understand thrombin dynamics on a mechanistic basis. The translation of an individual's specific coagulation factor composition data using these models into an integrated assessment of hemostatic status may provide a route for advancing the long-term goal of individualized medicine. This review details integrated approaches to understanding: (i) What is normal thrombin generation in individuals? (ii) What is the effect of normal range plasma composition variation on thrombin generation in pathologic states? (iii) Can disease progression or anticoagulation be followed by understanding the boundaries of normal thrombin generation defined by plasma composition? (iv) What are the controversies and limitations of current computational approaches? Progress in these areas can bring us closer to developing models that can be used to aid in identifying hemostatic risk.

Introduction

Despite advances in clinical screening techniques and methodologies, cardiovascular diseases and stroke remain the major causes of morbidity and mortality in industrialized nations [1-3]. One of the main reasons this continues to be the case is that the vast majority of individuals who suffer from coagulopathies, that is, individuals without obvious genetic deficiencies, have blood coagulation systems that are not identified as abnormal by routine clinical screening tools and factor assays [4]. A variety of congenital polymorphisms and various pro- and anticoagulant factor levels are associated with aberrant hemostasis [5-7]. However, in cases of chronic vascular disease or surgical intervention, there are no blood indicators that signal the need for pharmacologic intervention prior to an acute event or define the intensity of anticoagulant therapy for an individual. Anticoagulant therapy itself carries a significant morbidity burden, with 10–30% of treated individuals experiencing major hemorrhages necessitating hospital admission [8-12]. The identification of individuals who are at risk [13-16] for hemorrhage or thrombosis is a critical area of research that could benefit from innovative technical methods.

The coagulation of blood is the initial phase of the biological repair process that responds to perforating trauma to the vasculature; its function is to stop blood loss from the circulatory system by establishing a temporary barrier between the intra- and extravascular compartments. The enzyme thrombin is a central product of the response to vascular injury, displaying procoagulant, anticoagulant, antifibrinolytic and cellular effects; the magnitude and timing of these effects are critical to normal hemostasis [17]. The generation of thrombin can be measured by many different techniques. Pioneering work by Hemker and colleagues [18], Mann and colleagues [19], and others [20-22] has demonstrated that the vast majority of thrombin is generated well after the plasma (or blood) clot time, which is the traditional endpoint for the aPTT [23] and PT [24, 25] assays. In recent years, thrombin generation assays, thromboelastography, and waveform analysis have gained in popularity and become accepted as useful tools to measure ‘global hemostasis’ [26-28], that is, capturing the complete dynamics of the coagulant response beyond initial clot formation. Their potential to supersede the diagnostic value of clot time–based assays is an area of active exploration.

A parallel effort to understand thrombin generation (hemostasis) has been the development of computational models of the coagulation process. The extensive understanding of the component inventory, connectivity, and dynamics of this process that is available from empirical studies has been exploited to generate descriptions of this reaction network using ensembles of ordinary differential equations (ODEs) [29-34] or more elaborate mathematical constructs for both closed and flow-based model systems [32, 35-45]. Implicit in this type of physicochemical modeling is the principle that initial concentrations of reactants will direct outcomes (i.e. product distribution); thus, to the extent that individuals vary in the concentrations of coagulation components, their responses to a given hemostatic challenge will vary. In principle, coagulation factor protein levels at any given time reflect the sum of developmental, environmental, genetic, nutritional, infectious agent, and pharmacological influences on the liver and other organs that regulate their synthesis and turnover [46]. Thus, there is a linkage between the overall health of the individual and this type of model representation of their hemostatic status.

Work from this laboratory pioneered the fusion of an ODE-based model of tissue factor-initiated coagulation with individual coagulation factor composition data as a method to describe the hemostatic status of an individual and compare it to other individuals [47-53]. Recently, two other laboratories have reported similarly focused computational analyses; one using the same computational model as this laboratory [54, 55] and the other an expanded model to include flow [44].

In this laboratory's approach to modeling individuals using specific composition data, the coagulation network description has been limited to seven circulating precursor proteins (factors (f) II, V, VII/VIIa, VIII, IX, X) and two or three inhibitors (antithrombin (AT), tissue factor pathway inhibitor (TFPI) and protein C). The rationale for this has four parts: (i) The magnitude of the normal range variation of these soluble proteins between individuals is greater than the measurement uncertainty for these proteins, a methodologic precondition for their use to discriminate among individuals; (ii) These proteins appear to be central to the process of tissue factor-initiated thrombin formation [56] and its regulation by anticoagulant agents. Absolute deficiencies in any of these are either incompatible with life [57] or result in bleeding disorders (fII, fV, fVII, fVIII, fIX, fX) or thrombosis (AT, protein C) [57, 58]. Additionally, the importance of the four procoagulant vitamin K–dependent proteins (fII, fVII/fVIIa, fIX, and fX) to normal hemorrhage control is exemplified by their status as primary targets for the anticoagulants warfarin [59] and unfractionated heparin (UFH) [60], both of which have been used for over 60 years. These two therapeutic agents mirror each other in the scope of their action, because UFH potentiates the inhibition of all of the procoagulant enzymes whose functional precursors warfarin anticoagulation suppresses; (iii) The mathematical representation of the interactions of these proteins in the reaction network appears valid, based on the congruence between empirical reconstructions of this limited network and model descriptions [29, 61]; and (iv) The availability of populations with the necessary composition data: even with the small subset of factors used in the model described in this review, databases containing a complete battery of inputs for individuals are scarce.

This review describes the progress made in the use of computationally derived thrombin profiles to understand an individual's hemostatic status. This work is part of the larger scale effort by many groups to computationally represent hemostasis and its derangements. Efforts to computationally understand the boundaries of normal thrombin generation and to build connections between computationally derived representations of normal and aberrant thrombin generation and the corresponding clinical states will be discussed. Several examples will be utilized to show current approaches.

Methods

The overall computational approach aims to describe the hemostatic status of an individual and compare it to those of other individuals using an empirically validated computational description of the tissue factor pathway to thrombin generation combined with individual coagulation factor composition data [47-53]. These computational studies are complemented by additional empirical systems used to investigate the dynamics of hemostasis and coagulopathies and to improve our understanding of the underlying mechanisms (Fig. 1). Our approaches are briefly described below.

Figure 1.

A schematic showing the relationship between empirical and computational assessment of individuals.

Study populations for modeling analysis

The ideal study populations are ones in which individuals with well-defined clinical histories are evaluated from a baseline state through an event (i.e. bleeding, thrombosis, surgery, prophylaxis), with blood obtained at multiple time points for empirical evaluation of the dynamics of coagulation along with coagulation factor composition. In reality, most preexisting study populations available for this type of analysis were assembled without anticipating the need for comprehensive coagulation factor analysis and often lack uncommitted plasma samples that can be utilized. Currently, there are ongoing prospective studies that specifically designate plasma samples for this purpose.

In this review, a few of the populations that have been studied utilizing these approaches are described, including: (i) healthy controls [47, 62] and cases [49, 62] from the Leiden Thrombophilia Study (LETS); (ii) a severe hemophilia A population [51, 63] in which whole blood analyses, factor composition, and bleeding phenotype were evaluated; and (iii) an atrial fibrillation population (n = 20, aged 59 ± 6 years) studied from baseline through warfarin anticoagulation at days 3, 5, 7, 14, and 30. Plasma was collected and factor analyses performed as described [64].

Empirical methods

Empirical systems are used for several reasons: (i) to determine mechanism-based alterations to the dynamic process of thrombin generation; (ii) to evaluate how changes to thrombin generation alter thrombin's other substrates (platelets, fibrin, etc.); (iii) to quantitate thrombin as a biomarker to be used for epidemiological evaluation; and (iv) as a basis to validate or expand mathematical model outputs. Several different empirical models of blood coagulation are utilized, including whole blood assays [65-68], thrombograms [69], thromboelastography [70], and synthetic proteome assays [71, 72]. In this review, data utilizing a whole blood assay are shown, and the focus is to show how this empirical model lends additional insight into each individual's dynamic coagulant response.

Computational methods

Our empirically validated mathematical models of the blood coagulation system have been previously described [29, 73, 74]. These models are built around a series of ODEs, which make use of rate constants derived from experimental measurements made under conditions of saturating concentrations of phospholipids [29]. The base model describing the tissue factor pathway [29, 73] makes use of the following inputs: empirically determined functional concentrations of fII, fV, fVII/VIIa, fVIII, fIX, fX, TFPI, and AT. The protein C model [74, 75] uses all inputs from the base model as well as the empirically determined protein C concentration and thrombomodulin concentrations potentially representative of those found in the vasculature [75]. Following data entry, simulations are initiated with a tissue factor stimulus and solved for any of the ~60 species over time. The resulting time courses for different outputs have been evaluated as described previously [47-51, 76, 77] including thrombin and fXa generation [62, 76]. The compositional influence of each protein (or combination of proteins) on the computational model outputs has been determined both theoretically [63] and in specific populations [49, 72].
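To make the structure of such a simulation concrete, the sketch below is a deliberately reduced, hypothetical toy and not the published model: the entire tissue factor pathway is lumped into a single feedback-amplified activation step, and all rate constants are illustrative placeholders rather than measured values.

```python
# A minimal, hypothetical sketch of an ODE-based thrombin simulation.
# NOT the published coagulation model: the tissue factor pathway is lumped
# into one feedback-amplified step and the rate constants are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

K_TF = 1e-6   # 1/s, lumped tissue factor-driven trigger of prothrombin activation
K_FB = 1e-5   # 1/(nM*s), lumped thrombin positive feedback (e.g. via fV/fVIII)
K_AT = 1e-6   # 1/(nM*s), lumped antithrombin inhibition of thrombin

def rhs(t, y):
    """Toy network: prothrombin (II) -> thrombin (IIa), quenched by AT."""
    II, IIa, AT = y
    activation = (K_TF + K_FB * IIa) * II  # trigger plus feedback amplification
    inhibition = K_AT * IIa * AT           # irreversible thrombin-AT complex
    return [-activation, activation - inhibition, -inhibition]

# One 'individual': initial factor levels in nM (roughly mean physiological)
y0 = [1400.0, 0.0, 3400.0]       # II, IIa, AT
t = np.linspace(0, 1200, 1201)   # 20 min time course, 1 s resolution
sol = solve_ivp(rhs, (t[0], t[-1]), y0, t_eval=t, max_step=1.0)
iia = sol.y[1]                   # free thrombin (nM) over time
```

Varying the entries of y0 across their normal ranges changes the simulated profile, which is the sense in which initial composition directs the model output.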

For each individual's thrombin generation profile, the output is evaluated by parameters (see Fig. 2) that include: the maximum level and rate of thrombin generation, total thrombin generated (the area under the curve), and the time to 2 nM α-thrombin, which corresponds to clot time in our empirical studies [68]. Each individual is then depicted by a positioned, colored ball of a specific size, a collective representation of the four thrombin parameters extracted from the respective thrombin profile. Time to clot (y-axis) and maximum rate (x-axis) position each individual, while color indicates the maximum level and size reflects the total thrombin parameter.
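As a sketch of how these four parameters can be extracted from a simulated time course (for example, the t and iia arrays from the toy simulation above; the 2 nM threshold reflects the empirical clot-time correspondence):

```python
import numpy as np
from scipy.integrate import trapezoid

def thrombin_parameters(t, iia, clot_threshold=2.0):
    """Summarize a thrombin time course by the four parameters described
    above: maximum level, maximum rate, total thrombin (area under the
    curve) and time to 2 nM alpha-thrombin (clot time)."""
    rate = np.gradient(iia, t)                    # nM/s
    above = np.nonzero(iia >= clot_threshold)[0]  # samples at/past 2 nM
    return {
        "max_level_nM": float(iia.max()),
        "max_rate_nM_per_s": float(rate.max()),
        "total_thrombin_nM_s": float(trapezoid(iia, t)),
        "clot_time_s": float(t[above[0]]) if above.size else float("nan"),
    }
```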

Figure 2.

Thrombin generation profile reflecting the dynamics observed in a closed model system. A computationally simulated time course of thrombin generation with all factors at their mean physiological level and a 5 pM tissue factor stimulus is shown. Also indicated are the thrombin parameters (time to 2 nM thrombin (clot time), total thrombin, maximum thrombin, maximum rate) used in this analysis. From [63].

Results

Computational thrombin generation in individuals

It has been reported that intrinsic to an individual's blood there is a defined propensity to respond with a characteristic level of thrombin to a constant tissue factor stimulus [78]. In addition, changes in the coagulation factor composition of these individuals, all within the clinically accepted normal range, shift the thrombin generation curves to give an individualized effect [47, 78]. Such discriminating effects have been observed when individuals were grouped by characteristics such as age, gender, obesity, and oral contraceptive use [47, 49]. For example, when the influence of gender on computationally derived thrombin generation is examined further, women display greater thrombin potential than men [47, 49, 79]. Thrombin generation is increased further in women on oral contraceptives [49] and those who receive in vitro fertilization treatment [48], with poststimulation β-estradiol concentration correlating with thrombin (r² = 0.9). Jordan et al. [44] have also used a flow-based computational model to generate thrombin generation profiles for the LETS population and have reported similar results. In addition, women with a protein C mutation are more prothrombotic than men with the same mutation [79].

In analyses of pathologic states, normal range variation in coagulation factor composition can modulate the relative severity of the thrombin generation defect. To illustrate this, thrombin generation in the same hemophilia A individuals was analyzed both empirically and by plasma-composition-based computational modeling (Fig. 3). Figure 3 (top) displays time courses of thrombin generation in tissue factor-initiated whole blood for 11 individuals with severe hemophilia A (< 1% fVIII at the time of the draw). These individuals show obvious variation in total thrombin generated (~5-fold) and maximum rates (~3-fold). Figure 3 (bottom) shows composition-based (inset table) computationally derived thrombin generation profiles from four of these individuals. These results demonstrate that each individual is characterized by a unique ensemble of pro- and anticoagulant factors, which underlies that individual's specific thrombin profile and is in part responsible for the variation observed between individuals in empirical systems. As seen in panel B, different ensembles of composition (inset table) can produce similar thrombin generation profiles (compare subject A with B, and C with D).

Figure 3.

Empirical and computational representations of hemophilia. (A) Empirical thrombin–antithrombin complex generation in 11 individuals with severe hemophilia A. From [80]. (B) Computationally derived thrombin generation profiles using factor composition of four of these individuals (A–D). Inset table shows factor concentrations in percentages.

These initial observations were pursued in a more detailed study of hemophilia individuals. A challenge for this type of study is that hemophilic individuals use fVIII prophylactically; thus, fVIII levels vary over time, and each individual can be described by Cmax and Ctrough values on a daily, weekly, or monthly basis. Twenty-five genetically severe hemophilia A patients were studied using both whole blood assays and computational analyses based on their coagulation factor composition [51]. As a result of prophylactic fVIII, at the time of the blood draw the individuals had fVIII levels that ranged from < 1% to 22%. Thrombin generation (maximum level and rate) in both empirical and computational systems increased as the level of fVIII increased. Exogenous fVIII was then suppressed (either using an inhibitory α-fVIII antibody or simulating fVIII = 0), allowing for the analysis of thrombin generation in the complete absence of fVIII. The computational analysis showed a moderate negative correlation (r² = 0.34) between bleeding history and maximum thrombin levels in the absence of fVIII. Thus, the integrated effect of each individual's coagulation factors (outside of fVIII) results in a baseline thrombin potential (as a severe hemophiliac), which may affect bleeding risk. This approach (neutralizing fVIII) can offer a way to understand the basis and scope of variation in thrombin generation dynamics between hemophiliacs at baseline. If one can assess this baseline capacity at any point in time, that information could potentially be translated into a more efficient prophylaxis regimen for that individual.

Additional studies have analyzed thrombin generation in the following groups and related these clinical phenotypes to plasma coagulation factor composition: genetic bleeding tendencies [51, 67, 80, 81] and clotting tendencies [75-77]; chronic obstructive pulmonary disease [53]; stroke [50]; rheumatoid arthritis [52]; and cardiovascular disease [72]. In all cases, thrombin generation was more pronounced in individuals with procoagulant vs. bleeding tendencies [67, 80, 81]. Individuals with acute conditions appeared to have more pronounced thrombin generation than individuals with chronic conditions [50, 72]. Table 1 summarizes the results from these studies analyzing composition-based computationally derived thrombin generation and shows which subsets of factors control the observed difference in thrombin generation between the indicated populations. We speculate that changes in the balance of pro- and anticoagulant factors reflect disease-specific alterations affecting synthesis.

Table 1. Factors which best define thrombin generation in the population
Population                               Factors              Ref
ACS vs. CAD                              II, VIII, AT         [72]
COPD vs. control                         II, VIII, IX, TFPI   [53]
Stroke (acute vs. previous)              II, TFPI, AT         [50]
Rheumatoid arthritis vs. control         VIII, TFPI           [52]
In vitro fertilization (pre and post)    VIII, AT, TFPI       [48]
PC mutation (female vs. male)            TFPI                 [79]
Oral contraceptives vs. control          II, IX, TFPI, AT     [62]

Computationally defining the levels of normal thrombin generation

What normal range variation means in the healthy population is still an open question. The scope of normal thrombin generation phenotypes in the population of apparently healthy people is not known, nor is it known how individuals vary over time. Therefore, our laboratory developed an approach [63, 82] to produce a representation of the distribution of possible thrombin generation phenotypes that might be found in individuals, by taking the eight factors used in the computational model and varying them across their clinically acceptable [83] normal ranges.

Each factor was set to extreme low, mean physiological (factors at 100%), and extreme high values, yielding 3^8 = 6561 permutations, each of which was modeled [63]. This results in a set of unique ‘individuals’ representing the theoretical healthy population of thrombin generation phenotypes. A graphic representation was then created in which each individual is depicted by a positioned, colored ball of specific size reflecting the collective of the four thrombin parameters (Fig. 4).
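A sketch of how such a permutation grid can be enumerated follows; the low/high bounds are illustrative placeholders, not the clinical normal ranges used in [63].

```python
import itertools

# Illustrative low/high bounds as % of mean physiological; placeholders only
FACTOR_RANGES = {
    "II": (50, 150), "V": (50, 150), "VII": (50, 150), "VIII": (50, 150),
    "IX": (50, 150), "X": (50, 150), "TFPI": (50, 150), "AT": (75, 125),
}

def theoretical_population():
    """Yield every low/mean/high combination of the eight inputs (3**8 = 6561)."""
    names = list(FACTOR_RANGES)
    levels = [(lo, 100, hi) for lo, hi in FACTOR_RANGES.values()]
    for combo in itertools.product(*levels):
        yield dict(zip(names, combo))  # one theoretical 'individual'

assert sum(1 for _ in theoretical_population()) == 3**8  # 6561
```

Each yielded composition, converted to concentrations, would then be run through the simulation and summarized by its four thrombin parameters.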

Figure 4.

Thrombin generation phenotypes in a hypothetical population defined by normal range variation in factor levels. Each of the 6561 individuals in the population is defined by four thrombin parameters, and their phenotype is represented graphically by a positioned colored circle: y-axis, time to 2 nM thrombin (range 2.3–15 min); x-axis, maximum rate of thrombin generation (range 0.1–12.4 nM s−1); color, maximum thrombin level (range 23 nM, dark blue, to 792 nM, brown); size, total thrombin (range 8179, smallest dot, to 134 340 nM·s, largest circle). An individual with all factors at the mean physiological values is depicted, the arrow indicating that individual's position in the population. Also shown (upper right) is the LETS healthy population (n = 473); its position within the hypothetical normal population is outlined in red. From [63].

In general, ‘weak’ thrombin generators are represented by small blue circles localized toward the upper left region, while ‘strong’ thrombin generators appear as large reddish circles localized toward the lower right region. Figure 4 (inset) also shows the distribution of individuals in the LETS control cohort [47]. The 2- to 20-fold larger ranges predicted for the thrombin parameters of the theoretical population reflect factor ensembles that were possible in the LETS population (given the factor composition ranges) but that did not occur. The wider ranges of thrombin parameters characterizing the theoretical population have two potential origins: a methodologic one, due to the theoretical population's larger size, its emphasis on the extremes of each factor range, and its treatment of all possible ensembles as equally probable; or a biological one, reflecting the fact that some ensembles, perhaps those resulting in individuals with the more extreme characteristics in Fig. 4, are consistent with coagulopathic states and thus would not be found in a healthy population despite all individual factor levels falling within the normal ranges.
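A minimal sketch of the bubble representation itself, mapping the four parameters onto position, color, and size (the scaling choices here are arbitrary):

```python
import matplotlib.pyplot as plt

def plot_phenotypes(params):
    """params: list of dicts as returned by thrombin_parameters(), one per
    individual. Position = max rate vs. clot time; color = max level;
    size = total thrombin, as in the figures discussed above."""
    x = [p["max_rate_nM_per_s"] for p in params]
    y = [p["clot_time_s"] / 60 for p in params]          # minutes
    color = [p["max_level_nM"] for p in params]
    total = [p["total_thrombin_nM_s"] for p in params]
    size = [20 + 180 * tt / max(total) for tt in total]  # arbitrary area scaling
    sc = plt.scatter(x, y, c=color, s=size, cmap="jet", alpha=0.7)
    plt.colorbar(sc, label="maximum thrombin (nM)")
    plt.xlabel("maximum rate of thrombin generation (nM/s)")
    plt.ylabel("time to 2 nM thrombin (min)")
    plt.show()
```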

Figure 5 compares actual hemophilia A and warfarin-treated individuals to the theoretical healthy population. Thrombin parameters were extracted from simulated thrombin profiles generated using coagulation factor composition data from 16 hemophiliacs and 65 warfarin-treated individuals (note that the size of the circles has been increased 5-fold relative to Fig. 4 to improve visibility). The hemophilia population is positioned outside the most extreme phenotypes in the ‘weak’ thrombin generator region of the normal population, while the warfarin individuals are localized within the fringe of this region of the theoretical normal population. It is worth noting that the three individuals in the warfarin population who subsequently had a thrombotic event (circled in yellow) are positioned differently from the successfully anticoagulated individuals. Thus, some theoretically normal phenotypes are not ‘normal’, appearing similar to warfarin-induced phenotypes.

Figure 5.

Thrombin generation phenotypes in hemophilia A individuals and individuals undergoing warfarin therapy. Plasma factor composition was used to generate time courses of thrombin generation, thrombin parameters were extracted, and each individual is represented as described in the Fig. 4 legend. The x-axis (max rate) is truncated (0–1.5 nM s−1), and the size of each individual symbol (total thrombin parameter) has been increased by a factor of five relative to Fig. 4 to improve visibility. Panel A: 16 individuals with severe hemophilia A (fVIII: 0.07% to 1% of mean physiological). Panel B: 65 individuals stably anticoagulated with warfarin (INR values between 2 and 3.3). The three individuals who subsequently had a thrombotic event are circled in yellow. Panel C: the region of the hypothetical population distribution displaying the most similar thrombin generation parameters, with the boundaries of the hemophilia (green) and warfarin (orange + yellow, three individuals) distributions shown. From [63].

An extension of this approach of representing each individual as an ensemble of their thrombin parameters to longitudinal studies is illustrated in Fig. 6. Figure 6 presents changes over time for a set of atrial fibrillation subjects in response to the onset of warfarin therapy [64]. All subjects, including the three highlighted (S1, S2, and S3), have a reduced thrombin generating capacity in response to warfarin therapy. After 3 days on warfarin, subjects S1, S2, and S3 have reduced peak and total thrombin and a reduced maximal rate of thrombin generation compared with their individual baselines. In addition, each subject has a slightly prolonged lag time. This trend continues through day 5, by which point S2 and S3 are approaching a stable thrombin generating capacity, suggesting stable anticoagulation. By day 30, all three subjects are stably anticoagulated, as implied by their consistent but drastically reduced thrombin generating capacity. A video showing the dynamic thrombin generating capacity over time can be viewed online [64].

Figure 6.

The kinetics of warfarin anticoagulation in patients with atrial fibrillation. Thrombin generating capacity was simulated by inputting each subject's factor composition into our mathematical model. Each point (circle) in the figure is representative of a single individual's thrombin generating capacity before (day 0) and during warfarin therapy. All subjects, including the three highlighted (S1: subject 1, S2: subject 2, and S3: subject 3), show a time-dependent reduction in thrombin generating capacity (marginally increased lag time, decreased maximal rate, decreased peak, and total thrombin) in response to warfarin therapy. Note that the peak thrombin scale ranges from 0 to 500 nM. From [64].

Computational outputs: beyond thrombin

Previously, it has been shown that plasma-composition-based simulations of thrombin generation can yield overlapping thrombin profiles despite differing factor concentrations [47, 49, 62]. Therefore, we chose to evaluate additional computational outputs to further discriminate between individuals. We selected fXa as an output measure because it is the transducer of the thrombin generation signal. The main function of fXa is to participate in the prothrombinase complex (fXa-fVa-membrane-Ca2+). Because fXa is a major player in the coagulation process, it is a target for regulation by synthetic inhibitors in treating ischemic heart disease and cerebrovascular disease [84]. We utilized plasma composition from healthy individuals (controls) and individuals with thrombosis (cases) to simulate fXa generation [62]. Our results showed that fXa generation parameters vary over a 10- to 12-fold range. This variation is larger than that observed with thrombin (3- to 6-fold) in the same population. Factor Xa generation discriminated well between defined clinical risk subgroups. Most notably, healthy women on oral contraceptives showed a > 60% increase in some fXa parameters compared with women not on oral contraceptives. In addition, individuals with similar thrombin generation profiles displayed differing fXa generation profiles, and there appeared to be a linkage between the extent of the differences in the fXa generation profiles and the timing of thrombin generation in these pairs of individuals (Fig. 7). Analysis of simulated fXa generation thus has the potential to be a more sensitive discriminator among individuals than simulated thrombin generation.

Figure 7.

Comparing factor Xa and thrombin generation. Overlapping thrombin generation profiles from the healthy LETS population are compared with their fXa profiles. Three pairs of overlapping individual thrombin generation profiles are shown on the left. The dashed line indicates the mean time to the maximum level of thrombin generation within the healthy population (432 s). The corresponding fXa generation profiles in the same individuals are shown on the right. From [62].

Discussion

Progress to date

This review summarizes work exploring the capacity of computational models focused on the enzymatic reactions leading to thrombin generation to translate individual-specific coagulation factor composition data into an integrated assessment of an individual's hemostatic status. Overall, results from the field include: (i) populations segregated by clinical criteria relevant to hemostasis display computationally derived thrombin generation profiles consistent with the clinical phenotype when analyzed by closed [48, 50, 52, 53, 62, 72, 76, 79] or flow-based [44] systems; (ii) in some hemostatic disorders, a specific pattern of expression of a small ensemble of coagulation factors may be sufficient to explain the overall phenotype relative to the control group [48, 50, 52, 53, 62, 72, 76, 79]; (iii) changes in the pro- and anticoagulant balance are associated with inflammatory states [52, 53, 85, 86], reflecting alterations in synthesis from disrupted hepatocyte function [46]; whether alterations in liver function produce the observed disease-specific ensembles of altered factor levels requires further exploration; (iv) apparently healthy individuals with clinically unremarkable factor levels may present thrombin generation profiles typical of individuals with hemostatic complications [47, 62]; whether such individuals represent a population predisposed to future coagulopathies, or whether this observation derives from the incompleteness of the models used, is not established [63]; (v) compensation by the ensemble of other coagulation proteins in individuals with specific factor deficiencies can ‘normalize’ an individual's thrombin generation process and provides a rationale for their unexpected phenotype [51, 75]; and (vi) extension of computational models to simulate the dilutional coagulopathies observed in trauma patients can track the response to therapeutic agents [54, 55]. Collectively, these data support the utility of computational modeling as a method of understanding the hemostatic consequences of normal and pathologic variation in plasma coagulation factor composition, especially in instances where the pathologic variation involves coordinated, clinically unremarkable changes in several factors.

Current limitations

The utility of computational modeling both as a tool for understanding coagulation processes in complex biological fluids and as a tool for risk prediction continues to be strongly debated [87, 88]. Issues currently contested include: (i) whether computational modeling of thrombin generation has any value given the availability of empirical global assays [89]; (ii) whether adequate computational models can ever be constructed given the imperfect knowledge of reaction pathways and the recognized error intrinsic to rate constant and reactant concentration measurements [63, 82, 87, 90]; (iii) what constitutes adequate empirical validation [87, 88] of the network description embodied in a computational model; (iv) to what extent the incompleteness of current computational models affects their utility, that is, can incomplete or partial models be informative in understanding differences between individuals that contribute to differences in clinical hemostatic phenotype [89]; and (v) whether the replacement of closed models with flow models will be necessary to achieve a tool with clinical utility. Notwithstanding these concerns, the potential of computational approaches to be useful in the realm of clinical testing continues to be investigated.

A central issue in developing a clinically relevant model of coagulation based on physicochemical descriptions of the reactions is the tension between the inclusiveness of the model (its relative level of congruence with the biological network) and the capacity to measure the actual physicochemical parameters (i.e. initial concentrations of reactants and rate constants) included in the model. To date, models used to probe coagulation processes on an individual basis do not describe all the hemostatic processes (e.g. platelet activation, fibrin formation/lysis, contact activation). In part, this is because empirical validation of the outputs of these more complex network descriptions is technically challenging. A number of groups are working to validate these more inclusive models using a variety of approaches [74, 90-92].

A second type of challenge confronts the use of more complex models. With respect to comparatively modeling the coagulation systems of individuals in the human population, the governing assumption is that the rate constants are invariant except when a specific mutation is present that alters the function of a key enzyme or substrate (e.g. fV Leiden). Measurement error in rate constants, which would affect the modeling of all individuals in a population, is the primary source of uncertainty [82]. In contrast, the concept of initial species levels is complicated by issues beyond measurement uncertainty. First, there is a lack of average or individual-specific data concerning the in vivo concentrations (or surface densities) of vascular wall components (e.g. TFPI, thrombomodulin) or circulating cell-associated components important to the coagulant response. (Of course, the absence of a vascular component characterizes all clinically applicable global thrombin assays.) Second, the concentrations of the ~30 soluble coagulation precursors and inhibitory proteins [83] accessible to assay in plasma samples are known to vary considerably (often ± 40–50% of the population mean for each factor [63]) in a population of healthy individuals. Thus, to model individuals, one needs to measure the relevant factors. Such analyses entail considerable expense and represent a practical barrier that grows as a model requires more factor inputs. Analyses of thrombin generation profiles from groups paired by disease status [44, 48, 50, 72] have suggested that differences in a small subset of factors drive the observed differences in thrombin generation profiles. These subsets appear to be disease specific; further work is needed in this area, but any reduction in the number of analytes required for a clinically useful computational tool would be desirable.

In summary, despite current limitations, I would argue that existing models represent useful tools that allow us to understand how variation in coagulation factors can influence thrombin generation. Extension of these models to include readily available clinical analytes like platelets and fibrinogen appears both sensible and achievable.

Complementation between approaches to thrombin generation analyses

The assessment of the potential of an individual's blood or derived plasma fraction to generate thrombin has been, and continues to be, the primary method of hemostatic monitoring; defects in thrombin generation are identified by assay performance differences comparing an individual's outcome to an outcome typical of apparently healthy individuals. Historically, these assays were designed to monitor clot time as the indicator of hemostatic competence and are most applicable to gross differences in composition, for example, severe deficiencies of specific factors [93]. More recently, ‘global’ thrombin assays have provided a more robust account of the flux of thrombin generation in closed systems after tissue factor initiation, and their applicability to the diagnosis of coagulopathies is an area of active research [21, 26, 94-98]. However, as with the clot-based assays, those readouts, whether defined as typical or atypical, do not explain the origins of their features or why one individual appears similar to or different from another. The modeling-based approach discussed in this review requires coagulation factor analyses of each individual's citrate plasma sample, but yields a readily dissected representation of an individual's coagulation state based on the current understanding of the dynamics of these proteins at their physiological concentrations and native conformations. It provides a mechanism-based rationale for understanding how concentration changes in some factors that are characteristic of a given disease process (e.g. inflammatory syndromes) might have different hemostatic consequences in different individuals. Ideally, these approaches would be used in tandem.

Future directions

Determining who is at risk for thrombotic events (venous and arterial) is complicated because thrombosis is a multicausal disorder [99, 100]. The development of computational models of thrombin generation applicable to tracking hemostatic disease progression and identifying when individuals are at risk is in its infancy. Whether progress in this field will require more complex model descriptions remains to be determined. Longitudinal studies with relevant factor composition data will be essential; the goal is to generate algorithms that combine clinical history, clinical measures (including an empirical global assessment of thrombin generation), and computationally derived components to predict risk. Potential approaches to identifying and integrating the types of information critical to developing predictive algorithms include the use of evolutionary learning machines and logistic regression analyses, as sketched below.
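As a purely hypothetical illustration of the latter, a logistic regression relating per-individual thrombin parameters to outcome might be prototyped as follows; the features and labels are synthetic placeholders, and no such fitted model exists in the work cited here.

```python
# Hypothetical sketch only: synthetic data stand in for a real longitudinal
# cohort; no fitted risk model of this kind exists in the cited studies.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Rows: individuals; columns: clot time, max rate, max level, total thrombin
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 1 = subsequent thrombotic event (synthetic)

auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print(f"cross-validated ROC AUC: {auc.mean():.2f}")  # ~0.5 on random data
```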

Acknowledgements

This review was supported by Program Project grant HL46703 (Project 5) from the National Institutes of Health. I would like to acknowledge the contributions of Dr. Kenneth Mann, Dr. Thomas Orfeo, Dr. Stephen Everse, Dr. Chris Danforth, Dr. Jonathan Foley, and Matthew Gissel to this work and Dr. Anetta Undas, Dr. Frits Rosendaal, Dr. Kelley McLean, Dr. Ira Bernstein, and Dr. Georges Rivard for providing patient data.

Disclosure of Conflicts of Interest

The author states that she has no conflict of interest.
