Keywords: blood coagulation, platelets, hemodynamics, theoretical models, computer simulation
Computational models of hemostasis and thrombosis complement in vitro and in vivo
models by providing new tools for understanding these phenomena. Their advantage lies
in the ability to simulate and interrogate complex systems where intuition and empiricism
often fail. They are able to generate new hypotheses, to simulate conditions that would be difficult or impossible to reproduce experimentally, to discover new mechanisms, and to explain paradoxical experimental and clinical observations in a variety of organs and pathologies.[1] Pharmacokinetic/pharmacodynamic models are commonly used to predict the response to new drugs and to scale animal models to humans.[2] Network models have identified new drug targets in cancer.[3] Electrophysiology models are used to simulate a wide range of cardiac diseases such as atrial fibrillation.[4] However, computational models have yet to be widely implemented in basic or translational
research in hemostasis and thrombosis. This is partly because their descriptions are highly technical and accessible only to the experts and researchers who develop them, which obscures their full potential. As a result, there is a lack of understanding of
how models are built, how and when they should be applied, and importantly, their
limitations. Our objective here is to demystify these approaches by walking the reader
through a description of one particular model and then providing examples of how that
model has led to discovery of new mechanisms in coagulation dynamics.
The first mathematical treatment of coagulation dates back to the 1950s.[5 ] Many of the seminal studies that serve as the foundation for today's computational
models were first developed in the 1980s and 1990s. These include models of thrombin
generation,[6,7] fibrinolysis,[8] and platelet adhesion and aggregation.[9,10,11] Over the last 20 years, advances in computational power and numerical methods[12] have paved the way for models of cellular- and molecular-scale phenomena such as von Willebrand factor dynamics,[13] platelet margination,[14,15] platelet adhesion,[16,17] and red blood cell microrheology.[18] These phenomena clearly cannot be separated in vivo, but they are each uniquely
complex and thus models are often initially developed to focus on one “subsystem”
at a time. Computational advances have allowed the models to become more complex,
both in terms of the biology (e.g., number of biochemical reactions and blood cells)
and the computational expense necessary to simulate the systems (e.g., size of physical
system and timescale). Efforts to integrate models of these different subsystems of hemostasis and thrombosis have begun, but they are in their nascent stages.[19,20]
Belyaev et al recently published a review of computational models of thrombosis that
includes an insightful analysis of current challenges, model limitations, and future
needs.[21 ] Several invited published comments on this article underscore the varying opinions
on the needs, challenges, and strategies for the adoption of such models in research
and clinical practice.[22,23,24,25,26] We point the reader to this and other articles for reviews of mathematical models of hemostasis and thrombosis.[27,28,29,30]
In this article, we review a single mathematical model of flow-mediated coagulation
and platelet deposition that we have developed,[31,32] extended,[33,34,35] and used to interpret and guide experimental studies.[36,37,38] We provide a “behind-the-scenes” view of this multidisciplinary scientific effort
and describe the thought process involved with model development, validation, and
application. In particular, we provide this insight in the context of the specific
problem of identifying modifiers of thrombin generation in hemophilia A.
Why Make a Mathematical Model?
Mathematical models are most useful when they can answer scientific questions; if a model does not exist to address a specific question, one can be designed with that question in mind. In this case, we want to answer the following question: how
does plasma composition alter thrombin generation during thrombus formation when factor
VIII (FVIII) is deficient? This question was motivated by the significant variation
in bleeding frequency and severity within clinical categories—severe, moderate, mild—of
hemophilia A,[39,40] which are not accurately predicted by standard laboratory assays.[41] The normal ranges of coagulation factor zymogens, cofactors, and endogenous anticoagulants are accepted as around 50% to 150% of the mean values of the healthy population.[42] This is a remarkably broad range as compared to, for example, tightly regulated
plasma ion concentrations.[43 ] The breadth in normal factor variation suggests hemostasis is a robust system capable
of fulfilling its physiologic function in the face of wide variations in its individual
components.
We hypothesized that normal variations in plasma protein composition could significantly
alter thrombin generation when one component, FVIII, is deficient. To systematically
vary the concentrations of all the zymogens, cofactors, and endogenous anticoagulants
simultaneously in an experimental system of thrombus formation under flow would be
both time consuming and expensive. In vitro and in vivo flow models of thrombus formation
are low throughput, even with the development of high-content microfluidic devices,[44,45,46] relative to well plate-based biochemical and cell models of thrombin generation.
With a computational model we can investigate these variations efficiently, using
the model as a tool to guide more pointed experimental inquiry. The model can be used
to study variation in thrombin generation within a large parameter space that includes
plasma composition; platelet adhesion, aggregation, and binding sites; hemodynamics,
platelet margination, and mass transfer; and size and extent of injury. However, for
this work to be meaningful, we must be confident that the original model captures
the essential dynamics of thrombus formation.
The model we describe was first developed over 20 years ago.[31 ] Our goal then was to build a model of thrombin generation that integrated what was
known about the biochemical network of coagulation with ideas about the roles of platelets
and flow. At that time, there were no experiments that looked at the system as a whole,
and the model was intended to provide a tool with which to quantitatively examine
the ideas in the literature about how the system functioned. Our thoughts in building
the model were strongly influenced by views and data from the Mann lab[47 ] on the role of surface reactions in coagulation, ideas about the essential role
of platelets put forth by Monroe, Hoffman, and Roberts[48,49] and by Walsh and coworkers,[50] and our belief, based on the compelling studies of Turitto and coworkers,[51,52] that it was essential to consider flow. The model included the biochemical network
of coagulation as well as the role of platelets and blood flow in regulating this
network. It was developed to answer several important questions: How does local blood
flow regulate thrombin generation in the tissue factor (TF) pathway? How does binding
site density on activated platelets control bursts of thrombin generation? How does
platelet deposition on TF-exposing subendothelium affect coagulation? This model predicted
that thrombin generation under flow depended on surface TF concentration in a threshold
manner and that small amounts of exogenous FXIa and TF worked synergistically to enhance
thrombin generation. Both predictions were experimentally verified.[37,53] This validation provides confidence that the model captures important qualitative
dynamics of thrombus formation under flow. But perhaps more importantly, the model
was also used to find the mechanisms underlying the predicted phenomena; in the case
of the TF threshold, the model revealed a race between platelet-bound tenase formation
and platelet coverage of the active TF surface. In the case of the TF/FXIa synergy,
the model showed that the platelet-bound tenase that formed with low TF and FXIa formed
quickly enough to support a thrombin burst, but did so in a unique way that specifically
exploits FIX/FIXa binding sites on activated platelet surfaces. What features of this
model enabled it to be a valuable tool? Its design. Designing useful models involves
many decisions including the type of model, what components of the underlying system
will be included, and how to incorporate them.
What Kind of Model Should Be Built?
Models can be broadly split into two categories: explanatory or correlative. In practice, however, most models contain elements of both. Explanatory elements are those that incorporate the mechanism of interactions between variables. For example, we know that TF forms a complex with FVIIa (see the schematic in [Fig. 1]). We can convert that schematic into a reaction scheme that describes
the binding and unbinding of TF with FVIIa, with a rate of forming the complex (k_on) and a rate of the complex breaking apart (k_off). The reaction scheme can then be translated into mathematical equations that track
how the concentrations of each of TF, FVIIa, and TF:FVIIa change in time according
to the reaction scheme. These types of mathematical equations are called ordinary
differential equations (ODEs). The TF:FVIIa complex converts both FIX to FIXa and
FX to FXa in addition to being inhibited by TF pathway inhibitor (TFPI), and so the
reaction schemes that describe all of those steps would additionally be translated
into mathematical equations; in an explanatory model, equations are written for each
step in the process.
Fig. 1 Mathematizing biological schematics. The reaction scheme describing the binding and
unbinding of activated factor VII (FVIIa) with tissue factor (TF) and the corresponding
translation into mathematical equations. These equations here take the form of ordinary
differential equations, meaning that they track variations in time and not in space.
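For concreteness, the mass-action equations implied by this binding/unbinding scheme can be written out explicitly; the form below is the standard textbook formulation, with square brackets denoting concentrations and the k_on/k_off notation introduced above, and is shown only to illustrate the translation rather than to reproduce the exact notation of Fig. 1.
```latex
\begin{aligned}
\frac{d[\mathrm{TF}]}{dt} &= -k_{\mathrm{on}}\,[\mathrm{TF}][\mathrm{FVIIa}] + k_{\mathrm{off}}\,[\mathrm{TF{:}FVIIa}],\\
\frac{d[\mathrm{FVIIa}]}{dt} &= -k_{\mathrm{on}}\,[\mathrm{TF}][\mathrm{FVIIa}] + k_{\mathrm{off}}\,[\mathrm{TF{:}FVIIa}],\\
\frac{d[\mathrm{TF{:}FVIIa}]}{dt} &= \phantom{-}k_{\mathrm{on}}\,[\mathrm{TF}][\mathrm{FVIIa}] - k_{\mathrm{off}}\,[\mathrm{TF{:}FVIIa}].
\end{aligned}
```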
By contrast, correlative elements are more like a black box: we may know inputs and outputs, but the relationships between variables are empirical or semiempirical. For
example, we do not know the precise relationship between soluble agonist concentrations
(adenosine diphosphate [ADP], thrombin) and platelet activation within a growing thrombus,
but we do have some sense of what agonist concentration leads to activation, so we
create a mathematical relationship between agonist concentration and platelet activation
that emulates the desired dose-dependent activation response. Explanatory models include
more detail, which may lead to complicated models (for a model of thrombus formation
this can be upward of 50 equations with even more parameters), but their advantage
is that they enable probing of the model for new mechanisms in a way that correlative
models do not. Because we are looking for mechanistic insight into our driving question—how
plasma protein composition regulates thrombin generation—an explanatory model will
help us to both identify the most important variables and investigate how those variables
interact within the complete coagulation network. This is because an explanatory model
is built using what is known about the mechanisms within the system, but the behaviors
produced when these components interact emerge only from studying the full model.
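As a concrete illustration of such an empirical relationship, one common choice is a Hill-type dose-response function; the specific form and symbols below are assumptions made for illustration, not necessarily the form used in the model discussed here.
```latex
% Empirical dose-response for platelet activation by a soluble agonist at
% concentration c: A_max is the maximal activation rate, c_50 is the
% concentration giving a half-maximal response, and n sets the steepness.
A(c) \;=\; A_{\max}\,\frac{c^{\,n}}{c^{\,n} + c_{50}^{\,n}}
```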
Next, important features of the system we are modeling need to be considered to determine
the best model for the job. The evolution of a thrombus is a dynamic process; the
platelet, zymogen, enzyme, cofactor, and anticoagulant concentrations change not only
in time, but also in space. Is the model dynamic or static, that is, does it change
with time or not, and does it incorporate spatial components and variations? A model
in which the components vary in time but do not vary in space is often called “well-mixed,”
since it assumes that any components are instantaneously well-mixed in space. These
models are typically built using the type of ODEs shown in [Fig. 1 ] to evolve model components forward in time. A model that accounts for variation
in space is typically built on partial differential equations (PDEs), which evolve the model components forward in time and additionally track changes in space. One disadvantage of this type of model is the computational
cost of tracking the spatial variations for every element of the model. For small
injuries, such as those in microvascular bleeds common in hemophilia, we use a well-mixed
model.
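The distinction can be made concrete with a generic species concentration c; the forms below are standard and serve only to contrast the two approaches, with R(c) denoting the reaction and binding terms, u the fluid velocity, and D a diffusion coefficient.
```latex
% Well-mixed (ODE): concentration varies in time only.
\frac{dc}{dt} = R(c)
% Spatially resolved (PDE): advection by the flow and diffusion are tracked
% in addition to the reactions.
\frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c \;=\; D\,\nabla^{2}c + R(c)
```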
Another important question to ask is if the system is open or closed? In an open system,
mass can go in and out. In a closed system, for example, modeling clotting reactions
in a test tube, no mass is added or subtracted. However, as a thrombus forms under
flow, blood flow transports platelets and plasma proteins into and out of the site
of injury. To represent that with a well-mixed model, we need to assign the rate of
transport for each component that is subjected to flow within the system. These rates
can be derived using theories of mass transfer,[54 ] but can be thought of more simply as being additional rate processes (written as
additional terms in the mathematical equations), like in chemical reactions, that
supply new reactants and carry away reaction products. In using this mass transfer
assumption, we are integrating some of the physics related to blood flow without adding
the computational cost of tracking spatial variations.
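Schematically, for a fluid-phase species with concentration c in the reaction zone, such a transport term takes the generic form below, where c_up is the upstream concentration carried in by the flow and k_flow is an effective transfer rate; these symbols are generic placeholders, and the actual transfer coefficients in the model are derived from mass-transfer theory.[54]
```latex
\frac{dc}{dt} \;=\; \underbrace{k_{\mathrm{flow}}\left(c_{\mathrm{up}} - c\right)}_{\text{flow-mediated supply and removal}} \;+\; \underbrace{R(c)}_{\text{reactions and surface binding}}
```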
How Is the Model Built?
The first task in model design is deciding which variables to track with the model.
Variables can include things like which coagulation proteins and inhibitors to study,
the concentrations of those proteins, and which cells are included. Then it is necessary
to define how the variables interact with each other. That is, for each protein in
the coagulation network, we need to assign its interactions with other proteins and
surfaces (endothelium, subendothelium, activated platelets). Moreover, because we
are modeling the role of platelets, we also need to define how they adhere to the
subendothelium, cohere with each other, and are activated by wall-bound (e.g. collagen)
and soluble agonists (ADP, thrombin). This exercise requires curating the knowledge
base to define these interactions as discussed in detail in the manuscripts describing
our models.[31,32,33]
[Figure 2 ] shows a schematic of the variables' interactions considered in our model and the
reaction zone where the thrombus forms.
Fig. 2 (Color online) Schematic of coagulation reactions included in our model. Schematic
(A ) of the reaction zone where platelet deposition and coagulation reactions are tracked,
and (B ) of the endothelial zone into which thrombin can diffuse from the reaction zone,
and in which thrombin binds to thrombomodulin and produces activated protein C (APC)
which can diffuse into the reaction zone. (C ) Dashed magenta arrows show cellular or chemical activation processes. Blue arrows
show chemical transport in the fluid or on a surface. Green segments with two arrowheads
depict binding and unbinding from a surface. Rectangular boxes denote surface-bound
species. Solid black lines with open arrows show enzyme action in a forward direction,
while dashed black lines with open arrows show feedback action of enzymes. Red disks
show chemical inhibitors. APC, activated protein C; AT, antithrombin; EC, endothelial
cell; PC, protein C; TF, tissue factor; TM, thrombomodulin. Image Courtesy: Link et
al.[64 ]
To track how each variable changes in time due to these interactions, we need to formulate
an equation for each one. Based on the decisions described above, to model thrombus
formation we want a dynamic, open system that is well-mixed. Such a model can be represented
by a system of ODEs. We point the reader to our book chapters that describe these
types of equations in more detail.[28,55,56] Briefly, ODEs consider the rate of change of a variable in time to be equal to the
rates through which it participates in binding and unbinding events, chemical reactions,
or mass transfer. Since many variables depend on one another, the equations for their
rates of change often involve more than one of the variables. As such, the full set
of ODEs is called a “coupled” system that requires the ODEs to be solved simultaneously.
Fortunately, even large systems of coupled ODEs are quickly solved with a laptop computer.
For example, our ODE model with 86 coupled equations simulates 20 minutes of physiological
time in only seconds of computational time on a commodity computer. This makes ODE
models well-suited for screening large numbers of conditions.
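To make the mechanics concrete, the short sketch below solves the toy TF:FVIIa binding scheme of [Fig. 1] numerically with an off-the-shelf ODE solver; it is not the full 86-equation model, and the rate constants and concentrations are hypothetical placeholders chosen only to show the workflow.
```python
# Minimal sketch: numerically solving a small coupled ODE system with SciPy.
# This is only the toy TF + FVIIa <-> TF:FVIIa scheme of Fig. 1, not the full
# 86-equation model; rate constants and concentrations are hypothetical
# placeholders chosen to illustrate the workflow.
from scipy.integrate import solve_ivp

K_ON = 1.0e7    # association rate constant, 1/(M*s) (hypothetical)
K_OFF = 1.0e-3  # dissociation rate constant, 1/s (hypothetical)

def rhs(t, y):
    tf, fviia, tf_fviia = y
    # net rate of complex formation by mass action
    net_binding = K_ON * tf * fviia - K_OFF * tf_fviia
    return [-net_binding, -net_binding, net_binding]

y0 = [1.0e-9, 1.0e-10, 0.0]  # initial concentrations in M (illustrative)
sol = solve_ivp(rhs, (0.0, 1200.0), y0, method="LSODA",
                rtol=1e-8, atol=1e-15)  # 20 minutes of simulated time

print(f"TF:FVIIa after 20 minutes: {sol.y[2, -1]:.3e} M")
```
In the full model, the right-hand side simply contains many more coupled terms of this kind, which is why even 86 equations solve in seconds on a laptop.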
Of course, computational time is not the only criterion to consider when building a
model. In order for the model to be useful it must also faithfully represent the complex
biophysical and biochemical phenomena involved with thrombus formation. To accomplish
both these goals requires making assumptions and approximations in our model. Basing
these assumptions on experimental data is, of course, ideal but not always possible.
Indeed, formulating reasonable and biologically relevant models requires experience, guesswork, luck, and conversations among researchers to understand the consequences of these assumptions.[57] A few important assumptions in our model are as follows:
(1) When a platelet adheres to the subendothelium, it blocks the activity of the TF:FVIIa
complex on the patch where it adheres.
(2) There are a finite number of binding sites for coagulation proteins to bind to
on endothelial cells, the subendothelium, and activated platelets.
(3) The extent of injury is varied by changing the surface density of TF.
(4) Platelets are treated like a chemical species in the model, with their own mass
transfer rates, that can adhere, aggregate, and activate via additional rate constants.
A full list of assumptions related to platelets, reactants, protein binding on surfaces, and transport is found in the reports describing the model.[31,32,33]
The equations of the model are only one part of the modeling process. All models depend
on parameters: initial conditions, biochemical rates, and physical properties of the
system. Our model of coagulation under flow has 122 parameters including initial concentrations
of plasma proteins, diffusion coefficients, reaction and binding rate constants, the
number of binding sites on various surfaces, and rates associated with platelet adhesion,
cohesion, and activation. In comparison to other modeled systems, for example, intracellular
signal transduction networks, coagulation is fairly well-characterized; the plasma
concentrations of each protein (and often their amount in platelet granules) are known.
Moreover, the network of protein–protein interactions is well defined, and the kinetic
rate constants for most protein–protein interactions have been measured in the fluid
phase and on the surface of phospholipids, as appropriate. However, not all rate constants
have been measured and, when model assumptions and approximations are made, new parameters
are sometimes created that are not measurable. In these cases, parameters must be
estimated. Finally, it is important to understand how the model behaves in response
to changes in individual parameters or groups of parameters. We discuss parameter
estimation and parameter sensitivity in the “Do We Believe the Model?” section.
Is the Model a “Good” Model?
With a model of a complex biological system in hand, we need to do a series of diagnostic
tests to make sure it is well behaved, similar to what one might do with an electronic
circuit. A good first test is to see if the model is internally consistent, which
in the case of a protein–protein network means there is conservation of mass. For
each component, the amount coming into the system minus the amount leaving the system
should equal the amount generated plus the amount that accumulates (in – out = generation + accumulation).
Model components go in and out of the system by blood flow-mediated transport; model
components can be generated or consumed by chemical reactions; and model components
can accumulate by binding to surfaces like those of activated platelets.
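As a small, hypothetical example of such an internal-consistency check, in the closed toy binding scheme used earlier the total TF (free plus in complex) should remain constant, and a numerical solution can be tested against that directly.
```python
# Sketch of an internal-consistency (conservation of mass) check on the toy
# TF + FVIIa <-> TF:FVIIa scheme: total TF, free plus in complex, must stay
# constant because TF is only bound and unbound, never created or destroyed.
# All values are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

K_ON, K_OFF = 1.0e7, 1.0e-3  # hypothetical rate constants

def rhs(t, y):
    tf, fviia, tf_fviia = y
    net = K_ON * tf * fviia - K_OFF * tf_fviia
    return [-net, -net, net]

sol = solve_ivp(rhs, (0.0, 1200.0), [1.0e-9, 1.0e-10, 0.0],
                method="LSODA", rtol=1e-10, atol=1e-16)

total_tf = sol.y[0] + sol.y[2]  # free TF plus TF in complex at every time point
drift = np.max(np.abs(total_tf - total_tf[0])) / total_tf[0]
print(f"relative drift in total TF: {drift:.2e}")  # should be near solver tolerance
```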
The next step is to see if the model is externally consistent; that is, how close
are the model results to existing data? Our model tracks the kinetics of thrombus
formation on a small patch of exposed subendothelium, so it makes sense to compare its
output to the kinetics of experimental models of small, intravascular injuries. Using
in vitro flow assays for such a comparison is the most direct because flow rates and
surface TF concentrations are user-defined parameters. Using a platelet-bound thrombin
sensor, Welsh et al report a thrombin burst between 4 and 5 minutes in whole blood
flow assays on collagen-TF surfaces at 100 s⁻¹.[58] We note this is comparable to the timescale for a thrombin burst in our model under
similar conditions. In addition, in both the flow assay and the model, the thrombin
burst is preceded by rapid accumulation of platelets within the first 2 to 3 minutes.
The kinetics of platelet and fibrin accumulation and thrombin generation also occur
on similar timescales in the laser injury and pipette injury models in mice.[59,60] Our model predicts a steady, nearly linear increase in activated platelets over
time, in agreement with the increase in P-selectin positive platelets in these animal
models. Importantly, our model does not include spatial variability, so it cannot capture the core-shell structure observed in these experimental models. However, we have developed other computational models that include spatial variations and recapitulate the emergence of the core-shell structure from transport limitations.[34,35,61]
Other tests of external consistency include varying model inputs, such as platelet count and coagulation factor levels, and comparing model outputs to observations.
Severe thrombocytopenia in the model results in a drastic drop in platelet accumulation
and thrombin generation. Severe deficiencies in FVIII or FIX in the model delay the
onset of thrombin generation, as expected, but a decreased maximum thrombin concentration
is only observed in the model when platelet deposition blocks TF:FVIIa activity.
Do We Believe the Model?
How much confidence do we have in the model output? Mathematically, this question
is phrased in terms of model uncertainty and is studied with sensitivity analysis.
Because any complex model is necessarily the consequence of observations in different
settings and parameter estimates from different labs, we want to understand how uncertainty
in model inputs (parameters, biophysical characteristics, and initial conditions)
impacts the model output. In particular, if small changes in a model input lead to large changes in model output, this suggests that the model is particularly sensitive to that input. Of course, uncertainty in model inputs is inevitable and there are
many sources of uncertainty. There may be uncertainty in the kinetic rate constants
due to the experimental conditions they were measured under. There may be uncertainty
in plasma levels of clotting factors based on the broad levels of variation in those
plasma factors among individuals. And, there may also be uncertainty introduced by
the model formulation itself. For example, if the biochemical reaction scheme is missing true interactions, the model will not depict the correct biochemical dynamics.
In addition, if we are using a system of ODEs, we will be neglecting potentially important
spatial variation through diffusion or flow. Here, we assume the model formulation itself is sound and want to quantify the uncertainty in its output that comes from a lack of knowledge about kinetic parameters and initial conditions.
In this context, we are interested in studying how the uncertainty propagates through
the model, from input to output. As we will describe in greater detail, this enables us not only to precisely characterize the robustness of the model output but also to attribute variation in model outputs (e.g., thrombin concentration) to specific model inputs (e.g., biochemical parameters, initial conditions). This
type of analysis is particularly informative when the uncertainty of model inputs
has been characterized. For example, as we described above, normal levels of coagulation factors vary over a significant range, and in our studies we consider each to vary between 50% and 150% of its mean value. As such, a reasonable investigation of our
coagulation model would be to vary a clotting factor's plasma level (initial conditions
in our model) and quantify the extent to which the model output is changed. Our model
gives output in the form of concentrations for every chemical species in time, but
we are primarily interested in the impact on thrombin generation. We may also want
to study specific output metrics related to thrombin generation such as the time until
some desired amount of thrombin is generated, the maximum thrombin concentration generated
over some amount of time, or the maximum rate of thrombin generation.
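A brief sketch of how such scalar metrics might be extracted from a simulated thrombin time course is shown below; the synthetic time course, the helper function name, and the 1 nM threshold are all illustrative choices rather than part of the model itself.
```python
# Sketch: extracting scalar output metrics from a simulated thrombin time course.
# The arrays t (seconds) and thrombin (nM) would come from a model run; here a
# synthetic sigmoidal curve stands in for model output, and the 1 nM threshold
# is an illustrative choice.
import numpy as np

def thrombin_metrics(t, thrombin, threshold_nM=1.0):
    above = np.nonzero(thrombin >= threshold_nM)[0]
    lag_time = t[above[0]] if above.size else np.inf    # time to reach threshold
    max_conc = float(np.max(thrombin))                  # peak thrombin concentration
    max_rate = float(np.max(np.gradient(thrombin, t)))  # steepest rate of generation
    return lag_time, max_conc, max_rate

t = np.linspace(0.0, 1200.0, 600)
thrombin = 150.0 / (1.0 + np.exp(-(t - 300.0) / 40.0))  # stand-in for model output
print(thrombin_metrics(t, thrombin))
```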
Broadly speaking, there are two approaches to studying the sensitivity of model output
to model inputs. In local sensitivity analysis, we study the sensitivity of the model output to each input
on its own by varying each input over some specified range. In global sensitivity analysis, we study the sensitivity of the model output as all parameters
of interest are varied simultaneously. Both forms of sensitivity analysis have been
productively used on models of complex biological processes, and on coagulation in
particular. However, because global sensitivity analysis tells us the relationship
between multiple parameters, it is able to identify combinations of proteins to which
the model output is especially sensitive.[62 ]
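To illustrate the flavor of a global approach, the sketch below samples several factor levels simultaneously over 50% to 150% of approximate nominal plasma values with a Latin hypercube, runs a placeholder model for each sample, and ranks inputs by rank correlation with the output; run_model is a hypothetical stand-in for a full simulation, and this is not the specific sensitivity method used in the studies cited.
```python
# Sketch of a simple global sensitivity workflow: sample several inputs at once
# with a Latin hypercube over 50%-150% of approximate nominal plasma levels, run
# a model for each sample, and rank inputs by rank correlation with an output
# metric. run_model is a hypothetical stand-in for a full simulation.
import numpy as np
from scipy.stats import qmc, spearmanr

names = ["prothrombin", "FV", "FVIII", "FIX", "FX"]   # illustrative subset
nominal = np.array([1400.0, 20.0, 0.7, 90.0, 170.0])  # approximate plasma levels, nM

def run_model(levels):
    # Placeholder returning a toy "peak thrombin" response so the sketch runs
    # end to end; replace with a call to the real simulation.
    return levels["prothrombin"] / 1400.0 - 0.3 * levels["FV"] / 20.0

sampler = qmc.LatinHypercube(d=len(names), seed=0)
samples = qmc.scale(sampler.random(n=200), 0.5 * nominal, 1.5 * nominal)

outputs = np.array([run_model(dict(zip(names, s))) for s in samples])

# Rank inputs by the strength of their monotone association with the output.
for j, name in enumerate(names):
    rho, _ = spearmanr(samples[:, j], outputs)
    print(f"{name:12s} Spearman rho = {rho:+.2f}")
```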
Uncertainty quantification and sensitivity analysis are often used to test model robustness.
Here we mean that since hemostasis is a robust system in healthy persons, our model
should emulate this. To test for robustness and overall sensitivity of our model,
we performed both local and global sensitivity analyses on the model output metrics
described above because they relate to clinical assay outputs, that is, thrombin lag
time, maximum relative rate of generation, and concentration after a specified time.[38 ] Varying plasma levels of proteins within their normal range of 50% to 150% led to
strong thrombin generation within 3 to 5 minutes, which gave us confidence in model
robustness ([Fig. 3 ]). Variations in kinetic rate constants between 50% and 150% of literature-derived
values led to more serious variations in output, and in some cases led to little or
no thrombin generation. Not all rate constants have been measured, and those that have were measured under a variety of conditions and with a variety of techniques that likely affect their values, so there is some uncertainty in their “true” values in vivo.
Fig. 3 (Color online) Thrombin concentration dynamics under flow generated by varying plasma zymogen and anticoagulant levels under normal (A) and severe factor VIII (FVIII) deficiency (B) conditions. Data represent 110,000 simulations in which levels were uniformly varied from 50% to 150% of the population mean values. Shown are the mean (solid black line), the boundaries that encompass 50% (pink) and 90% (orange) of the data, and the maximum/minimum of all solutions (gray dashed). The surface tissue factor concentration is 15 fmol/cm² in (A) and 5 fmol/cm² in (B). Image Courtesy: Link et al.[38,64]
Is the Model Useful?
In addition to testing model robustness, sensitivity analysis of model parameters
within known ranges can also be used to make predictions. Let us return to our hypothesis
that normal variations in plasma protein levels can significantly modify thrombin
generation in hemophilia A. We performed a similar global sensitivity analysis in which the plasma levels of clotting factors were varied simultaneously but the FVIII level was fixed at a low value (1% of normal). We used 1 nM as a critical thrombin
concentration because it can activate platelets through protease-activated receptor
1.[63 ] The result of this was the prediction that prothrombin and FV levels have the strongest
effect on thrombin generation when FVIII is low (1%).[64 ] As one might intuit, high prothrombin levels were associated with increased thrombin
generation. Surprisingly, low FV levels in the range of 50% to 70% were necessary to push thrombin concentrations
above 1 nM in the model, while prothrombin levels near the high end of the normal
range enhanced this effect. We verified the model's unexpected prediction with in vitro flow assays that serve as an experimental analog of the model. In those assays,
whole blood samples from individuals with FVIII deficiencies were perfused over a
collagen-TF surface. Treatment with a partial function-blocking antibody against FV
resulted in significant fibrin deposition, which was further enhanced by adding exogenous
prothrombin.
With the model prediction experimentally verified, we now want to use our model as
a tool that helps discover possible mechanisms. Studying the model in greater detail
revealed a mechanism that might explain the counterintuitive result that low normal
levels of FV could enhance thrombin generation in hemophilia: substrate competition
for FXa. We have since hypothesized two other possible mechanisms ([Table 1 ]): inhibition of FVIIIa by activated protein C (APC) and TFPIα associated with FV.
The mechanism revealed by the model is a consequence of the fact that the initial
FXa generated by TF:FVIIa has two substrates, FV and FVIII. When FV levels are reduced
from normal to low-normal, more FXa is available to convert more FVIII to FVIIIa.
This in turn results in more FVIIIa:FIXa, which yields more FXa and subsequently FVa:FXa,
ultimately producing more thrombin. The second potential mechanism is a consequence
of FV's role as a cofactor, along with protein S, for APC in FVIIIa degradation in
the tenase (FVIIIa:FIXa) complex.[65 ] In this mechanism, reduced FV levels would result in less FVIIIa degradation and
consequently more thrombin generation. Finally, the third potential mechanism stems
from reports that TFPIα may be associated with circulating FV.[66,67] In this mechanism, reduced FV levels would result in reduced TFPIα and thus reduced
inhibition of TF:FVIIa and FVa:FXa. The FXa substrate competition mechanism is what
we predicted from the model in its current state. Although our model currently includes
APC inhibition of FVIIIa and FVa, and TFPI inhibition of FXa and thus of TF:FVIIa, it considers neither APC inhibition of FVa and FVIIIa while they are in the tenase or prothrombinase complex nor TFPI inhibition of prothrombinase formation. As such, the
model has revealed several mechanisms to study in future research.
Table 1 Summary of potential mechanisms that explain how low normal FV enhances thrombin generation in hemophilia A
Mechanism: FV as a FXa substrate. Explanation: substrate competition for FV and FVIII by the initial FXa generated by TF:FVIIa.
Mechanism: FV as a cofactor in FVIIIa degradation. Explanation: reduced FVIIIa degradation leads to more tenase (FVIIIa:FIXa) production.
Mechanism: FV as a carrier of TFPIα. Explanation: reduced plasma FV correlates with reduced plasma TFPIα, leading to less inhibition of TF:FVIIa and FVa:FXa.
Abbreviations: APC, activated protein C; FV, factor V; TFPI, tissue factor pathway inhibitor.
The Goals and Value of Computational Models
The primary goal of the types of models described in this article is to make predictions,
not to merely agree with existing experimental observations. If the model predictions
are borne out in experiments, then the model quantitatively describes the system for
a certain set of conditions. All models eventually fail to be externally consistent
for some set of conditions, and at that point they must be revised. However, discrepancies
between model results and experimental observations should not be viewed as a failure
of a model, but rather as a seed for new discovery. The conversation between the models
and experiments results in new questions that, in many cases, would not arise otherwise.
Moreover, a model need not be consistent with every feature of the experimental data to be powerful. Our model of coagulation under flow does not include every known detail of coagulation biochemistry and platelet biology, yet it has made several interesting, and in some cases counterintuitive, predictions that were verified by experiments.
Even more exciting, in our opinion, is that the model predictions have motivated new
and unexplored lines of inquiry. Indeed, part of the art of model building lies in
the tension between providing adequate detail to describe the underlying physics,
chemistry, and biology without descending into a level of complexity that is computationally
intractable or stretches beyond the limits of what can be approximated quantitatively
using existing knowledge.
Conclusion
We have highlighted the benefits of taking a computational approach to studying hemostasis
in several ways. First, the novel prediction that low normal FV levels enhance thrombin generation in hemophilia A was made with a computational model and verified in experimental models. Second, a new mechanism was proposed that remains to be verified
experimentally. Third, new questions arose around the pro- and anticoagulant roles
of FV in the context of hemophilia. Fourth, our study motivated the consideration
of additional interactions that will be built into future versions of the model, which
will only refine it and enable it to further contribute to understanding the relative
importance of these three mechanisms under different conditions. This demonstrates
the power of computational modeling and sensitivity analysis to study hemostasis.
We believe that the key to our successes has, in part, been the faithful integration
of knowledge about biochemical and biophysical processes on many scales and in a computationally
tractable way; this careful modeling enabled us to use the model itself as a tool
to make predictions and hypothesize new mechanisms that motivate future experimental
studies. Additionally, we stress the importance of working in a multidisciplinary
team where ideas, expertise, data, and intuition are openly shared. We believe that
this type of collaborative work is likely to continue yielding success in predictive
and mechanistic studies of the hemostatic system.