Time-scale separation – Michaelis and Menten's old idea, still bearing fruit

Authors

Jeremy Gunawardena (corresponding author)

Department of Systems Biology, Harvard Medical School, Boston, MA, USA

Correspondence: J. Gunawardena, Department of Systems Biology, Harvard Medical School, 200 Longwood Avenue, Boston, MA 02115, USA
Fax: +1 617 432 5012; Tel: +1 617 432 4839; E-mail: jeremy@hms.harvard.edu


Abstract

Michaelis and Menten introduced to biochemistry the idea of time-scale separation, in which part of a system is assumed to be operating sufficiently fast compared to the rest so that it may be taken to have reached a steady state. This allows, in principle, the fast components to be eliminated, resulting in a simplified description of the system's behaviour. Similar ideas have been widely used in different areas of biology, including enzyme kinetics, protein allostery, receptor pharmacology, gene regulation and post-translational modification. However, the methods used have been independent and ad hoc. In the present study, we review the use of time-scale separation as a means to simplify the description of molecular complexity and discuss recent work setting out a single framework that unifies these separate calculations. The framework offers new capabilities for mathematical analysis and helps to do justice to Michaelis and Menten's insights about individual enzymes in the context of multi-enzyme biological systems.

Abbreviations

DB, detailed balance; GPCR, G-protein coupled receptor; KNF, Koshland, Némethy and Filmer; MTT, matrix-tree theorem; MWC, Monod, Wyman and Changeux; PTM, post-translational modification; tgCE, total generalised catalytic efficiency; tgMMC, total generalised Michaelis–Menten constant.

Time-scale separation in enzyme kinetics

The year 2013 is the 100th anniversary of Leonor Michaelis and Maud Menten's paper that introduced their famous mathematical formula for the rate of an enzymatic reaction [1, 2]. There are many instructive lessons in this paper [3], although the focus of the present review is on one particular aspect of what they did, which has ramified through biochemistry, pharmacology, molecular biology and, now, systems biology. Michaelis and Menten considered the reaction scheme:

$$E + S \;\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; ES \;\overset{k_2}{\longrightarrow}\; E + P \tag{1}$$

in which free enzyme, E, binds reversibly to a substrate, S, to form an intermediate enzyme–substrate complex, ES, which then irreversible breaks down to free the enzyme and yield the product, P. The labels on the reactions are the rate constants, assuming mass-action kinetics. Michaelis and Menten derived from this scheme their rate formula:

$$v = \frac{V_{\max}[S]}{K_M + [S]} \tag{2}$$

in which $V_{\max}$ is the maximal rate of the reaction, $V_{\max} = k_2 E_{\mathrm{tot}}$, where $E_{\mathrm{tot}}$ is the total amount of enzyme, and $K_M = (k_{-1} + k_2)/k_1$ is the Michaelis–Menten constant.

There is something quite odd about the relationship between the reaction scheme in Eqn (1) and the rate formula in Eqn (2). The former involves the free enzyme E and the enzyme–substrate complex ES but these components have disappeared from the latter. The only vestige left of the enzyme is its total amount $E_{\mathrm{tot}}$ in the expression for the maximal rate. The total amount does not change over the course of the reaction, and so is a conserved quantity, not a dynamical variable. All other enzyme-related components have been eliminated.

To pull off this sleight-of-hand, Michaelis and Menten used a time-scale separation. They assumed that, under their in vitro conditions, in which substrate was in considerable excess of enzyme, the enzyme–substrate complex would rapidly form and reach a quasi-steady state, in which d[ES]/dt = 0. We might say, informally, that the enzyme-related components are assumed to be fast variables, which rapidly reach steady state, whereas the substrate and product are slow variables, which adjust to this steady state. (Formally, in biochemical systems, it is the reactions which are fast or slow, relatively speaking, not the components, a point to which we will return below.) With a little algebra, which has struck terror into the hearts of generations of students, the enzyme-related components can be eliminated in favour of the total amount of enzyme $E_{\mathrm{tot}}$, from which Eqn (2) falls out.
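
As an illustration of the quasi-steady-state elimination (a sketch, not Michaelis and Menten's own calculation; the rate constants and concentrations are arbitrary values chosen so that substrate is in excess of enzyme), one can integrate the full mass-action equations for Eqn (1) and compare the exact instantaneous rate with the formula of Eqn (2):

```python
# Minimal sketch: compare the full mass-action kinetics of Eqn (1)
# with the Michaelis-Menten rate formula of Eqn (2).
# Rate constants and initial concentrations are arbitrary illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

k1, kr1, k2 = 1.0, 1.0, 0.1          # k1 (binding), k-1 (unbinding), k2 (catalysis)
E_tot, S0 = 0.1, 10.0                # substrate in large excess over enzyme

def mass_action(t, y):
    S, ES, P = y
    E = E_tot - ES                   # free enzyme from the conservation law
    dS  = -k1 * E * S + kr1 * ES
    dES =  k1 * E * S - (kr1 + k2) * ES
    dP  =  k2 * ES
    return [dS, dES, dP]

sol = solve_ivp(mass_action, (0.0, 50.0), [S0, 0.0, 0.0], dense_output=True)

Km, Vmax = (kr1 + k2) / k1, k2 * E_tot
for t in (1.0, 5.0, 20.0):
    S, ES, P = sol.sol(t)
    full_rate = k2 * ES                  # exact instantaneous rate d[P]/dt
    mm_rate = Vmax * S / (Km + S)        # Eqn (2) evaluated at the current [S]
    print(f"t={t:5.1f}  full={full_rate:.4f}  Michaelis-Menten={mm_rate:.4f}")
```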

Two historical points should be made here. First, this is not quite what Michaelis and Menten did. They used a different time-scale separation (a rapid equilibrium assumption) and it was Briggs and Haldane who suggested the more appropriate steady-state assumption that is now standard [4]. Second, they were not quite the first to use time-scale separation, as we will discuss below, although they were certainly the first in terms of influence.

Enzymologists rapidly took up the method of time-scale separation to analyze more complicated reaction schemes than that in Eqn (1) and there are now enzyme-rate formulas that cover a wide range of enzymological contexts and include the impact of inhibitors and other kinds of effectors [5]. An interesting feature of these formulas is that they are always rational functions in the slow variables. That is, the right-hand side is a ratio in which the numerator and the denominator are both sums of products (polynomials) in the concentrations of the slow variables. This may not appear to be particularly remarkable for the original Michaelis–Menten formula in Eqn (2) but it is a striking and universal feature of more complex formulas. The necessary algebraic manipulations, which get very intricate very quickly, were eventually codified in the King–Altman method [6], to which we will return.

Eliminating variables such as ES is, of course, a very good thing because, at the time of Michaelis and Menten, nobody knew anything about them. They were theoretical entities, suggested by the experimental data. It is often forgotten that Michaelis and Menten never characterized the enzyme–substrate complex (for the enzyme invertase which they studied) and they never measured its rates of assembly and disassembly ($k_1$ and $k_{-1}$ in Eqn (1)). The first person to do so, for the enzyme peroxidase, was Britton Chance, no less than 30 years after Michaelis and Menten [7]. This did not stop enzymologists from enthusiastically using such theoretical entities in the intervening years. Biology is actually more theoretical than physics; biologists just like to pretend otherwise [8].

The Michaelis–Menten formula has been hugely important [1]. It is perhaps the one quantitative mathematical statement that any biologist working at the molecular level would be expected to know. Unfortunately, its very familiarity has bred, not respect, but, rather, ignorance. The elimination of the enzyme–substrate complex has meant that such complexes have been lost from view, so that enzyme sequestration is all too readily overlooked [9, 10]. The formula is also widely used in the wrong contexts. In particular, Michaelis and Menten assumed that product formation was irreversible because they measured initial reaction rates when product was negligible. Yet, the formula is habitually used in contexts, such as phosphorylation and dephosphorylation cycles, in which the amount of product could be substantial [11]. Michaelis and Menten would have been horrified. One of the goals of this review is to explain how we can start to do justice to what Michaelis and Menten taught us a century ago. Before setting out to do so, let us examine some of the other contexts in which time-scale separation has been used.

Other applications of time-scale separation

Allosteric proteins

If we fast forward by 50 years from 1913 to 1963, we come to Jacques Monod, Jean-Pierre Changeux and François Jacob's famous paper [12] on what Monod would later call ‘microscopic cybernetics’ [13]. Monod, Changeux and Jacob pointed out that, for a biosynthetic pathway to balance supply against demand through feedback inhibition, it was necessary for enzymes to be regulated by effectors that were chemically different from their normal substrates or products. They introduced the idea of allostery, in which an effector binds at a site distinct from the normal catalytic site and triggers or stabilizes a conformational change in the enzyme, which then alters its catalytic activity.

The general form of such an allosteric model is that an enzyme, or, more generally, a protein such as haemoglobin that performs a transport function rather than catalysis, can exist in multiple conformations, $c_1, \ldots, c_m$, and a ligand (or several such) can bind to multiple sites on the protein. If, for example, there is a single ligand with k binding sites, then there are $2^k$ potential patterns of ligand binding, or ‘microstates’, per conformation. Not all conformations need be equally accessible to the ligand but, in principle, there could be a total of $m 2^k$ relevant microstates. There is much internal complexity, with the microstates playing a similar role to the enzyme–substrate complexes in enzyme kinetics.

To analyze such a system, the time-scale separation is made in which it is assumed that conformational transitions and ligand binding have reached thermodynamic equilibrium. Some microstates may have high activity, others low activity and the overall activity of the protein is taken to be an average over the equilibrium distribution of microstates. For a transport protein such as haemoglobin, an appropriate average is the fractional saturation: the proportion of sites that are bound by oxygen. Under the equilibrium assumption, the microstates can be eliminated, which is to say that, in a similar way to the intermediate complexes in enzyme kinetics, they can be calculated in terms of binding affinities and conserved quantities such as the total amount of protein. This yields formulas for the average activity as a function of the ligand concentration. As with enzyme kinetics, the functions are rational.

Specific allosteric models make specific assumptions within this general set-up. In Monod, Wyman and Changeux's (MWC) ‘plausible model’, the protein is assumed to be a multimer that exists in two quaternary conformations, traditionally called ‘relaxed’ and ‘tense’, in which the tertiary structure of the individual monomers does not change [14]. The equilibrium is assumed to exist prior to ligand binding, so that ligand binding is a selective process that biases the equilibrium towards the relaxed or the tense conformation. In Koshland, Némethy and Filmer's (KNF) more general model, tertiary changes to the monomers are permitted and, as might have been expected from Dan Koshland's previous introduction of the induced-fit mechanism [15], ligand binding is an instructive process possibly inducing conformational changes that did not previously exist [16]. Although the MWC model is often treated as the standard description of allostery, presumably because the algebra is less terrifying than for the KNF model, the former cannot accommodate negative cooperativity, whereas the latter can, as Dan Koshland often pointed out [17].
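
To make the averaging over microstates concrete, here is a minimal sketch of the MWC calculation of fractional saturation for a two-conformation multimer, using the standard MWC formula; the number of sites, the allosteric constant and the dissociation constants below are hypothetical values chosen for illustration.

```python
# Minimal sketch of the MWC model: fractional saturation as an average over
# microstates of the relaxed (R) and tense (T) conformations at equilibrium.
# Parameter values are hypothetical, purely for illustration.
n = 4          # number of ligand-binding sites (e.g. a tetramer)
L0 = 1000.0    # allosteric constant: [T]/[R] in the absence of ligand
KR = 1.0       # ligand dissociation constant of the R conformation
c = 0.01       # KR/KT: T binds ligand 100-fold more weakly than R

def fractional_saturation(ligand):
    a = ligand / KR                           # normalised ligand concentration
    num = a * (1 + a) ** (n - 1) + L0 * c * a * (1 + c * a) ** (n - 1)
    den = (1 + a) ** n + L0 * (1 + c * a) ** n
    return num / den                          # average occupancy per site

for x in (0.1, 1.0, 3.0, 10.0):
    print(f"[L]={x:5.1f}  fractional saturation={fractional_saturation(x):.3f}")
```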

A variety of other allosteric models have been put forward over the years that generalize and mix and match the MWC and KNF assumptions [18-21]. In each case, thermodynamic equilibrium is assumed as a time-scale separation between the fast microstates and the slow interaction of the protein with its environment, although different methods are used to undertake the elimination of microstates and calculate the rational functions. In some cases, elementary algebra suffices, whereas, in others, equilibrium statistical mechanics is called upon, as discussed below. Allostery remains a potent conceptual idea with wide application across biological disciplines, even if the meaning of the term has broadened from the rather precise ideas of those who pioneered it.

Gene regulation

Allostery was not, of course, the only breakthrough to which Monod and Jacob contributed. If Crick and Watson revealed the structure of DNA, with its profound implications for genetics and heredity, Monod, Jacob and Lwoff, through their studies of the lac operon and of λ phage, established that genes could be turned on or off in response to environmental signals [22]. They were awarded the Nobel Prize in Physiology or Medicine in 1965 for this work. Although gene regulation was first revealed in unicellular microbes, it is fundamental to understanding multicellular development. A hepatocyte and a cardiomyocyte from the same organism have identical genotypes; their profound phenotypic difference arises from differential gene expression [23].

Let us fast forward again, by nearly 20 years, from allostery in 1963 to 1982, when Gary Ackers, Sandy Johnson and Madeleine Shea published their ‘quantitative model for gene regulation by λ phage repressor' [24]. The λ repressor, or master regulator protein CI, binds to the right operator region of λ phage and thereby regulates the mutually exclusive expression of its own cI gene and the neighbouring cro gene. Expression of the former leads to the phage behaving as a lysogen, which replicates passively by hitching a ride on the bacterial DNA, whereas expression of the latter leads to the lytic state, in which the virus actively creates more copies of itself and eventually kills its host. The lysis–lysogeny decision is a classic cellular decision process [25] from which new insights continue to emerge [26].

Ackers, Johnson and Shea created the first mathematical model for how expression of cI depended on the concentration of λ repressor. It was known that λ repressor bound in dimerized form to three specific sites in the operator, giving $2^3 = 8$ potential DNA microstates. The importance of cooperative interactions between repressor molecules bound at different sites had been uncovered in in vitro experiments undertaken by Sandy Johnson in Mark Ptashne's laboratory at Harvard [27]. The mathematical model was intended to reveal how that cooperativity contributed to the lysis–lysogeny decision. Ackers, Johnson and Shea made the very reasonable time-scale separation that repressor binding to DNA had reached thermodynamic equilibrium. They knew from the experimental studies that certain microstates repressed cI. They used equilibrium statistical mechanics to calculate the probability of each microstate in terms of a partition function, thereby eliminating the microstates and allowing calculation of the probability of repression of cI as a function of λ repressor concentration. The free energies of repressor binding to DNA and of cooperative interactions between repressor dimers at different sites, which are required to build up the partition function, were estimated from Johnson's previous experimental data. Here again, rational functions emerge from the calculation, albeit with numerical, as opposed to symbolic, coefficients.
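
A minimal sketch of the partition-function calculation: each microstate is weighted by a Boltzmann factor built from binding and cooperativity free energies and by the free-repressor concentration raised to the number of occupied sites, and the probability of any set of microstates follows by normalization. The three-site layout echoes the λ operator, but the free energies, the cooperativity term and the choice of which microstates count as repressing are hypothetical numbers for illustration, not Ackers, Johnson and Shea's fitted values.

```python
# Minimal sketch of the thermodynamic formalism: microstate probabilities from
# a partition function. Three binding sites give 2**3 = 8 microstates. The free
# energies, cooperativity term and "repressing" microstates are hypothetical.
import itertools
import numpy as np

RT = 0.593                      # kcal/mol at ~298 K
dG_site = [-12.5, -10.5, -9.5]  # hypothetical binding free energies per site (kcal/mol)
dG_coop = -2.0                  # hypothetical cooperativity when sites 1 and 2 are both bound

def weight(state, conc):
    """Boltzmann weight of a microstate at a given free-repressor concentration."""
    dG = sum(g for s, g in zip(state, dG_site) if s)
    if state[0] and state[1]:
        dG += dG_coop
    return np.exp(-dG / RT) * conc ** sum(state)

def prob_repressed(conc):
    states = list(itertools.product((0, 1), repeat=3))
    Z = sum(weight(s, conc) for s in states)              # partition function
    # Hypothetical readout: microstates with site 3 occupied repress transcription.
    return sum(weight(s, conc) for s in states if s[2]) / Z

for conc in (1e-10, 1e-9, 1e-8, 1e-7):
    print(f"[repressor]={conc:.0e} M  P(repressed)={prob_repressed(conc):.3f}")
```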

The ‘thermodynamic formalism’ developed in this work has been the starting point for much subsequent analysis of gene regulation. In later work, Shea and Ackers treated the RNA polymerase holoenzyme as another component that could bind DNA and thereby induce mRNA transcription [28]. This allowed a more nuanced treatment of how transcription factor binding could influence gene expression through cooperative interactions with RNA polymerase. In effect, mRNA transcription is treated as a slow process that averages over the equilibrium distribution of microstates. The formalism can accommodate multiple transcription factors each binding at multiple sites, yielding insights into combinatorial regulation [29] and the evolution of regulatory circuits [30]. Rob Phillips and colleagues elaborated the thermodynamic formalism for the bacterial context [31, 32].

Gene regulation is quite different in eukaryotes. DNA is no longer naked but is wrapped around nucleosomes and compacted into chromatin, which may exhibit varying accessibility to transcription factors [33]. Nucleosomes are dynamic entities that can be assembled, moved and disassembled and the tails of the histone proteins that make up the nucleosome octamers are festooned with a quite astonishing number of post-translational modifications (PTMs) [34]. It has been suggested that these combinatorial PTMs influence gene expression through some kind of ‘code’ [35, 36], or perhaps a ‘language’ [37], whose biochemical basis remains obscure [38]. Transcription factors both recruit and are recruited by accessory proteins and co-regulators, such as the massive Mediator complex and a variety of components that can reshape the local chromatin environment [39]. This enormously complex machine is a testament to the creative powers of evolution and, presumably, a necessary adjunct to the huge expansion of gene regulatory complexity that is found in moving from bacteria to unicellular microbes such as yeast to multicellular organisms such as animals [23].

Notwithstanding this increased complexity, the thermodynamic formalism has also been widely influential in studying eukaryotic gene regulation [40, 41]. For example, it has been used to analyze patterning in the early Drosophila embryo, where it appears that much can be explained by transcription factor binding [42-45]. It has even been extended to accommodate nucleosomes by treating them as components that bind to specific DNA sequences in competition with transcription factors [46].

Despite these successes, there is a fundamental problem with this approach. Eukaryotic mechanisms, such as nucleosome remodelling or histone PTM, are dissipative. They necessarily expend energy and may reach a steady state but never a thermodynamic equilibrium. The significance of dissipative mechanisms in the molecular realm was pointed out by John Hopfield in a seminal paper [47], in which he showed that certain information processing tasks (in his case, error reduction in transcription and translation) could not be undertaken at equilibrium but required dissipative expenditure of energy. It is very plausible that complex gene regulation in eukaryotes can undertake forms of information processing, such as precise embryonic patterning, which fundamentally depend on dissipative mechanisms. The thermodynamic formalism can shed no light on this because it is based on equilibrium assumptions. We will return to this issue below.

Pharmacology and receptor theory

Michaelis and Menten were not quite the first to use time-scale separation. They were anticipated in 1909 by an undergraduate student at Trinity College, Cambridge who was doing a research project with the physiologist John Newport Langley [48]. The student was Archibald Vivian Hill and this was his first published paper, for which he was sole author. Hill would become one of the founders of biophysics and a winner of the Nobel Prize in Physiology or Medicine in 1922 for his work on muscle physiology. We also know him for the widely used and abused Hill function and attendant Hill coefficient that came out of his work on haemoglobin.

Langley had been one of the first to suggest that the action of drugs on tissues came about through an interaction of the drug with a specific molecular receptor in the tissue [49, 50]. Hill considered the reversible binding of a drug L to a hypothetical receptor R:

$$R + L \;\rightleftharpoons\; RL$$

This is just Michaelis and Menten's reaction scheme in Eqn (1) without the catalysis. Hill made the time-scale separation that binding had reached equilibrium, whereas the downstream effect of ligand binding took place on a slower time scale through the equilibrium proportion of ligand-bound receptor. A similar but easier process of elimination to that of Michaelis and Menten allowed the latter proportion to be calculated and the familiar hyperbolic saturation of downstream effect with ligand concentration provided a good match to data that Hill acquired for the action of nicotine and of curare on muscle.

Here again, just as with Michaelis and Menten, the mathematical calculation provided evidence for the existence of a molecular receptor, although, here, it is R and not RL whose existence is being conjectured. Just as in enzyme kinetics, the theoretical idea of a receptor proved irresistible to pharmacologists who quickly appreciated that quantitative measurements and equilibrium binding models were the key to unravelling how drugs acted [51]. It would take nearly 70 years from Langley's suggestion before receptor molecules were actually shown to exist [8], during which time binding models were essential conceptual tools [52]. Bob Lefkowitz, who was awarded the 2012 Nobel Prize in Chemistry for his isolation and characterization of one of the first receptors, the β-adrenergic G-protein coupled receptor (GPCR), points out in his Nobel lecture (http://www.nobelprize.org/nobel_prizes/chemistry/laureates/2012/lefkowitz-lecture_slides.pdf) the significance of the ternary-complex binding model [53] for understanding GPCRs. Similar equilibrium binding models are basic ingredients in modern quantitative pharmacology [54] and, as above, rational functions emerge from the elimination that describe receptor activation as a function of ligand concentration.

As David Colquhoun recounts [55], quantitative pharmacology appears to have occupied a parallel universe to very similar studies in enzyme kinetics and allosteric proteins, with surprisingly little cross-fertilization despite similar underlying ideas. Models with distinct receptor conformations, strikingly close to the idea of protein allostery, emerged in classic studies by del Castillo and Katz in 1957 of the acetylcholine receptor [56], and played a key role in attempts to disentangle the notions of drug affinity and drug efficacy [57]. Similar models, albeit more complicated, have been suggested to explain the ‘collateral efficacy’ through which distinct ligands for GPCRs elicit distinct subsets of downstream effects [58]. The study of ion channels, which provides some of the most exquisite quantitative data through patch clamping, has been one context in which the classical Monod–Wyman–Changeux models of protein allostery and the equilibrium binding models of pharmacology have coalesced, aided, no doubt, by Jean-Pierre Changeux whose work bridged the two fields [59, 60].

Post-translational modification

Histones are not the only proteins subject to PTM. Indeed, it is hard to find a cellular protein which is not post-translationally modified. Not only are there many types of modification, including phosphorylation, acetylation, ADP-ribosylation, GlcNAcylation, ubiquitination, sumoylation, etc. [61], but also there are often many sites on a protein subject to the same modification. PTM is fundamentally dissipative with forward modification and reverse de-modification being catalysed by separate enzymes. The free energy for driving such processes ultimately comes from background cellular processes that maintain the concentration of donor molecules (such as ATP in the case of phosphorylation) sufficiently high compared to the concentrations of their hydrolysis products (such as ADP and inorganic phosphate). PTM provides a way for the structure of a protein to be altered dynamically in response to changing cellular conditions. It is a mechanism of escape from the constraint of genetic coding that appears to have been as essential for evolutionary novelty as the gene regulation discussed above [38].

The potential combinatorial complexity in multisite PTM is staggering. Considering just phosphorylation, which is merely a binary on/off modification in contrast to the further complexity that comes from iterated polypeptide modifications such as ubiquitination [38], a protein with n sites of modification has $2^n$ potential modification states, or ‘mod-forms’ [38]. The serine/arginine repetitive matrix factor Srrm2 has 300 experimentally detected phosphorylation sites, as reported on Phospho.ELM [62]. Although this example is perhaps extreme, it is not unusual for mammalian signalling proteins to have tens of sites of modification. Clearly, not all mod-forms may be present in any context but this begs the question of which mod-forms are present. Evidence from a variety of contexts shows that distinct mod-forms can exert distinct downstream effects; for example, by offering alternative recruitment patterns for modification-specific binding domains [38]. It is, therefore, the distribution of mod-forms that determines the overall effect of a modified protein and this distribution is regulated by the collective actions of the network of forward- and reverse-modifying enzymes.

We have been developing mathematical approaches to analyzing PTM. We followed the strategy that has been widely employed in the literature ever since ‘futile cycles’ of phosphorylation and dephosphorylation were first mathematically analyzed by Chock and Stadtman [63] and Goldbeter and Koshland [64]. We made the time-scale separation that modification and demodification are fast, whereas the downstream effects of modification are slow. In this way, modification and demodification can be assumed to have reached steady state. The time-scale separation is usually not mentioned explicitly in the literature but is always implicitly present whenever steady states are considered.

We found, to our surprise, that it was possible to eliminate much of the internal complexity at steady state [11, 65, 66]. For example, for a substrate S that is subject to modification by an enzyme E and demodification by an enzyme F on n sites, it was possible to write down a pair of equations for the free enzyme concentrations, [E] and [F], of the form:

$$[E] + \Phi_E([E],[F]) = E_{\mathrm{tot}}, \qquad [F] + \Phi_F([E],[F]) = F_{\mathrm{tot}} \tag{3}$$

which correspond to the conservation laws for the two enzymes [66]. The salient point here is that these conservation laws can be expressed solely in terms of [E] and [F] (and the total amount of substrate, $S_{\mathrm{tot}}$, which is hidden away inside $\Phi_E$ and $\Phi_F$). This is true no matter what the value of n. The substrate mod-forms and the intermediate complexes arising from the enzyme mechanisms have all been eliminated. They can be calculated from [E] and [F] by rational functions, once again. This exponential reduction in complexity allows rigorous conclusions to be drawn for systems with any number of sites that were previously beyond reach [66].

These calculations were conducted using ad hoc algebraic methods and it is fair to say, looking back, that we did not understand them. In particular, we were very puzzled as to why the rational functions that emerged had the property of ‘positivity’: they always gave a positive result for positive values of [E] and [F] [[66], supplementary information]. Of course, this is what they should do to be realistic. The rational functions in the previous examples also exhibit positivity. However, the algebraic methods that we used for PTM appeared to require miraculous cancellations for positivity to emerge. Miracles are a sign of some more fundamental principle at work. This first came to light for PTM [67], although its implications have turned out to be much broader [68], as we discuss below.

Time-scale separation in reality

Michaelis and Menten felt justified in making a time-scale separation because, under their experimental conditions, substrate was in excess over enzyme. The enzyme–substrate complex would be expected to form quickly and then remain fairly steady until much of the substrate had been consumed. The steady-state assumption seems very reasonable.

It is harder to justify the assumption in some of the other examples discussed above, especially as we move away from the in vitro setting. For example, in gene regulation, especially in eukaryotes, is it really the case that transcription factor binding to DNA has reached equilibrium before gene expression begins? Probably not. However, the time-scale separation assumption has still been extremely useful in offering a way to think about the system and providing quantitative formulas that can be experimentally tested. Gertz, Siggia and Cohen used the thermodynamic formalism to analyze libraries of synthetic promoters with up to five transcription factors in yeast, and found that it could explain as much as 75% of the available variation in gene expression, after allowing for the variation between biological replicates [69]. Although this is not a direct test of equilibrium, it suggests that the assumption is not awful.

As we move from individual enzymes to multi-enzyme systems, the value of time-scale separation lies more in providing a methodology for simplifying the description of molecular complexity, even when we might not know that the time scales are well separated. In other words, it shifts the burden of the argument from the assumptions that we make to the conclusions we draw. In biological contexts where the experimental capabilities are well developed, such as gene regulation in the model organisms of yeast and flies, this strategy has been very successful [43, 45, 70].

The mathematical justification for time-scale separation as a good approximation relies on the method of singular perturbation or the use of Tikhonov's Theorem, in which it is assumed that the separation between fast and slow components increases as a small parameter goes to zero [71]. For simple biochemical systems such as the Michaelis–Menten scheme in Eqn (1), appropriate parameters can be readily identified [72, 73]. However, biochemical systems come to us with reactions that may be relatively fast or slow, not components; any given component may be influenced by a mix of fast and slow reactions. Reich and Sel'kov argued by example for the existence of a time-hierarchy of ultrafast, fast and slow components in core metabolism, to which Tikhonov's Theorem could be applied [74]. More recently, Lee and Othmer, using the more general framework of chemical reaction network theory, showed how a time-scale separation on reactions could be reorganized into one on components, in a form suitable for Tikhonov's Theorem [75]. It has also been a theoretical and experimental concern to know how long it takes for a steady state to be effectively reached. Easterby was the first to define a transition time to steady state [76] and subsequent studies have extended this idea to wider classes of networks with more general boundary conditions, within a predominantly metabolic context [77]. Although of interest in their own right, these issues lie outside the scope of the present review, which is concerned with the process of elimination at steady state, to which we now turn.

The linear framework

Laplacian dynamics on graphs

The applications of time-scale separation discussed above cover a wide range of biological areas and differ considerably in their biochemical details. Some are assumed to be at thermodynamic equilibrium whereas others are assumed to be at steady state far from equilibrium. With the exception of the statistical mechanical procedures sometimes used at equilibrium, the calculations are undertaken by ad hoc methods among which it is difficult to see any common ground. Nevertheless, there are tantalizing similarities. In each example, time-scale separation enables some of the internal complexity to be eliminated, with these internal quantities being explicitly calculated in terms of rate constants and conserved quantities, and the resulting formulas are invariably rational functions that exhibit positivity.

These similarities are not an accident. All of the calculations above are instances of a single mathematical procedure, which we call ‘the linear framework’ [68]. This not only unifies many biological studies that were previously thought to be separate, but also provides a way to do new kinds of analysis.

The framework starts from a labelled, directed graph G, consisting of vertices $1, \ldots, n$ and directed edges, $i \xrightarrow{a} j$, decorated with labels $a$ (Fig. 1). The graph should have no self loops, $i \rightarrow i$, and we will take for granted that it is connected, so that it does not consist of separate pieces between which there are no edges. For the moment, the labels are just symbols corresponding to positive numbers and having units of time$^{-1}$.

Figure 1.

The linear framework. Left: a labelled directed graph, which is strongly connected. Right: Laplacian dynamics on this graph, in which the edges are treated as chemical reactions under mass-action kinetics. The resulting matrix is the Laplacian matrix of the graph.

We can define a dynamics on G as follows: place concentrations of material at the vertices and consider each edge to be a chemical reaction under mass-action kinetics with the label as the rate constant. The reaction $i \xrightarrow{a} j$ will then convert i to j at a rate given by $a x_i$, where $x_i$ is the concentration of material at vertex i. (Such concentrations are implicitly functions of time and we should write $x_i(t)$. We mostly avoid doing so as not to clutter up the notation. The meaning should be clear from the context.) Because each edge has only one source vertex, the dynamics is described by a linear equation:

$$\frac{dx}{dt} = \mathcal{L}(G).x \tag{4}$$

in which x is the column vector of concentrations and $\mathcal{L}(G)$ is an $n \times n$ matrix called the Laplacian matrix of G. (The notation A.B signifies the product of matrices A and B, with a column vector being treated as an $n \times 1$ matrix.)
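
As a sketch of how the construction can be carried out in practice (the three-vertex graph and its numerical labels below are made up for illustration and are not the graph of Fig. 1), the Laplacian matrix can be assembled directly from the edge list: an edge $i \xrightarrow{a} j$ contributes $+a$ to entry $(j, i)$ and $-a$ to entry $(i, i)$, so each column sums to zero and Eqn (4) can be integrated directly.

```python
# Minimal sketch: build the Laplacian matrix L(G) of a labelled, directed graph
# and use it for the dynamics dx/dt = L(G).x of Eqn (4). The example graph and
# its numerical labels are made up for illustration.
import numpy as np

n = 3
# edges as (source, target, label); vertices are 0..n-1
edges = [(0, 1, 2.0), (1, 0, 1.0), (1, 2, 0.5), (2, 0, 0.2), (0, 2, 0.3)]

L = np.zeros((n, n))
for i, j, a in edges:
    L[j, i] += a     # material arriving at j from i
    L[i, i] -= a     # material leaving i

print("column sums:", L.sum(axis=0))   # all zero: total material is conserved

# crude forward-Euler integration of dx/dt = L.x from an arbitrary initial condition
x = np.array([1.0, 0.0, 0.0])
dt = 0.01
for _ in range(10000):
    x = x + dt * (L @ x)
print("x(t) approaching steady state:", x, " total:", x.sum())
```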

Concepts similar to the Laplacian were first introduced in Gustav Kirchhoff's studies of electrical circuits [78]. Laplacian matrices have been widely studied in graph theory [79] but with differing conventions and normalizations. The ‘laplacian’ appellation stems from the fact that, with suitable normalization, they can be seen as discretizations of the Laplace operator [80].

It may appear implausible that such linear chemistry has anything to do with the nonlinear biochemical systems considered in the examples above. Nonlinearity is imported into the framework through the labels. These may be arbitrary rational expressions composed of actual biochemical rate constants and concentrations of actual biochemical species. The only constraint is the ‘uncoupling condition’, which states that any component whose concentration appears in a label cannot also be a component that is represented by a vertex in the graph. For example, if the vertices represent fast components in a time-scale separation, then the labels could include the concentrations of slow components. Uncoupling is essential to preserve the linearity of the dynamics.

The key to applying the linear framework is to set up the labels in such a way that the uncoupling condition is satisfied and the linear Laplacian dynamics given by Eqn (4) reproduces the nonlinear dynamics of the actual biochemistry. Let us see how this works in enzyme kinetics for the reversible Michaelis–Menten scheme:

$$E + S \;\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; ES \;\underset{k_{-2}}{\overset{k_2}{\rightleftharpoons}}\; E + P \tag{5}$$

We construct a labelled, directed graph on the fast components, which, in this case, are the free enzyme E and the intermediate enzyme–substrate complex ES, with the labelled edges summarizing the biochemistry in Eqn (5):

$$E \;\underset{k_{-1} + k_2}{\overset{k_1[S] + k_{-2}[P]}{\rightleftharpoons}}\; ES \tag{6}$$

This graph is strongly connected and the uncoupling condition is satisfied. The Laplacian dynamics is given by:

$$\frac{d}{dt}\begin{pmatrix} [E] \\ [ES] \end{pmatrix} = \begin{pmatrix} -(k_1[S] + k_{-2}[P]) & k_{-1} + k_2 \\ k_1[S] + k_{-2}[P] & -(k_{-1} + k_2) \end{pmatrix} \begin{pmatrix} [E] \\ [ES] \end{pmatrix} \tag{7}$$

and it is easy to see that these linear equations correspond precisely to the nonlinear equations coming from the reaction scheme in Eqn (5) under mass-action kinetics. You might say that nonlinearity with simple rate constants has been traded for linearity with complicated labels.

Laplacian matrices such as that in Eqn (7) and Fig. 1 have a characteristic structure with the sum of the entries in each column being zero. This is because of a conservation law. Because material is neither created nor destroyed but simply moved around the vertices, its total amount is conserved at all times:

$$x_1(t) + x_2(t) + \cdots + x_n(t) = x_{\mathrm{tot}} \tag{8}$$

It follows that $\mathbf{1}.dx/dt = 0$, where $\mathbf{1}$ is the all-ones row vector. Because x is arbitrary, it follows from Eqn (4) that $\mathbf{1}.\mathcal{L}(G) = 0$.

Steady states and elimination

Time-scale separation requires calculation of a steady state, corresponding to a vector $x^*$ such that $dx^*/dt = 0$. Equivalently, from Eqn (4), $\mathcal{L}(G).x^* = 0$, so that $x^*$ is in the kernel of the Laplacian: $x^* \in \ker \mathcal{L}(G)$. The kernel depends in an interesting way on the structure of G. Recall that a graph is strongly connected if, given any two distinct vertices i and j, there is a sequence of consecutive edges from i to j all pointing in the same direction. Because i and j are arbitrary, there must be a corresponding directed sequence from j to i. The graph in Fig. 1 is strongly connected, as are all the examples in Fig. 2. It can be shown that, if G is strongly connected, then the kernel of the Laplacian is one dimensional [[67], lemma 1] or [[83], proposition 3]:

$$\dim \ker \mathcal{L}(G) = 1 \tag{9}$$
Figure 2.

Labelled, directed graphs from different contexts. Labels are omitted for clarity. Black edges have labels that are biochemical rate constants, whereas blue edges have labels that are algebraic expressions involving concentrations of other components. (A) Enzyme kinetics, in which an enzyme E acts irreversibly and follows a random-order bi-bi mechanism, as studied previously [81]. (B) Gene regulation, in which a single transcription factor binds to two sites. (C) Protein allostery, in which a protein exists in two conformations, R and T, with ligand binding at two sites. (D) Post-translational modification, in which a substrate S is modified and demodified in random order on two sites, as studied previously [65]. (E) Receptor pharmacology, with the extended ternary-complex model, adapted with permission [82] (fig. 10). R, receptor; R*, allosteric conformation; H, hormone; G, G-protein.

This result has remarkable implications. We can place any concentrations of matter at the vertices initially; there are as many degrees of freedom available as there are vertices in the graph. Once the dynamics gets under way, it eventually reaches a steady state. [This is true for any graph and any initial condition [83].] In a strongly connected graph, once a steady state has been reached, Eqn (9) tells us that only one generalized degree of freedom is left. Aside from this, the steady state is completely determined by the structure of the graph and is independent of the initial conditions. Equation (9) is the key to elimination.

To derive formulas for the eliminated quantities, it is necessary to calculate a standard basis element $\rho \in \ker \mathcal{L}(G)$ from the structure of G. We will see how to do this below and we will find that $\rho$ is a rational function of the labels. If $x^*$ is any steady state of the dynamics then, because the kernel is one-dimensional, $x^* = \lambda\rho$, where λ is some scalar, which represents the remaining generalized degree of freedom. It follows from the conservation law in Eqn (8) that $\lambda = x_{\mathrm{tot}}/(\rho_1 + \cdots + \rho_n)$, where $x_{\mathrm{tot}} = x_1 + \cdots + x_n$ is the conserved total, so that:

$$x_i^* = \left(\frac{\rho_i}{\rho_1 + \cdots + \rho_n}\right) x_{\mathrm{tot}} \tag{10}$$

In other words, the steady-state value at any vertex can be eliminated in favour of the expression $\rho_i/(\rho_1 + \cdots + \rho_n)$ and the conserved quantity $x_{\mathrm{tot}}$. Since $\rho_i$ is a rational function of the labels, so too is the fraction $\rho_i/(\rho_1 + \cdots + \rho_n)$.
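
To make the elimination concrete, here is a minimal numerical sketch of Eqns (9) and (10), reusing the illustrative graph built above: the kernel of the Laplacian is computed with scipy and found to be one dimensional, and rescaling its basis element by the conserved total gives the steady state.

```python
# Minimal sketch of Eqns (9) and (10): for a strongly connected graph, ker L(G)
# is one dimensional, and any steady state is the basis element rescaled by the
# conserved total. Graph and labels are the illustrative ones used above.
import numpy as np
from scipy.linalg import null_space

n = 3
edges = [(0, 1, 2.0), (1, 0, 1.0), (1, 2, 0.5), (2, 0, 0.2), (0, 2, 0.3)]
L = np.zeros((n, n))
for i, j, a in edges:
    L[j, i] += a
    L[i, i] -= a

ker = null_space(L)                 # columns span the kernel of L(G)
assert ker.shape[1] == 1            # one dimensional, as Eqn (9) asserts
rho = ker[:, 0]
rho = rho / np.sign(rho[0])         # choose the positive basis element

x_tot = 5.0                         # conserved total amount of material
x_star = rho / rho.sum() * x_tot    # Eqn (10): steady state from rho and x_tot
print("steady state:", x_star, " L.x* ~ 0:", L @ x_star)
```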

The examples discussed above can all be derived from Eqn (10). In other words, underlying each example is a labelled, directed graph of the form considered here (Fig. 2). This graph may not be explicitly described in the corresponding papers but it can be constructed. In each case, the graph is strongly connected and satisfies the uncoupling condition. Equation (10) expresses the eliminated quantities as rational functions of the labels and the conserved total and this is where the rational functions in the examples all come from.

Equations (9) and (10) are surprisingly powerful. A strongly connected graph can be arbitrarily complex and have an arbitrary number of vertices; nevertheless, any steady state has only one degree of freedom. The potential reduction in complexity is unbounded, making the elimination procedure in Eqn (10) of particular significance with respect to tackling the enormous combinatorial complexity that we encounter at the molecular level. This is evident in post-translational modification, where the number of modification states may be astronomical. Nevertheless, at steady state, the effective algebraic complexity depends only on the number of enzymes, as in Eqn (3).

The two equations in Eqn (3) are derived from the conservation laws for the forward and the reverse enzyme in the post-translational modification system. These equations are highly nonlinear and are an aspect of the ‘linear’ framework that we have glossed over up to now. The labels in the graph may contain concentrations of components that are not represented by vertices in the graph. How these labels are dealt with depends on the context. In enzyme kinetics, the labels contain the concentrations of slow components. Quantities of interest, such as reaction rates, can be calculated from the eliminated components in terms of these slow components and nothing further is required [see below, for the calculation of a rate formula in Eqn (16)]. In ligand binding contexts, such as allosteric proteins or gene regulation, the ligand is itself a fast component, although not one that is represented by a vertex in the graph. The equilibrium value for the free ligand concentration can be calculated from the conservation law for the total amount of ligand. If ligand is in abundance, so that depletion can be ignored, then $[L] \approx L_{\mathrm{tot}}$; if not, the conservation equation must be solved to obtain the equilibrium value of [L]. In post-translational modification, the labels contain the concentrations of free enzymes that are also fast components. Here too, the steady-state values of these are specified by the conservation laws for the enzymes, as in Eqn (3). It is the nonlinearity of these equations that permits multiple solutions and the existence of multiple steady states [66, 84].

We see from this that the linear framework is not entirely linear. However, the linearity is perhaps its most surprising aspect. Linearity is largely invisible in the examples discussed above. One never thinks, for example, of Michaelis and Menten's calculation as being linear. In retrospect, the linearity is crucial. It is what makes elimination of internal complexity feasible.

Calculating kernels

It remains to calculate a standard basis element $\rho \in \ker \mathcal{L}(G)$. On the face of it, this should be an exercise in linear algebra. However, this relies on determinants, which are sums of terms with alternating signs. Miraculously, positive and negative terms somehow cancel out, leaving only a sum of positive terms. This is the origin of the positivity that was noted above. This miracle turns out to be a special property of the Laplacian matrix. It is easier to use two other procedures for calculating ρ, in which the positivity is immediately manifest without the need for any cancellations.

The simplest way to calculate ρ is when thermodynamic equilibrium can be assumed. In this case, the graph must have a particular structure which comes from the principle of detailed balance (DB). This states that if a chemical reaction is at thermodynamic equilibrium, then it must be reversible and the pair of reversible reactions must be at equilibrium independently of any other reactions in which the substrates and products are participating. DB was put forward in the chemical context by Gilbert Lewis to rule out thermodynamically paradoxical states in chemical reaction networks [85]. He made the following charming argument based on chemical intuition and time-scale separation. He imagined that a reaction within a network had a catalyst increasing its rate, and the rate of its reverse reaction, so much that this reaction was working much faster than others in the network. The reversible reaction would then be at equilibrium independently of the rest of the network. Not surprisingly, this argument did not convince the physicists! DB was subsequently shown to be a consequence of ‘microscopic reversibility’, or time-reversal symmetry, in the laws of physics [86] and must be considered as a fundamental physical principle. It imposes restrictions on the biochemical rate constants, as we will see below, and many remarkable conclusions have been drawn by failing to enforce it at equilibrium.

If a graph G represents a context in which thermodynamic equilibrium is reached, then each edge must be reversible. If we consider any pair of reversible edges:

$$i \;\underset{b}{\overset{a}{\rightleftharpoons}}\; j \tag{11}$$

in an equilibrium state $x^*$, then, according to DB, $a x_i^* = b x_j^*$. The equilibrium state can then be calculated by choosing a reference vertex, say vertex 1, and choosing any path of reversible edges from 1 to i:

$$1 = i_0 \;\underset{b_1}{\overset{a_1}{\rightleftharpoons}}\; i_1 \;\underset{b_2}{\overset{a_2}{\rightleftharpoons}}\; i_2 \;\rightleftharpoons\; \cdots \;\underset{b_p}{\overset{a_p}{\rightleftharpoons}}\; i_p = i$$

Applying DB repeatedly to each reversible pair, we find that:

$$x_i^* = \left(\frac{a_p}{b_p}\right)\cdots\left(\frac{a_2}{b_2}\right)\left(\frac{a_1}{b_1}\right) x_1^* \tag{12}$$

There may, of course, be more than one path of reversible edges from 1 to i. It is a consequence of DB that, no matter which path is used, we obtain the same answer for $x_i^*$. It is necessary and sufficient for DB to hold for any equilibrium state that the graph G satisfies the 'cycle condition': given any cycle of reversible edges, the product of the labels going clockwise around the cycle must equal the product of the labels going counterclockwise [68]. It is not difficult to see that this implies that the calculation in Eqn (12) is independent of the chosen path. The cycle condition is a fundamental constraint on the labels and, through them, on the underlying biochemical rate constants. Fig. 2B,C,E shows examples that are treated at equilibrium and must satisfy the cycle condition, whereas the examples in Fig. 2A,D, which are dissipative, would not have to.

If we let $\mu_i$ be the expression given by the product of the terms in brackets in Eqn (12), then $x_i^* = \mu_i x_1^*$. (Note that $\mu_1 = 1$.) Because $x_1^*$ is a common factor in each of these expressions, we can obtain a standard element in $\ker \mathcal{L}(G)$ by setting $\rho_i = \mu_i$. Here, the positivity is obvious.
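
A minimal sketch of this equilibrium calculation: for a reversible graph whose labels satisfy the cycle condition, Eqn (12) gives each $\rho_i$ as a product of label ratios along any path from the reference vertex, and the answer is path independent. The three-vertex cycle and its labels below are made up so that the cycle condition holds.

```python
# Minimal sketch of the detailed-balance calculation in Eqn (12): the
# equilibrium basis element is a product of label ratios (a/b) along any path
# of reversible edges from the reference vertex. The triangle below has
# made-up labels chosen so that the cycle condition holds.
# Reversible edges: (i, j, a, b) means i -> j with label a and j -> i with label b.
edges = [
    (0, 1, 2.0, 1.0),   # ratio 0 -> 1 is 2
    (1, 2, 3.0, 1.0),   # ratio 1 -> 2 is 3
    (0, 2, 6.0, 1.0),   # ratio 0 -> 2 is 6 = 2 * 3, so the cycle condition holds
]

ratio = {}
for i, j, a, b in edges:
    ratio[(i, j)] = a / b
    ratio[(j, i)] = b / a

# cycle condition: product of ratios around the cycle 0 -> 1 -> 2 -> 0 equals 1
print("cycle product:", ratio[(0, 1)] * ratio[(1, 2)] * ratio[(2, 0)])

# rho_2 computed along two different paths from the reference vertex 0
path_a = ratio[(0, 1)] * ratio[(1, 2)]   # 0 -> 1 -> 2
path_b = ratio[(0, 2)]                   # 0 -> 2 directly
print("rho_2 via the two paths:", path_a, path_b)   # equal, as DB requires
```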

The thermodynamic formalism is essentially an additive version of this calculation. The connection between the two methods comes through van't Hoff's law, which says that, for a reversible reaction such as that in Eqn (11) which can reach equilibrium:

$$\frac{a}{b} = \exp\left(-\frac{\Delta U}{k_B T}\right) \tag{13}$$

where ΔU is the free energy difference between the two vertices. The quantity $\mu_i$, which is a product of terms of the form (a/b) in Eqn (12), corresponds through the exponential in Eqn (13) to a sum of free energies. At equilibrium, the thermodynamic formalism and the linear framework are equivalent and it is a matter of preference whether one calculates multiplicatively like a chemist would using reactions or additively like a physicist would using free energies [68].

It is another matter if the system does not reach thermodynamic equilibrium but, instead, reaches a steady state that is far from equilibrium. There is then no requirement for DB to hold and we can obtain graphs with irreversible edges. Equilibrium thermodynamics cannot help us.

When G is strongly connected, a standard basis element $\rho \in \ker \mathcal{L}(G)$ can be calculated by using the matrix-tree theorem (MTT) [67, 68, 83]: $\rho_i$ is given by taking a spanning tree rooted at i, multiplying together the labels on its edges and then adding such expressions over all spanning trees rooted at i [68].

A spanning tree is a subgraph of G that contains every vertex in G (spanning) and has no cycles when edge directions are ignored (tree). It is rooted at i if i is the only vertex with no outgoing edges. A graph is strongly connected if, and only if, every vertex has a rooted spanning tree [[83], lemma 1]. If we let $\Theta_i(G)$ denote the set of spanning trees of G rooted at i, then, in symbols, the MTT implies that:

$$\rho_i = \sum_{T \in \Theta_i(G)} \left(\prod_{j \xrightarrow{a} k \,\in\, T} a\right) \tag{14}$$

Note that all the terms are positive. The MTT takes care of all the cancellations [[83], appendix]. For the reversible Michaelis–Menten scheme in Eqn (5), it is easy to see that there is only one spanning tree at each vertex and that, according to Eqn  (14),

$$\rho = (k_{-1} + k_2,\; k_1[S] + k_{-2}[P]) \tag{15}$$

Applying the elimination formula in Eqn (10), we find that, for any steady state:

$$[E] = \frac{(k_{-1} + k_2)\,E_{\mathrm{tot}}}{k_1[S] + k_{-2}[P] + k_{-1} + k_2}, \qquad [ES] = \frac{(k_1[S] + k_{-2}[P])\,E_{\mathrm{tot}}}{k_1[S] + k_{-2}[P] + k_{-1} + k_2}

Once a steady state has been reached, the rate of the reaction is given by:

$$\frac{d[P]}{dt} = k_2[ES] - k_{-2}[P][E]$$

from which one obtains by substitution the reversible Michaelis–Menten rate formula:

$$v = \frac{\left(\dfrac{V_S}{K_S}\right)[S] - \left(\dfrac{V_P}{K_P}\right)[P]}{1 + \dfrac{[S]}{K_S} + \dfrac{[P]}{K_P}} \tag{16}$$

in which $V_S = k_2 E_{\mathrm{tot}}$, $K_S = (k_{-1} + k_2)/k_1$, $V_P = k_{-1} E_{\mathrm{tot}}$ and $K_P = (k_{-1} + k_2)/k_{-2}$. The fast components, E and ES, have been eliminated, leaving only the slow components S and P and the rational function in Eqn (15).

The reversible Michaelis–Menten scheme is sufficiently simple that the steady state can be calculated without using the MTT but the MTT becomes invaluable as the graph becomes more complicated. It is, however, more demanding to apply than the equilibrium formula for $\rho_i$ given by Eqn (12). At equilibrium, only a single path to i is needed to calculate $\rho_i$; away from equilibrium, all spanning trees rooted at i are needed to calculate $\rho_i$. The number of spanning trees increases rapidly as the graph becomes more complicated. Computational methods for enumerating them are sometimes helpful but the main importance of the MTT is that it gives a mathematical description of the steady state, which is then accessible to further analysis.
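
For small graphs, the MTT of Eqn (14) can be applied mechanically. The sketch below enumerates the spanning trees rooted at each vertex (one outgoing edge chosen per non-root vertex, keeping only the choices in which every vertex reaches the root) and sums the label products; it is run on the two-vertex graph of Eqn (6), with arbitrary numbers standing in for the labels $k_1[S] + k_{-2}[P]$ and $k_{-1} + k_2$, and cross-checked against the kernel of the Laplacian.

```python
# Minimal sketch of the matrix-tree theorem, Eqn (14): enumerate spanning trees
# rooted at each vertex and sum the products of their edge labels. The
# reversible Michaelis-Menten graph of Eqn (6) is used, with arbitrary numbers
# standing in for k1[S] + k-2[P] and k-1 + k2.
import itertools
import numpy as np

n = 2
edges = [(0, 1, 1.7), (1, 0, 1.1)]   # 0 = E, 1 = ES; labels are illustrative numbers

def rho_mtt(n, edges):
    rho = np.zeros(n)
    out = {v: [(j, a) for i, j, a in edges if i == v] for v in range(n)}
    for root in range(n):
        others = [v for v in range(n) if v != root]
        # choose one outgoing edge for every non-root vertex
        for choice in itertools.product(*(out[v] for v in others)):
            succ = {v: c[0] for v, c in zip(others, choice)}
            # keep the choice only if every vertex reaches the root (no cycles)
            def reaches_root(v):
                seen = set()
                while v != root:
                    if v in seen:
                        return False
                    seen.add(v)
                    v = succ[v]
                return True
            if all(reaches_root(v) for v in others):
                rho[root] += np.prod([c[1] for c in choice])
    return rho

rho = rho_mtt(n, edges)
print("rho from spanning trees:", rho)   # (label of ES -> E, label of E -> ES)

# cross-check: rho lies in the kernel of the Laplacian
L = np.zeros((n, n))
for i, j, a in edges:
    L[j, i] += a
    L[i, i] -= a
print("L . rho:", L @ rho)               # ~ 0, so rho spans ker L(G)
```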

Theorems of matrix-tree type go back to Kirchhoff but the MTT that is relevant to us, for labelled, directed graphs, is due to Bill Tutte, one of the founders of modern graph theory [87]. It has been independently re-discovered many times. Enzymologists will recognize the MTT as equivalent to King and Altman's ‘schematic procedure’ [6], which remains the standard method for calculating enzyme rate formulas. Here, spanning trees are called ‘patterns’. Somewhat later, Terrell Hill re-discovered the MTT in his studies of non-equilibrium molecular systems [88]. He called spanning trees ‘directional diagrams’. Amusingly, he thanked a correspondent for pointing out that he was making a contribution to the ‘theory of graphs’. Neither he nor his correspondent appeared to be aware that the result had already been proved by one of the founders of that subject almost 20 years earlier, let alone that it was also known to enzymologists. The MTT has also been independently re-discovered in economics, engineering, computer science and theoretical physics [83], which is testament to its fundamental importance and to the insidious compartmentalization of modern science.

There is much to be gained by adopting the neutral language of mathematics, of graphs, Laplacians and spanning trees, in preference to the specialized language that might be used in a particular area of application. A common mathematical structure emerges that cuts across disciplines and it becomes much easier to transfer knowledge between them and to avoid re-inventing the wheel.

Looking forward

One of the advantages of doing chemistry, with reactions and graphs, in contrast to physics with free energies, is that one is not limited to equilibrium. The linear framework is applicable in contexts such as enzyme kinetics or post-translational modification where the system is far from equilibrium. It can accommodate some of the dissipative mechanisms, such as nucleosome reorganization and histone PTM, which arise in eukaryotic gene regulation, offering a new methodology that goes beyond the thermodynamic formalism. Sequences were sufficient to describe genomes and an elaborate mathematical and computational machinery has evolved to exploit them. Perhaps labelled, directed graphs will provide the appropriate mathematical object with which to describe genomic functionality. This is work in progress.

A more immediate problem leads us back to Michaelis and Menten. As pointed out above, the irreversible scheme in Eqn (1) is commonly used to study multi-enzyme systems in circumstances that would have horrified its authors. The dangers of doing this have been pointed out [9, 10]. It is not only dangerous, but also illegal. If P is appreciably present (which it was not for Michaelis and Menten), then it must be able to rebind to E, or DB would be violated once the reaction reaches equilibrium. The usual reason given for ignoring thermodynamic reality is that the scheme is being used to represent a reaction that is irreversible under physiological circumstances, such as a kinase or phosphatase reaction. If so, a better solution would be to use the reaction scheme:

$$E + S \;\rightleftharpoons\; ES \;\longrightarrow\; EP \;\rightleftharpoons\; E + P \tag{17}$$

which can be irreversible without being in a state of Original Thermodynamic Sin. There is, of course, a price to pay for such virtue, which is increased complexity and yet more rate constants whose values we do not know.

A further issue arises with Eqn (1) because it assumes single-substrate reactions. Yet, kinase reactions, which are frequently represented by Eqn (1), involve two substrates. It may be reasonable to ignore ATP as a dynamical variable, if, indeed, its concentration is kept constant by background cellular processes, although that does not address the order of substrate binding, which can give rise to additional intermediate complexes. It might be more appropriate to use a reaction scheme for which the corresponding graph looks something like that shown in Fig. 2A.

The linear framework provides a systematic approach to this problem, at least for contexts such as post-translational modification. We can consider a limited enzymological grammar, consisting of the reactions:

$$E + S \;\rightleftharpoons\; Y_i \qquad\quad Y_i \;\rightleftharpoons\; Y_j \qquad\quad Y_i \;\rightleftharpoons\; E + P \tag{18}$$

which allows for multiple intermediates, $Y_1, \ldots, Y_m$. This accommodates Eqns (1) and (17) and the reaction scheme behind Fig. 2A, and covers most of the enzymology that we would expect to find for post-translational modification and demodification involving metabolic modifications [81]. The linear framework shows that, no matter how complex the reaction scheme that is derived from Eqn (18), the steady state behaviour can be summarized in four aggregated parameters, a ‘total generalised catalytic efficiency’ (tgCE) and a ‘total generalised Michaelis–Menten constant’ (tgMMC), one each for the reaction substrate S and one each for the reaction product P. In this way, we can distinguish a reaction scheme that is ‘irreversible’, such as Eqn (17), in which the tgCE for P is zero (it cannot make S) but the tgMMC for P is not (it can still bind to E), from the Michaelis–Menten scheme in Eqn (1), which is ‘strongly irreversible’, with both the tgCE and tgMMC for P being zero.

The distinction makes a difference. In one of the pioneering papers on ‘futile cycles’ of phosphorylation and dephosphorylation, Albert Goldbeter and Dan Koshland showed that a single-site cycle is capable of unlimited ultrasensitivity as the enzymes become more saturated by the substrate [64]. They assumed that the kinase and phosphatase followed the standard Michaelis–Menten scheme in Eqn (1). One can show, using the formulation just described, that, if the forward and reverse enzymes can be expressed in the grammar of Eqn (18) and are both strongly irreversible, then, no matter how complicated they are, unlimited ultrasensitivity continues to hold [81]. However, if the enzymes are weakly irreversible (irreversible but not strongly so; i.e. their tgMMCs for product are nonzero), then the ultrasensitivity is always bounded and one can even calculate a bound for it (T. Dasgupta, D.H. Croll, J.A. Owen, M.G. Vander Heiden, J.W. Locasale, U. Alon, L.C. Cantley & J. Gunawardena, unpublished results).
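
As a numerical illustration of the Goldbeter–Koshland result (a sketch with arbitrary parameter values, assuming strongly irreversible Michaelis–Menten kinetics for both enzymes), the steady-state modified fraction can be obtained by balancing the two enzyme rates; the stimulus–response curve sharpens without limit as the normalised Michaelis–Menten constants shrink and the enzymes become saturated.

```python
# Minimal sketch of Goldbeter-Koshland zero-order ultrasensitivity for a
# single-site cycle with strongly irreversible Michaelis-Menten enzymes.
# Parameter values are arbitrary; u is the steady-state modified fraction.
from scipy.optimize import brentq

def modified_fraction(v_ratio, KE, KF):
    """Solve V_E(1-u)/(KE + 1-u) = V_F u/(KF + u) for u, with v_ratio = V_E/V_F.
    KE and KF are Michaelis-Menten constants normalised by total substrate."""
    f = lambda u: v_ratio * (1 - u) / (KE + 1 - u) - u / (KF + u)
    return brentq(f, 1e-12, 1 - 1e-12)

for K in (10.0, 0.1, 0.001):        # from unsaturated to strongly saturated enzymes
    us = [modified_fraction(r, K, K) for r in (0.9, 1.0, 1.1)]
    print(f"K/S_tot={K:7.3f}  u at V_E/V_F = 0.9, 1.0, 1.1 : "
          + "  ".join(f"{u:.3f}" for u in us))
```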

Strong irreversibility is mathematically convenient because there are only two aggregated parameters to deal with per reaction, although it introduces an artefact, a singularity, which leads to infinite behaviour in the high-substrate limit [81]. Weak irreversibility provides a more physiologically realistic and a more nuanced picture, in which the properties of the individual enzymes become significant. One can apply the mathematics with more confidence to actual enzymes with complicated enzymology.

This brings us to the main reason why the original Michaelis–Menten scheme continues to exert such a hold, despite its limitations being known [9, 10]. It allows us to pretend that all enzymes are the same, so that we can avoid paying attention to them in our rush to understand the behaviour of multi-enzyme systems [11]. This does an injustice to Michaelis and Menten, which is compounded by our continuing to use their ideas in contexts that they would have known to be wrong. It is also a disservice to the enzymologists of the intervening century, who have done so much to disentangle the mechanisms of individual enzymes [89]. And, not least, it is not a good way to educate the next generation. It may help to have a systematic method to deal with the additional complexity, which the linear framework now provides, although scientific cultures have a lot of inertia and it may be the next generation of scientists, indeed, who fully integrate enzymology and systems biology.

Time-scale separation may have been a convenient tool for Michaelis and Menten, who were dealing with a system of just four components, but it assumes much greater significance for us, as we contemplate molecular networks in which an individual protein may have thousands of different states of modification. Without some means to rise above this complexity, it will be extremely difficult to see the wood for the trees. Time-scale separation offers a way to do this, which may at least provide a starting point for thinking about a system and developing our intuitions, even if we do not know how well the time scales are separated in reality.

What the linear framework shows us is that, sometimes, and in the most significant examples of time-scale separation, we can undertake the elimination of internal complexity in a purely mathematical way, which depends only on the structure of the system: its graph. We can rise above some of the complexity. We can show, for example, that futile cycles have certain behaviours, irrespective of the complexity of their enzymes [81]. That is a different kind of assertion to what we normally see in the literature. It is also considerably more powerful than what can be achieved by numerical simulation, in which all the details must be precisely specified. We can begin to discern in these developments, perhaps, a mathematical language through which biological principles can emerge from molecular complexity. I like to think that Michaelis and Menten would have approved.

Acknowledgements

I thank two anonymous reviewers for several helpful comments. The work outlined here was supported by NIH GM081578 and by NSF 0856285.
