
Kampmann C. E. (1996) Feedback loop gains and system behavior

Proceedings of the 1996 International System Dynamics Conference. System Dynamics Society, Boston, MA, USA

In the past two decades, there has been a steady growth of analytical tools that can help the system dynamicist understand and test the behavior of large models. Without such tools, we must resort to simulation experiments and intuition developed from simple, low-order systems. While this traditional approach has served us well, there is always the danger that we overlook important mechanisms in larger systems or falsely attribute behavior patterns to particular structures.

Among these new analytical tools, eigenvalue elasticity analysis (EEA) has received the most attention, perhaps because it rests on a consistent and comprehensive theoretical foundation. The approach involves decomposing system outcomes into characteristic behavior modes, each characterized by an eigenvalue of the linearized system matrix, and then examining how each eigenvalue or behavior mode is affected by small changes in the parameters of the system. The eigenvalue elasticity with respect to a parameter is a dimensionless number, defined as the fractional change in the eigenvalue resulting from an infinitesimal fractional change in that parameter. Parameters, links, or loops in the model that have large elasticities are then interpreted as playing a dominant role in that behavior mode.

EEA was first used by Nathan Forrester in his doctoral dissertation from 1982. However, it did not receive much attention in the field until the 1996 International System Dynamics Conference, where Christian Kampmann presented a rigorous analysis of the topology of feedback loop structures and how eigenvalue elasticities are related to the strength of individual feedback loops. Since then, an entire strand of research has been devoted to the development and application of EEA, where Kampmann's conference paper remains a seminal reference. Yet until now, the paper has never been published, and researchers have had to rely on private circulation of various drafts of the paper. With this archival publication, we wish to remedy this deficiency in the hope that the work will inspire others to continue developing EEA and further the high standards of analytical rigor that Kampmann's paper represents.

Abstract

Linking feedback loops and system behavior is part of the foundation of system dynamics, yet the lack of formal tools has so far prevented a systematic application of the concept, except for very simple systems. Having such tools at their disposal would be a great help to analysts in understanding large, complicated simulation models. The paper applies tools from graph theory to formally link individual feedback loop strengths to the system eigenvalues.

The significance of a link or loop gain for an eigenvalue can be expressed as the eigenvalue elasticity, i.e., the relative change in the eigenvalue resulting from a relative change in the gain. The elasticities of individual links and loops may be found through simple matrix operations on the linearized system.

Even though the number of feedback loops can grow rapidly with system size, reaching astronomical proportions even for modest systems, a central result of the paper is that one may restrict attention to an independent subset which typically grows only linearly and at most as the square of system size. An algorithm for finding an independent loop set is presented, along with suggestions for how to augment it to select loops with large elasticities.

For illustration, the method is applied to a well-known system: the simple long-wave model. Because this model exhibits highly nonlinear behavior, it sheds light on the usefulness of linear methods for nonlinear systems. The analysis leads to a more thorough understanding of the system and sheds new light on conventional wisdom regarding the role of many of the system's feedback loops. Copyright © 2012 System Dynamics Society


Introduction


The method of system dynamics has relied extensively on feedback loops to explain how system structure leads to patterns of behavior. Yet formal methods have largely been restricted to simple classroom examples as guides to intuition. In practice, large-scale models with many loops are still analyzed in a largely informal way, using trial-and-error simulation. Although this is not necessarily a weakness (see, for example, the arguments in Forrester and Senge, 1980), any formal tool that might help identify important structures in the model as they affect a particular mode of behavior could be of enormous utility, particularly in large models.

Nathan Forrester (1982, 1983) suggested a method for relating the strengths (gains) of individual feedback loops to the system eigenvalues, which, in linear systems at least, capture most of what we mean by “patterns of behavior”. The specific measure is the “eigenvalue elasticity”, i.e., the relative change in an eigenvalue resulting from a relative change of the loop gain. A large elasticity would indicate that the loop is in some sense “important” in generating the behavior mode associated with that eigenvalue.

So far, however, these ideas have not been developed further or applied to specific models. The purpose of the present paper is to make some further headway into the theory of interacting feedback loops by applying tools from network theory and discrete mathematics, to the point where this theory can be implemented as computer algorithms.

Although I am optimistic about its prospects, it remains to be seen how useful the theory is by testing its use in practice. In fact, it is still an open question whether the feedback loop concept is useful in large-scale models: outlining all feedback loops in a given system is a very inefficient and redundant way of describing it, so it is possible that explanations in terms of loops will be similarly cumbersome. Furthermore, the assumption that eigenvalues capture behavior patterns is only valid in linear systems; whether linear concepts may still prove useful in highlighting shifting dominant structures in nonlinear systems can only be answered through actually applying the tools in practice.i

In the next section I present the formalism of graph theory and show how it may be applied to system dynamics models. I relate a number of results which, from a graph-theoretical point of view, are mainly well-established facts but may yield new insight when applied to dynamical systems.

I further investigate the potential number of feedback loops as a function of system size. Since the potential number is very large—growing more than factorially—I focus on the connections between loops and show how only certain subsets of loops can be considered independent, in the sense that their strengths can be separately determined. I prove the surprisingly simple fact that a feedback system with N links and n variables contains exactly N − n + 1 independent loops (assuming it is “strongly connected”—see below). This implies that the independent loop set typically grows linearly with system size—a substantial reduction in complexity. The proof of this result also yields a direct algorithm to identify an independent set.

In the following section, I explore the relationship between loop gains and eigenvalues. First, I relate how the characteristic polynomial may be interpreted in terms of feedback loop gains, i.e., the eigenvalues are in a sense determined exclusively by the gains of all loops in the system. I also discuss how systems may be partitioned into “strongly connected” components, and how each of these components separately determines a subset of the eigenvalues.

I then consider how eigenvalues change as the strengths of individual links or loops in the system are altered. In particular, I show that the eigenvalue elasticities of links in the system are similar to a current in an electrical network, indicating that perhaps further exploration of the field of network theory could yield new tools and insights. Some of these results were previously mentioned (but not proved) by Nathan Forrester (1983), but I point out how his method needs to be modified to take account of the interdependencies between loops. I further demonstrate how link and loop elasticities can be found through simple linear calculations, once the eigenvalues and eigenvectors have been found through one of the many standard computer packages available (see, for example, Press et al., 1988).

In the following section, I then apply the results to a specific case: John Sterman's simple long-wave economic model. Because this model exhibits highly nonlinear behavior, it sheds light on the usefulness of linear methods for a nonlinear system. The analysis leads to a deeper understanding of the system and sheds new light on conventional wisdom regarding the role of many of the system's feedback loops.

In the conclusion, I focus on further steps ahead and suggest possible new directions for research.

Graph theory and dynamical systems


The main methodological novelty of the present article is the use of graph theory to analyze system feedback structures. I begin this section with an introduction to the formalism.

Graph representation of the system

A graph G is a very simple mathematical object, consisting of a set of points or nodes P joined by a set of links or edges E; i.e., G = (P, E). One may also speak of subgraphs S ⊆ G, which are subsets of the nodes and/or edges of G. (For the most part, it will be clear whether one is focusing on edges or nodes or both, and I will therefore be somewhat informal in my notation.)

The mathematical field of graph theory is concerned with the topology of such objects, usually abstracting from the spatial location (the embedding) of the graph as well as the exact nature of the links, though one may extend the concept by attaching properties, such as weights or capacities, to individual elements of the graph. In particular, a directed graph or digraph arises when each edge is assigned a particular direction, indicated by an arrow. A directed edge is frequently called an arc. In the following, unless explicitly stated otherwise, I consider only directed graphs and edges. A directed path in a digraph is a series of disjoint (distinct) connected nodes and arcs p1 → p2 → … → pq. A directed cycle arises when pq = p1 but the other nodes are disjoint. An arc that connects a node to itself, p → p, is called a self-loop. One may also construct undirected cycles, paths, etc., where the direction of the constituent edges may be arbitrary.

If we number all the edges in G as e1, e2, …, eN, we may identify a given subgraph S (e.g., a cycle or path) by an incidence vector v = (v1, v2, …, vN), where vi = 1 if ei ∈ S and 0 otherwise.1 Combining rows of incidence vectors for several subgraphs produces a corresponding incidence matrix. In particular, the matrix of incidence vectors for all (directed) cycles in a graph is called the (directed) cycle matrix.

The formal structure of a system dynamics model with state variables (levels) x1, x2, …, xn and auxiliary variables y1, y2, …, yp can be written as the coupled differential equationsii:

  dxi/dt = fi(x1, …, xn, y1, …, yp, t),   i = 1, …, n
  yj = gj(x1, …, xn, y1, …, yp, t),   j = 1, …, p

or, in vector notationiii,

  dx/dt = f(x, y, t),   y = g(x, y, t)    (1)

If the model has no exogenous variables, time t does not enter the vector functions f, g. Henceforth we shall assume that this is the case and, for notational convenience, drop the time parameter. We may further eliminate the auxiliary variables from the equations to obtain what we shall call the reduced form of the model,

  dx/dt = F(x)    (2)

In either case, we may linearize the system around an operating point x0, y0 to get the linear system

  dx̃/dt = A x̃ + B ỹ + b,   ỹ = C x̃ + D ỹ    (3)

or, in reduced form,

  dx̃/dt = J x̃ + b    (4)

where the tilde represents the deviation from the operating point, x̃ = x − x0, etc., the matrices A = ∂f/∂x, B = ∂f/∂y, C = ∂g/∂x, D = ∂g/∂y contain the partial derivatives taken at the operating point, and b = f(x0, y0). The matrix of partial derivatives J = ∂F/∂x is known as the Jacobian of the system.

[Figure 1. Illustration of procedure to find an independent cycle set]

We may now represent the system structure as a digraph, where the variables are the nodes P = X ∪ Y and the edges E are the relationships between variables; i.e., we include an edge u[RIGHTWARDS ARROW]v whenever the variable u is an argument in the equation for the variable v. The representation is familiar to system dynamics practitioners as a causal loop diagram (see Figure 2 for an example). We will refer to the graph of the systems (3) and (4) as the full and reduced graph, respectively.

[Figure 2. Example of simple dynamic system]

In addition to a direction, we will assign a numerical property called the strength or gain, G(e), of the edge e = u[RIGHTWARDS ARROW]v, defined as the partial derivative ∂v/∂u for the corresponding system variables. We see that each edge corresponds to a non-zero element in one of the matrices in Eq. (3) or (4) and that its gain is the value of that element.

A well-known weakness of causal loop diagrams is their inability to distinguish stock and flow variables. The same problem occurs here, as it is unclear how to define the gain of the link from a flow dx/dt to its associated state variable x. In the present analysis, the gain will simply be set to unity, G(dx/dt → x) = 1, but a more correct method would be to use the control-theory notion of a transfer function and the Laplace transform frequency s, i.e., G(dx/dt → x) = 1/s. We will return to this issue in the discussion at the end of the paper.

A feedback loop in the system is now equivalent to a directed cycle in the corresponding graph, and we shall use these terms interchangeably in the following. A familiar requirement on system dynamics models is that there can be no simultaneous (algebraic) dependence among the auxiliary variables. In other words, every directed cycle must contain at least one state variable, and we define the order of a feedback loop as the number of state variables it contains. (An isolated node may be considered a cycle of order 0.) We shall call a feedback loop in the reduced graph a reduced feedback loop, while loops in the full system will be referred to as non-reduced. Note that self-loops occur only in the reduced graph.

A strongly connected digraph G is one in which, for any pair of nodes x and y, there is both a directed path from x to y and a directed path from y to x. Evidently, the nodes of any graph may be partitioned into a set of disjoint strongly connected components (or strong components) G1, …, Gs, each of which is strongly connected, but where no pair of nodes from separate components is strongly connected. It is also evident that any feedback loop in the system must involve nodes from a single strong component.
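
The partition into strong components can be computed in linear time with standard algorithms. As a concrete aid (my own sketch, not part of the paper), Kosaraju's two-pass depth-first search:

```python
def strongly_connected_components(adj):
    """Kosaraju's algorithm: record DFS finish order on the graph, then
    run DFS on the transpose graph in reverse finish order.
    (Illustration only; not part of Kampmann's paper.)"""
    order, seen = [], set()

    def dfs1(v):
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                dfs1(w)
        order.append(v)          # v is finished

    for v in adj:
        if v not in seen:
            dfs1(v)

    radj = {v: [] for v in adj}  # transpose graph
    for v in adj:
        for w in adj[v]:
            radj[w].append(v)

    comps, assigned = [], set()

    def dfs2(v, comp):
        assigned.add(v)
        comp.add(v)
        for w in radj[v]:
            if w not in assigned:
                dfs2(w, comp)

    for v in reversed(order):
        if v not in assigned:
            comp = set()
            dfs2(v, comp)
            comps.append(comp)
    return comps

# two feedback loops joined by a one-way causal link: {1,2} -> {3,4}
adj = {1: [2], 2: [1, 3], 3: [4], 4: [3]}
assert sorted(map(sorted, strongly_connected_components(adj))) == [[1, 2], [3, 4]]
```

Each loop stays inside a single component, while causality flows one way between components.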

We shall define the gain G(c) of a cycle c and, more generally, the gain of any subgraph S, as the product of the gains of its constituent edges e, i.e.,

  G(S) = ∏(e ∈ S) G(e)

The number of feedback loopsiv

How many feedback loops does a given system have? Unfortunately, there is no general formula for the number of loops in a given system, short of direct enumeration. We can, however, specify an upper bound on the number of loops by considering the maximally connected system, i.e., one in which every flow dx/dt depends upon every state variable x and every auxiliary variable y, and every y depends upon every x and "as many as possible" of the other auxiliary variables y. I show in the Appendix in the electronic supplement to this article (Lemmas A1 and A2) that the number of loops Sn,p in a maximally connected system with n state variables and p auxiliary variables grows as (n − 1)!(n + 1)^p, i.e., exponentially in p and faster than factorially in n. This is a very large number, even for relatively modest systems, as shown in Table 1.

Table 1. The number of loops Sn,p in a maximally connected system with n state variables and p auxiliary variables

  n \ p        0        1        5        10        30
  1            1        2       32      1024      10^9
  2            3        7      307     61097     10^14
  3            8       23     2873      10^6     10^18
  4           24       88    28528      10^7     10^21
  5           89      414   303444      10^9     10^24
  10        10^6     10^7    10^11     10^16     10^36
  30       10^31    10^32    10^38     10^46     10^75

One might ask whether realistic models, which clearly have far fewer links, may yet contain a manageable number of loops.2 While the number is certainly much smaller than Sn,p, it is still substantial, large enough to pose problems for the practitioner. For instance, Forrester's world model (Forrester, 1971) has five state variables, 44 auxiliaries and 76 links between variables. The model contains 81 non-reduced loops and 41 reduced loops. Nathan Forrester's macroeconomic model (1982), in its basic version, has eight endogenous state variables, 18 auxiliary variables and 44 links. The model contains 39 non-reduced and 118 reduced loops.3
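
As a cross-check on such counts (my own sketch, not part of the paper), one can build the maximally connected graph for small n and p, taking "as many as possible" to mean that each auxiliary depends on all auxiliaries earlier in a fixed ordering, and count its directed simple cycles by brute force. The counts reproduce the smallest entries of Table 1:

```python
def max_connected_graph(n, p):
    """Adjacency lists of the maximally connected system: states x0..x{n-1},
    auxiliaries y0..y{p-1} in a fixed (acyclic) order. Assumption: 'as many
    as possible' means each auxiliary depends on all earlier auxiliaries."""
    X = [f"x{i}" for i in range(n)]
    Y = [f"y{k}" for k in range(p)]
    adj = {v: [] for v in X + Y}
    for i in range(n):
        for j in range(n):
            adj[X[j]].append(X[i])   # every flow depends on every state...
        for k in range(p):
            adj[Y[k]].append(X[i])   # ...and on every auxiliary
    for k in range(p):
        for j in range(n):
            adj[X[j]].append(Y[k])   # auxiliaries depend on every state...
        for m in range(k):
            adj[Y[m]].append(Y[k])   # ...and on all earlier auxiliaries
    return adj

def count_simple_cycles(adj):
    """Count directed simple cycles; each cycle is counted exactly once,
    rooted at its smallest node."""
    count = 0
    def dfs(root, v, visited):
        nonlocal count
        for w in adj[v]:
            if w == root:
                count += 1
            elif w > root and w not in visited:
                dfs(root, w, visited | {w})
    for root in adj:
        dfs(root, root, {root})
    return count

# smallest entries of Table 1
table = {(1, 0): 1, (1, 1): 2, (2, 0): 3, (2, 1): 7, (3, 0): 8, (3, 1): 23}
assert all(count_simple_cycles(max_connected_graph(n, p)) == s
           for (n, p), s in table.items())
```

The enumeration itself is exponential, which is exactly why the independent loop sets introduced below matter in practice.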

Clearly, even typical system dynamics models contain a large number of loops, and it is therefore necessary to consider the dependencies among these loops to arrive at a manageable number.

Dependent and independent loops

We now proceed to examine the interdependencies among feedback loops by asking: under what circumstances can a set of loops have their strengths determined, or changed, independently through an appropriate assignment of, or change in, the strengths of individual links in the system? Consider a loop c consisting of the edges e1, e2, …, eq. Then the loop gain g = G(c) is related to the edge gains hj = G(ej) byv

  ln|g| = ln|h1| + ln|h2| + ⋯ + ln|hq|    (5)
  d ln|g| = d ln|h1| + d ln|h2| + ⋯ + d ln|hq|    (6)

If we number the edges from 1 to N and the loops or cycles from 1 to L, we can define the (column) gain vectors ln|h| = (ln|h1|, …, ln|hN|)′ and ln|g| = (ln|g1|, …, ln|gL|)′, and we see that

  ln|g| = C ln|h|,   d ln|g| = C d ln|h|

where the L × N matrix C is the directed cycle matrix (see above).

To realize a given set of loop gains or changes in gains thus requires solving Eq. (5) or (6) for ln|h| or d ln|h|, respectively. In both cases, the coefficient matrix of the equation system is C, and the existence of solutions thus depends upon the rank ρ of this matrix. Since, as shown above, L is often much larger than N, implying that ρ < L, the system will generally not have a solution. Only if the left-hand side happens to be a linear combination of the rows of C does a solution exist, in which case there will be an (N − ρ)-dimensional infinity of solutions.

The above result implies that it does not make sense to speak of changing loop gains independently, and hence of individual loop contributions, for all loops in the system. Instead, one must focus on a subset of loops, corresponding to a selection of rows Cr of C that has full row rank. This leads to the following important definition.

Definition. An independent loop set of a digraph is a maximal set of loops whose incidence vectors are linearly independent; i.e., every other loop is linearly dependent upon the loops in the set. We refer to the corresponding submatrix of the cycle matrix C as the reduced cycle matrix Cr.

Clearly, the size of the independent loop set is equal to the rank of the full cycle matrix. Moreover, the set is rarely unique. Before proceeding, however, we consider the signs of the gains as well. Even though the numerical gains of an independent loop set can be assigned or changed by an appropriate assignment or change in edge gains, the signs of the loop gains cannot always be determined independently. The independence of signs is determined by the binary rank ρ2 of Cr. In the electronic supplement, I show that ρ2 ≤ ρ and provide an example of an independent loop set where ρ2 < ρ. However, the procedure to select an independent loop set used in the proof of Theorem 1 (see below) also produces an independent loop set whose gains can be assigned arbitrary signs.

It turns out that the size of the independent loop set is a surprisingly simple function of the structure of the system's graph, as stated in the following.

Theorem 1. In a strongly connected graph G with N arcs and n nodes, the rank of the directed cycle matrix C and, consequently, the number of rows in the reduced cycle matrix Cr is N − n + 1.

Proof. The proof, suggested to me by Professor Carsten Thomassen (Institute of Mathematics, DTU), is a more elegant version of an earlier proof of my own. Furthermore, it provides a direct algorithm for identifying an independent loop set. First, we note the well-known result from graph theory that the cycle matrix of non-directed cycles in a connected graph has rank N − n + 1 (Nielsen, 1995). Hence there can be no more than N − n + 1 linearly independent directed cycles, and we need to show that one can in fact find N − n + 1 independent cycles in a strongly connected graph. The proof proceeds by direct construction of a set of independent cycles (see Figure 1):

  1. Find a directed cycle c1 and let k1 ∉ c1 be an arc from some node p1 ∈ c1 to a node q1 (which may be part of c1 and could even be identical to p1 if k1 is a self-loop). Let e1 be the edge in c1 that ends in p1. We note that the cycle with this edge removed, c1 − e1, connects all nodes in c1 yet contains no cycles, i.e., it is a spanning tree for c1.4
  2. Choose a shortest path v1 from q1 to c1, ending in the point r1 ∈ c1. (This implies that only the last node r1 of v1 is in c1.) Let c2 be the directed cycle formed by k1, v1, and the path r1 → p1 on c1. Let e2 be the last edge on k1 ∪ v1 (the edge ending in r1). We note that c1 ∪ c2 − {e1, e2} connects all points in c1 ∪ c2 yet contains no cycles, i.e., it is a spanning tree for c1 ∪ c2.
  3. Let k2 ∉ c1 ∪ c2 be an edge from a point p2 ∈ c1 ∪ c2 to a point q2, which again may be part of c1 ∪ c2 and could be identical to p2.
  4. Repeat steps 2 and 3 (with c1 replaced by c1 ∪ c2, etc.) until all edges in G are part of c1 ∪ c2 ∪ … ∪ cr.

Now c1 ∪ c2 ∪ … ∪ cr − {e1, e2, …, er}, which has N − r edges, constitutes a spanning tree in G. We know from graph theory that a spanning tree in a connected graph with n nodes has n − 1 edges, so we must have that the number of loops is r = N − n + 1. Moreover, since each of the cycles cj contains at least one edge that is not in any of the previous cycles c1, …, cj−1, the cycles must be linearly independent.vi
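
Theorem 1 can also be checked numerically (a sketch of my own, not from the paper): enumerate the directed cycles of a small strongly connected digraph, form its directed cycle matrix, and compare the rank with N − n + 1:

```python
from fractions import Fraction

def simple_cycles(adj):
    """Enumerate directed simple cycles, each found exactly once (rooted at
    its smallest node), returned as tuples of edges."""
    cycles = []
    def dfs(root, path, visited):
        v = path[-1]
        for w in adj[v]:
            if w == root:
                cycles.append(tuple(zip(path, path[1:] + [root])))
            elif w > root and w not in visited:
                dfs(root, path + [w], visited | {w})
    for root in adj:
        dfs(root, [root], {root})
    return cycles

def rank(rows):
    """Rank of a 0/1 matrix over the rationals, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col]:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# complete digraph on three nodes: N = 6 edges, n = 3 nodes
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
edges = sorted((u, v) for u in adj for v in adj[u])
cycles = simple_cycles(adj)
C = [[1 if e in set(c) else 0 for e in edges] for c in cycles]

assert len(cycles) == 5                      # 3 two-cycles + 2 three-cycles
assert rank(C) == len(edges) - len(adj) + 1  # N - n + 1 = 4 independent loops
```

Here five loops exist, but only four are independent; the fifth incidence vector is a linear combination of the others.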

The theorem leads to the following corollary for digraphs that are not strongly connected (provided without proof, as the result is straightforward):

Corollary 1. The directed cycle matrix of a digraph G with N edges, n nodes, s strongly connected components, and q edges that are not in a strong component has rank N − q − n + s.

The significance of Theorem 1 is that, even though the total number of loops in the system may grow very quickly with the size of the system, the independent set only grows linearly with the number of links. Since the number of links can grow at most as the square of the number of variables, this is a substantial reduction. Moreover, in actual models, the average number of links per variable is roughly constant, so that the growth is typically linear.

Another way to interpret the result is that a complete list of loops and their associated gains is a rather inefficient way of describing the system. The description is both inefficient and incomplete, since the edge gains are only determined up to an (n − 1)-dimensional infinity. Instead, one must choose a particular loop description of the system, in the form of an independent set; the other loops are then simply a different way of describing the same system.vii

Loop gains and eigenvalues


In this section, I turn to the relationship between the system eigenvalues and the strengths of individual links and loops.

Decomposition of the characteristic polynomial

The eigenvalues, λ, are determined as the roots of the characteristic polynomial, P(λ), and it turns out that the coefficients of this polynomial can be expressed in terms of the gains of the feedback loops in the system. The following concept will prove useful in this context: a cycle decomposition D of a graph G is a collection of node-disjoint cycles (which may include isolated points) that covers all nodes in G. The order of a decomposition is the sum of the orders of its constituent cycles. The size, |D|, is the number of cycles in D (excluding isolated points).

The following theorem states how the characteristic polynomial of the system matrix may be decomposed into terms that correspond to cycle decompositions in the corresponding graph.

Theorem 2. Let J be the n × n Jacobian matrix for a system with graph representation G, which may be the reduced or the non-reduced version, and let P(λ) be the corresponding characteristic polynomial:

  P(λ) = det(λI − J) = λ^n + p1 λ^(n−1) + p2 λ^(n−2) + ⋯ + pn

where I is the identity matrix of order n. Then

  pi = Σ over D of (−1)^|D| G(D)

where the summation is taken over all possible cycle decompositions D of order i.

The theorem is a consequence of the graph-theoretic result known as Mason's rule, which arises from the fact that all the permutations involved in calculating a determinant can be decomposed into cycles. The proof, which is cumbersome, is omitted (the interested reader may consult Reinschke, 1988). Instead, we illustrate the working of the theorem with an example.

The system in Figure 2 contains the state variables x1, x2 and the auxiliary variables y1, y2, with the linearized equations of motion

  dx̃1/dt = a12 x̃2 + b12 ỹ2
  dx̃2/dt = b21 ỹ1 + b22 ỹ2
  ỹ1 = c11 x̃1 + c12 x̃2
  ỹ2 = c21 x̃1 + d21 ỹ1

and the (reduced) Jacobian

  J = A + B(I − D)^(−1)C =
      [ b12 c21 + b12 c11 d21              a12 + b12 c12 d21
        b21 c11 + b22 c21 + b22 c11 d21    b21 c12 + b22 c12 d21 ]

The system contains the following loops:

Cycle                           Order   Gain
1  x1 → y2 → x1                   1     g1 = b12 c21
2  x2 → y1 → x2                   1     g2 = b21 c12
3  x1 → y1 → x2 → x1              2     g3 = a12 b21 c11
4  x1 → y1 → y2 → x1              1     g4 = b12 c11 d21
5  x1 → y2 → x2 → x1              2     g5 = a12 b22 c21
6  x2 → y1 → y2 → x2              1     g6 = b22 c12 d21
7  x1 → y1 → y2 → x2 → x1         2     g7 = a12 b22 c11 d21

and the following cycle decompositions:

Cycle decomposition D                 Order   |D|   Gain G(D)
{x1, x2, y1, y2}                        0      0    1
{x1 → y2 → x1, x2, y1}                  1      1    g1
{x2 → y1 → x2, x1, y2}                  1      1    g2
{x1 → y1 → y2 → x1, x2}                 1      1    g4
{x2 → y1 → y2 → x2, x1}                 1      1    g6
{x1 → y1 → x2 → x1, y2}                 2      1    g3
{x1 → y2 → x2 → x1, y1}                 2      1    g5
{x1 → y1 → y2 → x2 → x1}                2      1    g7
{x1 → y2 → x1, x2 → y1 → x2}            2      2    g1 g2

The characteristic polynomial is

  P(λ) = λ² + p1 λ + p2

where

  p1 = −(g1 + g2 + g4 + g6)
  p2 = g1 g2 − (g3 + g5 + g7)

We see that p1 is the sum of the weights of the four cycle decompositions of order 1, all with a negative sign since each contains an odd number of loops; and p2 is the sum of the four second-order decompositions, where the one consisting of two first-order loops enters with a plus sign while the others, each consisting of a single second-order loop, enter with a minus sign. One might interpret the coefficient of λ² as the weight 1 of the single zeroth-order decomposition.
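
This decomposition can be verified numerically. The sketch below (my own illustration, assuming the link structure implied by the cycle table, with the gain of the link x2 → y1 written c12) assigns arbitrary integer gains to the eight links, forms the reduced 2 × 2 Jacobian by hand, and checks that its trace and determinant reproduce p1 and p2:

```python
# arbitrary integer link gains (hypothetical values, for illustration only)
a12, b12, b21, b22 = 2, 3, 5, 7
c11, c12, c21, d21 = 11, 13, 17, 19

# the seven loop gains (products of link gains around each cycle)
g1 = b12 * c21
g2 = b21 * c12
g3 = a12 * b21 * c11
g4 = b12 * c11 * d21
g5 = a12 * b22 * c21
g6 = b22 * c12 * d21
g7 = a12 * b22 * c11 * d21

# reduced 2x2 Jacobian J = A + B(I - D)^(-1)C, expanded by hand
J11 = b12 * c21 + b12 * c11 * d21
J12 = a12 + b12 * d21 * c12
J21 = b21 * c11 + b22 * c21 + b22 * d21 * c11
J22 = b21 * c12 + b22 * d21 * c12

# characteristic polynomial P(lambda) = lambda^2 + p1*lambda + p2
p1 = -(J11 + J22)            # minus the trace
p2 = J11 * J22 - J12 * J21   # the determinant

assert p1 == -(g1 + g2 + g4 + g6)      # four order-1 decompositions, sign -1
assert p2 == g1 * g2 - (g3 + g5 + g7)  # four order-2 decompositions
```

Because the arithmetic is exact over the integers, the two assertions confirm the signed decomposition sums exactly, not just approximately.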

In most cases, the graph of a system dynamics model will be strongly connected, since the endogenous point of view tends to lead to models in which every pair of state variables are connected in some feedback loop. If this is not the case, one may decompose the system into strongly connected components, where there will be a unidirectional flow of causality between components, similar to the unidirectional causal flow from levels to auxiliaries and rates.

One consequence of Theorem 2 is that, since loops by definition cannot exist between strong components, only the links in the strong components influence the eigenvalues of the system. The following theorem further states that the strong components of the system independently determine corresponding subsets of the eigenvalues.viii

Theorem 3. Given a dynamical system whose corresponding digraph G has the strongly connected components G1, …, Gs, let ni be the number of state variables in Gi. Then the characteristic polynomial P(λ) for the eigenvalues λ has the form

  P(λ; g) = P1(λ; {g}1) P2(λ; {g}2) ⋯ Ps(λ; {g}s)

where {g}i denotes the loop gains in the component Gi and Pi is an ni-th order polynomial in λ.

Proof. By an appropriate ordering of the state variables, we can bring the Jacobian matrix into upper block-triangular form, so that the characteristic polynomial is

  P(λ) = det(λI − J) = det(λI1 − J11) det(λI2 − J22) ⋯ det(λIs − Jss)

since the determinant of an upper block-triangular matrix is the product of the determinants of its diagonal blocks. Furthermore, we may apply Theorem 2 to each of the individual factors det(λIi − Jii). □

Eigenvalue elasticities

Nathan Forrester (1982, 1983) suggested a possible measure of the importance of a loop or link in the system in the form of the eigenvalue elasticity, i.e., the relative change in the eigenvalue resulting from a relative change in the loop or link gain g:

  ε = (g/λ) (∂λ/∂g) = ∂ ln λ / ∂ ln g

If λ is complex, ε will also be a complex number. One may choose to express λ in polar coordinates

  λ = r e^(iθ) = r (cos θ + i sin θ)

where r = |λ| is often called the "natural frequency" and cos θ is known as the "damping ratio". As noted by Nathan Forrester (1982), the real part of the elasticity measures the change in the natural frequency, while the imaginary part measures the change in the damping ratio. Specifically,

  Re ε = ∂ ln r / ∂ ln g,   Im ε = ∂θ / ∂ ln g

In the analysis later in the paper, I will instead measure the changes in the real and imaginary parts of λ separately, choosing the measures

  εα = (g/α) (∂α/∂g),   εω = (g/ω) (∂ω/∂g),   where λ = α + iω

The two sets of measures are related by simple linear transformations. Which measure one chooses to adopt depends upon the larger purpose of the modeling, as discussed extensively in Forrester (1982). While there is no single superior measure, it is clear that one would attribute importance to those loops that have large absolute values of the elasticities.

In the case of a link, the elasticity is unequivocally defined (see below). Since the characteristic polynomial P(λ) can be expressed in terms of the gains gi of all loops ci in the system, one could, as suggested in Forrester (1983), relate the change in the eigenvalues directly to changes in individual loop gains by implicit differentiation of the equation P(λ, gi) = 0, i.e.,

  ∂λ/∂gi = − (∂P/∂gi) / (∂P/∂λ)

However, the formalism is only valid if it is in fact possible, by an appropriate change to the gains of individual links in the system, to change the gains of one loop independently of all others. As we saw in the previous section, this turns out not to be the case. Instead, one must choose a particular independent loop set as the relevant description of the system, and the relative importance of a loop only has meaning in the context of the selected independent set. Although this may appear to be a weakness of the loop analysis, it does carry the advantage that the independent set is much smaller than the total loop set.

The eigenvalue elasticity of a particular link in the system could be determined directly by differentiating the characteristic polynomial, inserting the particular value of λ in question. However, one may also obtain these elasticities from simple linear operations, provided both the eigenvalue(s) and corresponding eigenvector(s) have been found. These can be obtained using efficient standard computer packages such as EISPACK (Press et al., 1988).5

To obtain the eigenvalues and eigenvectors requires calculating the Jacobian J. In the case of a non-reduced system, the Jacobian is found from Eq. (4). With an appropriate ordering of the auxiliary variables y, the matrix D has a zero-lower-diagonal structure

  • display math

which leads to a particularly simple form of Eq. (4)

  • display math

where U has the recursive structure

  • display math

Let l = (l1, l2, …, ln) and r = (r1, r2, …, rn)ᵀ be left and right eigenvectors, respectively, corresponding to the eigenvalue λ, normalized so that l ⋅ r = 1. Then it is straightforward to show that the change in λ from a change in the edge gains—the elements of the matrices in Eq. (4)—can be calculated from the following simple linear operations:

  ∂λ/∂Jij = li rj,  i.e.,  dλ = l(dJ)r

with the sensitivities to the individual edge gains following from the chain rule.
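As a concrete sketch (my own, using a random stand-in Jacobian), the sensitivity of λ to an element Jij of the Jacobian is the product of the corresponding left- and right-eigenvector components, which can be checked against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(4, 4))          # random stand-in Jacobian

# Right and left eigenvectors for one eigenvalue, normalized so l . r = 1.
w, V = np.linalg.eig(J)
lam, r = w[0], V[:, 0]
wl, U = np.linalg.eig(J.T)           # left eigenvectors of J = right of J.T
l = U[:, np.argmin(abs(wl - lam))]   # match the same eigenvalue
l = l / (l @ r)                      # enforce l . r = 1 (plain bilinear product)

# Sensitivity of lam to element J[i, j] is l[i]*r[j]; check vs finite difference.
i, j, h = 1, 3, 1e-7
Jp = J.copy()
Jp[i, j] += h
wp = np.linalg.eigvals(Jp)
numeric = (wp[np.argmin(abs(wp - lam))] - lam) / h
assert abs(l[i] * r[j] - numeric) < 1e-4
```

Note that the normalization uses the plain (unconjugated) dot product, matching the l ⋅ r = 1 convention in the text.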

I now show the following.ix

Theorem 4. For any variable in a system dynamics model, the sum of eigenvalue elasticities of all outgoing links equals the sum of eigenvalue elasticities of all incoming links. This is true for both reduced and non-reduced versions of the model.

Proof. We first note that only edges that are part of a strongly connected component of the graph of the system affect the eigenvalues. Any edges not part of a strong component thus have an elasticity of zero, and we may drop them from consideration. We find the elasticity εe with respect to the gain he of edge e by differentiation of the characteristic equation P(λ) = 0, i.e.,

  εe = (he/λ)(∂λ/∂he) = −(he/λ)(∂P/∂he)/(∂P/∂λ)

Now let Ce be the set of all feedback loops that contain the edge e. Since the coefficients in P(λ) may be expressed as products of loop gains gc and since each term in P(λ) contains at most one loop that contains the edge e, we see that

  he(∂P/∂he) = Σc∈Ce gc(∂P/∂gc),  and hence  εe = Σc∈Ce εc,  where εc = −(gc/λ)(∂P/∂gc)/(∂P/∂λ)

Let Ein be the set of incoming links and Eout the set of outgoing links of the node in question, and let

  Sin = Σe∈Ein εe,  Sout = Σe∈Eout εe

Then, since each of the loops involving the node in question must contain exactly one incoming and one outgoing link for that node, each such loop belongs to Ce for exactly one e ∈ Ein and for exactly one e ∈ Eout. We therefore get

  Sin = Σe∈Ein Σc∈Ce εc = Σe∈Eout Σc∈Ce εc = Sout  ∎

In network theory, a set of values that fulfills the condition that the sums of ingoing and outgoing values are equal is called a current in the network. (The physical interpretation is that there can be no accumulation of charge in any node in an electrical network.) It is a well-established fact that the space of all possible currents in a connected network with n nodes and N edges has dimension N − n + 1 (Nielsen, 1995). The complementary space of dimension n − 1 is spanned by all possible voltages in the network, a voltage being defined by the sum around any cycle in the network being zero. It is easy to see that the space of all currents is spanned by the cycle matrix C of the graph, since each row of C is obviously a current and since the rank r of C is N − n + 1.
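Theorem 4's conservation ("current") property can be verified numerically for a reduced system, where the gain of the edge i→j is the Jacobian element Jji and its elasticity is (Jji/λ)·lj·ri. A sketch with a random dense stand-in Jacobian (my own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.normal(size=(5, 5))          # stand-in Jacobian; dense, so strongly connected

w, V = np.linalg.eig(J)
lam, r = w[0], V[:, 0]
wl, U = np.linalg.eig(J.T)
l = U[:, np.argmin(abs(wl - lam))]
l = l / (l @ r)                      # normalize so l . r = 1

# Edge (i -> j) has gain J[j, i] and elasticity (J[j, i]/lam) * l[j] * r[i].
eps = (J / lam) * np.outer(l, r)     # eps[j, i] = elasticity of edge i -> j

incoming = eps.sum(axis=1)           # node j: sum over all sources i
outgoing = eps.sum(axis=0)           # node i: sum over all targets j
assert np.allclose(incoming, outgoing)
```

Both sums reduce analytically to li·ri at each node, which is why they agree.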

Let Cr be the r × N directed cycle matrix for an independent loop set (r = N − n + 1). Then any current ε = (ε1, ε2, …, εN) in the network can be expressed as a unique linear combination v = (v1, v2, …, vr) of the rows in Cr, i.e.,

  ε = v·Cr    (7)

where one may find v by backwards elimination in Eq. (7) using the reverse order in which the independent loops were constructed. v is often referred to as the loop current corresponding to the set of loops in Cr. We can thus define the loop elasticities as the vector v.

Note that if we extended our consideration to include all loops, v would no longer be unique. In a way, this is a restatement of the insight that the redundancies in a complete list of all loops imply that the marginal effect of a loop can only be defined in the context of an independent loop set, of which the loop is a part.

In finding v in Eq. (7) for the independent loop set in question, it is useful to note that for those edges that are only contained in a single loop, the edge and loop elasticities are equal. Indeed, the matrix Cr has a sufficiently simple structure that it ought to be possible to find efficient ways of solving the equation, without having to invert the full matrix.
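As a small illustration (a toy graph of my own, not the long-wave model), the loop elasticities v can be recovered from the edge elasticities ε by solving ε = v·Cr; since the rows of Cr are independent, the solution is unique for any ε in the row space:

```python
import numpy as np

# Toy graph: nodes {1,2,3}; edges e1: 1->2, e2: 2->3, e3: 3->1, e4: 2->1.
# N = 4 edges, n = 3 nodes, so r = N - n + 1 = 2 independent loops:
#   c1 = e1,e2,e3   and   c2 = e1,e4.
Cr = np.array([[1, 1, 1, 0],
               [1, 0, 0, 1]], dtype=float)

# Any edge-elasticity vector that is a "current" lies in the row space of Cr.
v_true = np.array([0.7, -0.3])       # hypothetical loop elasticities
eps = v_true @ Cr                    # corresponding edge elasticities

# Recover the loop elasticities by solving eps = v @ Cr.
v, *_ = np.linalg.lstsq(Cr.T, eps, rcond=None)
assert np.allclose(v, v_true)
```

For the independent sets constructed by Theorem 1, back-substitution in the loop construction order would avoid the least-squares solve altogether, as noted in the text.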

Example: the long wave model


We now proceed to illustrate the method in a concrete example: John Sterman's simple long-wave model. The model was chosen because (i) it is widely known, (ii) it has a relatively simple structure yet leads to complicated dynamics, and (iii) nonlinearities play a key role in its dynamics, hence shedding some light on the applicability of the eigenvalue approach to nonlinear systems. I assume that the reader is already familiar with this model; details can be found in Sterman (1985) and Haxholdt et al. (1995). The detailed equations of the model are provided in Table 2 and a flow diagram is shown in Figure 3. Further details of the model and a Vensim model file are provided in the electronic supplement.

Table 2. Equations of the simple long wave model. Analytical details of the nonlinear functions f and g are provided in the electronic supplement to this article
dK/dt = a − d | Capital stock [units]
dS/dt = o − a | Supply line [units]
dB/dt = o + y − x | Backlog [units]
a = Sx/B | Acquisitions [units/year]
d = K/τ | Depreciation [units/year]
o = d·g(o*/d) | Capital-sector orders [units/year]
g(·) (see below) | Nonlinear function [dimensionless]
x = c·f(x*/c) | Production [units/year]
f(·) (see below) | Nonlinear function [dimensionless]
c = K/κ | Capacity [units/year]
y = exogenous | Goods-sector orders [units/year]
o* = d + (k* − K)/τK + (s* − S)/τS | Desired orders [units/year]
k* = κx* | Desired capital stock [units]
x* = B/δ | Desired production [units/year]
s* = d·B/x | Desired supply line [units]
τ = 20.0 | Average lifetime of capital [years]
δ = 1.5 | Normal delivery delay [years]
κ = 3.0 | Capital-output ratio [years]
τK = 1.5 | Time to adjust capital [years]
τS = 1.5 | Time to adjust supply line [years]
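For readers who want to experiment, the equations in Table 2 can be sketched directly in code. The nonlinear functions f and g are given only in the electronic supplement, so the piecewise-linear stand-ins below (saturation levels FMAX and GMAX) are my own guesses, chosen merely so that f(1) = g(1) = 1 with unit slope near the operating point; the resulting equilibrium and linearization are therefore illustrative only.

```python
import numpy as np

# Runnable sketch of Table 2. FMAX and GMAX are hypothetical stand-ins for the
# supplement's nonlinear functions f and g.
TAU, DELTA, KAPPA, TAU_K, TAU_S = 20.0, 1.5, 3.0, 1.5, 1.5
Y = 1.0                                   # exogenous goods-sector orders
FMAX, GMAX = 1.2, 4.0                     # guessed saturation levels

f = lambda u: np.clip(u, 0.0, FMAX)       # capacity utilization (stand-in)
g = lambda u: np.clip(u, 0.0, GMAX)       # order nonlinearity (stand-in)

def deriv(state):
    K, S, B = state
    d = K / TAU                           # depreciation
    c = K / KAPPA                         # capacity
    x_star = B / DELTA                    # desired production
    x = c * f(x_star / c)                 # production
    s_star = d * B / x                    # desired supply line
    k_star = KAPPA * x_star               # desired capital stock
    o_star = d + (k_star - K) / TAU_K + (s_star - S) / TAU_S
    o = d * g(o_star / d)                 # capital-sector orders
    a = S * x / B                         # acquisitions
    return np.array([a - d, o - a, o + Y - x])

# Analytic equilibrium for these stand-ins: d = 3Y/17, K = 20d, S = d*delta,
# B = delta*(d + Y).
d0 = 3.0 * Y / 17.0
eq = np.array([TAU * d0, d0 * DELTA, DELTA * (d0 + Y)])
assert np.allclose(deriv(eq), 0.0, atol=1e-12)

# Numerical Jacobian at the equilibrium -> eigenvalues of the linearized model.
h = 1e-6
Jac = np.column_stack([(deriv(eq + h * e) - deriv(eq - h * e)) / (2 * h)
                       for e in np.eye(3)])
eigs = np.linalg.eigvals(Jac)
assert Jac.shape == (3, 3)
```

In the nonlinear regions the Jacobian would have to be re-evaluated along the simulated trajectory, which is what the analysis of Figures 4–9 does.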

Figure 3. Flow diagram of the simple long-wave model. Numbers refer to the independent loop set in Table 3


Table 3. Independent loop set in the simple long-wave model
No. | Name | Polarity | Order | Sequence
1 | Self-ordering | + | 1 | o→B→x→k→o→g
2 | Hoarding | + | 1 | o→B→s→o→g
3 | Supply line correction | − | 1 | o→S→o→g
4 | Overtime | − | 1 | B→x→f→x
5 | Supply line first-order control | − | 1 | S→a
6 | Capital correction | − | 2 | K→o→g→o→S→a
7 | Capital decay | − | 1 | K→d
8 | Capital expansion | + | 2 | o→S→a→K→d
9 | Steady-state capital A | + | 2 | o→S→a→K→d→o→g
10 | Steady-state capital B | − | 2 | d→g→o→S→a→K
11 | Steady-state supply line | + | 2 | d→s→o→g→o→S→a→K
12 | No easy interpretation | − | 2 | B→a→S→o→g→o
13 | No easy interpretation | + | 2 | B→x→f→x→a→S→o→g→o
14 | Economic growth | + | 1 | x→a→K→c
15 | Demand balancing | − | 1 | a→K→c→f→x
16 | No easy interpretation | − | 1 | B→x→f→x→s→o→g→o

Independent loop set

The model contains three state variables: Capital stock, K, supply line of capital on order, S, and order backlog, B. There are 13 flows and auxiliary variables and 30 links between variables. The corresponding graph thus has N = 30 edges and n = 16 nodes. All the system variables are strongly connected, except the exogenous goods-sector orders, y, which thus counts as a second strongly connected component, i.e., the graph has s = 2 strong components. Applying the loop-finding algorithm found in the electronic supplement, one finds that the model contains 36 feedback loops, while the size of the independent set, according to Theorem 1, is N − n + s = 16.
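The counting rule can be checked on a toy digraph (my own example; the long-wave model's full edge list is in the electronic supplement). Here s is obtained with Kosaraju's strongly-connected-components algorithm, and the isolated node mirrors the role of the exogenous variable y:

```python
# Toy digraph (mine): one strong component {0, 1, 2} plus an isolated
# exogenous node 3.
nodes = {0, 1, 2, 3}
edges = [(0, 1), (1, 2), (2, 0), (1, 0)]

def scc_count(nodes, edges):
    """Number of strongly connected components (Kosaraju's algorithm)."""
    adj = {u: [] for u in nodes}
    radj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    seen, order = set(), []
    def dfs(u, graph, out):
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs(v, graph, out)
        out.append(u)

    for u in nodes:                  # first pass: finish-time order on G
        if u not in seen:
            dfs(u, adj, order)
    seen = set()
    comps = 0
    for u in reversed(order):        # second pass: sweep the transpose
        if u not in seen:
            dfs(u, radj, [])
            comps += 1
    return comps

N, n, s = len(edges), len(nodes), scc_count(nodes, edges)
assert (N, n, s) == (4, 4, 2)
assert N - n + s == 2   # the two independent loops 0->1->2->0 and 0->1->0
```

As in the long-wave model, the edges all lie inside strong components, which is the situation in which the count N − n + s applies directly.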

We can illustrate the procedure to construct an independent set on the flow diagram in Figure 3. The first step is to identify a loop in the system. Since the model is built around the idea of capital self-ordering, it is natural to begin with this positive loop, and we set

  c1 = o→B→x→k→o→g (loop 1 in Table 3)

Next, we identify a new link from one of the nodes in c1, e.g., the link B[RIGHTWARDS ARROW]s, and a shortest path from s* to a node in c1, e.g., s[RIGHTWARDS ARROW]o. We construct a second cycle from these new links plus selected links from c1:

  c2 = o→B→s→o→g (loop 2 in Table 3)

This loop is the “hoarding” mechanism, a positive loop that arises because increased delivery delays increase the desired supply line, leading to further ordering and hence even longer delivery delays.

We repeat the procedure, choosing a link from a node in c1 ∪ c2, e.g., o[RIGHTWARDS ARROW]S, a shortest path back to c1 ∪ c2, e.g., S[RIGHTWARDS ARROW]o, and construct a cycle

  c3 = o→S→o→g (loop 3 in Table 3)

The loop is the “supply line correction” mechanism, which regulates orders to adjust the supply line towards the desired level, taking account of capital already ordered but not yet delivered. It is a necessary component in any stock-management task with a delay, but is frequently ignored by human decision makers (Diehl, 1989; Sterman, 1989a, 1989b).

One may proceed in this fashion, until all edges in the strongly connected parts of the system are exhausted. The result is a list of independent loops. One such list, on which we shall base the subsequent analysis, is provided in Table 3. The loops have also been indicated in the flow diagram of the model in Figure 3. Most of the loops have an intuitive interpretation, as indicated by their names. However, a few of them, typically those of higher order and with long causal chains, are difficult to interpret. One would hope that such obscure loops turn out not to play a major role in the dynamics. For the most part, this is indeed true, but not always, as we will see in the analysis below.x

Model behavior

The model, when simulated, quickly settles into a limit cycle with a period of approximately 50 years. Figure 4 shows the behavior of a few key variables, as well as the three eigenvalues of the linearized system. Furthermore, Figure 5 plots the gains of each of the independent loops from Table 3. In the following discussion of feedback loops, numbers in parentheses refer to the loop numbers in Table 3.


Figure 4. Simulation (top) and eigenvalues (bottom) of the simple long-wave model



Figure 5. Feedback loop gains. Note the difference in scale in the top and bottom portions of the figure


The strong nonlinear effects in the model are evident in large changes in both the eigenvalues and the loop gains of the system. One may divide the long-wave cycle into four phases, which we will name “self-order growth”, “capital growth”, “self-order collapse”, and “capital depreciation”, respectively. (The four phases are denoted by roman numerals in Figures 4 and 5.)

In the first phase, the order backlog B grows rapidly as a result of the self-ordering mechanism (1): orders o from the capital sector to itself swell the backlog, increasing desired production x* and desired capital k*, which leads to yet more orders for capital. However, because capital orders are limited by the nonlinear function g, which sets a maximum for the expansion rate of the capital stock, the gain of the self-ordering loop drops close to zero after just 2–3 years!

At this point, the system enters the second phase, where further growth in capital and orders is mainly determined by two positive loops: the “economic growth” loop (14) reflects the standard physical capital accumulation in a growing economy, while the “capital expansion” loop (8) arises because, when orders are limited by the “maximum fractional expansion” expressed in the nonlinear function g, the order rate o is anchored on capital depreciation d, implying that orders are now proportional to the capital stock.

After about 6 years, capacity catches up with desired output, and the loop gains once again shift back to their constellation in phase I, as the system enters the third phase: “self-order collapse”. Now, the positive loops work in reverse, quickly driving desired production down.

Once again, however, the gain of the self-ordering loop is driven to zero in a few years, this time by the non-negativity constraint on orders, again reflected in the function g. The system now enters the final phase, where capacity slowly falls and the system behavior is dominated by the single negative capital-depreciation loop (7).

Loop elasticities

Although most of these results can be found by simply observing the shifting gains of the feedback loops over time and applying “old-fashioned” intuition, the eigenvalue elasticity analysis yields further insight into the role of individual loops.

During the “self-order growth” phase, the rapid growth is attributable to a pair of complex conjugate eigenvalues with a positive real part. The time constant for the exponential growth is about 3 years, while the period is too long (about 12 years) to play much of a role in the dynamics. Therefore, it is primarily a change in the real part of the eigenvalue that would indicate a strong effect on the dynamics at this point in time.

The elasticity of the real and imaginary part of the positive eigenvalue pair is shown in Figure 6. It is clear from the figure that the self-ordering loop plays a dominant destabilizing role in this mode, with a very large elasticity of the real part. The main stabilizing influences come from the negative supply-line and capital correction loops, though both also increase the imaginary part of the eigenvalue. It is worth noting, however, that the loop gains and dominances shift very rapidly during the early expansion phase. For instance, a more detailed year-by-year analysis (not included here) reveals that the “overtime” loop (4) is stabilizing at the very beginning of the phase, but quickly loses influence.


Figure 6. Loop eigenvalue elasticities for the positive complex eigenvalue pair λ = a ± ib = 0.319 ± 0.444 during phase I (year 143) of the long-wave cycle. Bars show elasticities for the real and imaginary parts, respectively. Inset shows eigenvalues in the complex plane


During the “capital growth” phase, the continuing growth is attributed to a single positive real eigenvalue with a time constant of about 5 years. The loop elasticities for this eigenvalue are shown in Figure 7. Now the self-ordering loop plays no role at all. Instead, the main destabilizing influence comes from the above-mentioned capital-expansion and economic-growth loops. Likewise, the overtime loop, though it is a negative loop, is destabilizing in this phase: Because growth is anchored on the increase in the capital stock, overtime allows for more output and hence faster capital accumulation, boosting the growth rate of the system. More surprisingly, the supply-line correction loop (3) is also destabilizing. However, its destabilizing influence is largely cancelled by a more obscure positive loop (13) in the system that shares many of its links. This is an example of how the loop analysis may not always lead to simple intuitive results.


Figure 7. Loop eigenvalue elasticities for the positive eigenvalue λ2 = 0.216 during phase II (year 150) of the long-wave cycle. Inset shows eigenvalues in the complex plane


During the “self-order collapse” phase (Figure 8), there is once again an unstable pair of complex eigenvalues, with a time constant of about 2.5 years and a period of 35 years; the real part thus dominates the dynamics. As in the first phase, self-ordering plays a predominant destabilizing role, but the picture is complicated, as many other loops affect the eigenvalue, particularly its imaginary part. The “economic growth” loop (14) is clearly destabilizing, as is the overtime loop (4); because they both increase the rate of capital accumulation, they exacerbate the peaking of capacity during the phase. As expected, the “capital expansion” loop (8) plays no role during this phase, since orders are no longer anchored on the capital stock. However, one must again be aware that loop gains and dominances shift very rapidly, so that a snapshot picture may not fully represent the whole time interval.


Figure 8. Loop eigenvalue elasticities for the positive complex eigenvalue pair λ = a ± ib = 0.423 ± 0.184 during phase III (year 155) of the long-wave cycle. Bars show elasticities for the real and imaginary parts, respectively. Inset shows eigenvalues in the complex plane


Finally, during the “capital depreciation” phase (Figure 9), most of the feedback loops in the system are shut off by the nonlinear mechanisms that prevent orders, backlog, and supply line from falling below zero. The system now contains three real negative eigenvalues, the largest having a time constant of about 20 years, corresponding to the average lifetime of capital. The loop elasticities clearly show that three loops control the three modes. The 20-year decay is caused by the negative first-order capital-depreciation loop (7). The two other negative eigenvalues relate directly to the overtime loop (4) (which acts as a first-order control on output preventing backlog from falling below zero), and a similar first-order control loop (5) that controls shipments to the capital sector and prevents the supply line from becoming negative.


Figure 9. Loop eigenvalue elasticities for the three negative eigenvalues, λ1 = − 1.372, λ2 = − 1.042, λ3 = − 0.047, during phase IV (year 175) of the long-wave cycle. Inset shows eigenvalues in the complex plane


It is interesting to note the role of the “hoarding” mechanism (2). The elasticity analysis indicates that this loop plays no role in the dynamics at any time. This is clearly confirmed in Figure 10, which compares the behavior of the original model to one in which the hoarding loop has been disabled by letting the desired supply line s* be based on the fixed normal delivery delay δ, i.e., s* = dδ. It is seen that the dynamics of the two models are virtually identical. The intuitive reason is that the hoarding mechanism is much weaker than self-ordering, so that it is simply not fast enough to “catch up” during the stages where the two loops are active. At other times, both loops are disabled by the nonlinearity in ordering.


Figure 10. Comparison of model behavior with the hoarding loop disabled to the original model. The plot shows desired production x*


Conclusion


The paper presents a method that may be a step toward a systematic analysis of the role of feedback loops in system behavior. The most significant contribution is the notion of an independent loop set, which gives grounds for optimism about using the method with large-scale models, even though such models may contain millions of feedback loops. The example of the simple long wave model demonstrates how the method can aid in developing a deeper understanding of the dynamics, even when the system is highly nonlinear.

Although the present paper hopefully demonstrates the power of formal analytical tools, there is much room for improvement and further work. First, the eigenvalue analysis says nothing about how the behavior of individual variables is affected by feedback loops, which is particularly a problem when the eigenvalues are complex numbers. To answer this requires consideration of the eigenvectors of the system, e.g., expressed as the participation factors (Eberlein, 1984). Unlike the eigenvalues, the eigenvectors cannot be expressed as functions of loop gains only and one would therefore no longer have a clear definition of the relative importance of a particular loop. Nevertheless, it ought to be possible to combine the loop gain analysis with the participation factor analysis and perhaps yield results that are less abstract than the eigenvalue analysis alone.xi

Another issue relates to the non-uniqueness of an independent loop set. As previously mentioned, one must choose a particular loop description of the system (an independent set) on which to base the analysis—otherwise the notion of loop elasticities has no meaning. But there is typically a very large number of such sets to choose from, and the elasticity measure of a particular loop will depend substantially on what other loops are chosen for the independent set. Ideally, one would want the maximum possible “separation” in loop elasticities, i.e., a situation where a few loops have very large elasticities while the others are close to zero. It may be possible to augment the procedure in Theorem 1 to make it more likely that one ends up with such a set. The procedure constructs new loops by incorporating new links from the system, so that the new loop is the only one so far to contain those links. For any link that only occurs in one loop in the independent set, the elasticity of the link and the loop are equal. If, therefore, one could “save” those links that have large numerical elasticities to the very end of the procedure, it would be more likely that such links would end up only appearing in a single loop, which would consequently have a large elasticity. Clearly, there is room for further exploration of this possibility, perhaps building on graph-theory techniques from transportation networks.

Another direction may be to generalize the notion of gains by incorporating the concepts of transfer functions and Laplace transforms from classical control theory. In the present analysis, the definition of gains ignores integrations (levels) along the causal chain. One could define the gain of an integration by its Laplace transform, 1/s. Moreover, one could perhaps represent entire subsystems as single “nodes” in the system, where the gain is defined as the transfer function T(s) of the subsystem.

Consider, for instance, a simple information delay, where the output y is an exponential smooth of the input x, with time constant τ, i.e.,

  dy/dt = (x − y)/τ

In the present analysis, we would identify a path x→y with gain 1/τ and a loop y→y with gain −1/τ. If instead we use the Laplace transform, we find that

  Y(s) = T(s)X(s),  with T(s) = 1/(1 + τs)

which we can interpret as a single link x[RIGHTWARDS ARROW]y with gain T(s). This seems closer to the way in which models are conceived and built, where the delay is just a “modification” of the basic causal link from x to y.
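The transfer-function claim can be checked by simulation. The sketch below (my own) drives the smooth with a sinusoid of frequency ω and compares the steady-state output amplitude with |T(iω)| = 1/√(1 + (τω)²):

```python
import math

# Exponential smooth dy/dt = (x - y)/tau driven by x = sin(omega * t).
tau, omega = 1.5, 2.0
dt, t_end = 1e-3, 60.0

y, t, peak = 0.0, 0.0, 0.0
while t < t_end:
    x = math.sin(omega * t)
    y += dt * (x - y) / tau                 # Euler integration of the smooth
    t += dt
    if t > t_end - 2 * math.pi / omega:     # sample the last full input period
        peak = max(peak, abs(y))

gain = 1.0 / math.sqrt(1.0 + (tau * omega) ** 2)   # |T(i*omega)|
assert abs(peak - gain) < 0.02
```

After the transient has decayed, the simulated amplitude matches the predicted frequency-domain gain to within the discretization error.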

Perhaps it is even possible to employ a canonical approach, where one starts with major loops between sectors of the model. (The case of a multi-input/multi-output connection can be tackled using a matrix form of the transfer function.)

The use of Laplace transforms ties in nicely with George Richardson's (1995) analysis. By inserting a value of s equal to the eigenvalue in question, one can obtain a measure of the gain and phase of loops and links as they pertain to this particular mode. What remains to be seen, however, is whether the formalism in the present paper carries over to this generalized notion of gains.

In any case, a number of alternative formal tools and representations ought to be made available to the practitioner, since the real utility of such tools can only be assessed by trying them out in practice. The author's belief is that many of these tools may be found in discrete mathematics, graph theory, and network theory. The final point of the paper is thus an appeal to fellow system dynamicists to look further into these disciplines. Perhaps a place to start is the field of “qualitative simulation” (Dolado, 1992).

Endnotes
  • i

    Following the original presentation of this paper, the issue of where the method seems most useful was treated extensively in Kampmann and Oliva (2006). Investigating three case studies, they found that the method shows substantial promise for quasi-linear oscillating models, while the usefulness for single-transient models or very highly nonlinear models will depend upon the particular circumstances.

  • ii

The “dot” notation, ẋ, refers to the time derivative dx/dt.

  • iii

The 1996 version of the paper used boldface for vectors and matrices. In the present version, I have adopted the modern notation where vectors and matrices are written like other variables. Given that it is always clear from the context whether the symbol represents a vector or a scalar, the modern notation is easier to read.

  • iv

    This section and the supporting material in the original appendix contained some errors. (My thanks to Sergio Quijada, University of Central Florida, for pointing them out.) These have been corrected in the present version of the paper and the Appendix.

  • v

    The notation in the following is slightly different from the original paper, and some minor errors in the equations have been corrected.

  • vi

    The procedure to construct an independent loop set has been implemented as a computer algorithm in the Mathematica package in the electronic supplement. The supplement also lists the pseudo-code for the algorithm.

  • vii

    One might ask whether any independent loop set might be constructed by the algorithm in Theorem 1. This turns out not to be the case. First, we note that the algorithm implies that each new chosen loop in the set has at least one edge that is not in any of the previous loops. Thus it must be the case that there is at least one column in the reduced cycle matrix Cr that has all 0's, except for the last element, namely the column that corresponds to a new edge in the last loop. The example found in the electronic supplement on binary ranks shows an independent loop set where each column has an even number of 1's; hence this set cannot be constructed by the method in Theorem 1. On the other hand, for any independent loop set constructed by the procedure in Theorem 1, the gains of the loops in the set can be assigned arbitrary signs, since each additional loop in the set contains edges not found in any of the previous loops.

  • viii

    It is important to note that while the edge gains between strong components do not determine the eigenvalues, they do affect the eigenvectors of the system and consequently the degree to which a particular mode or exogenous variable is expressed in the behavior of a given variable.

  • ix

    Forrester (1983) pointed out the result but did not provide a proof of the assertion.

  • x

    Oliva (2004) subsequently developed the “Shortest Independent Loop Set” (SILS) algorithm in order to address the complexity of interpreting the loops. The algorithm constructs an independent loop set with the shortest possible loops and tends to produce more intuitively understandable loops. However, the SILS is not necessarily unique either. (See also Huang, 2012, for more recent developments on SILS.)

  • xi

    Subsequently, the use of eigenvectors in the analysis has been explored by a number of researchers (e.g., Güneralp, 2006; Gonçalves, 2009; Saleh et al., 2010).

Acknowledgements


My thanks to Jan Jantzen, Frank Nielsen, George Richardson, and Carsten Thomassen for providing valuable suggestions and comments. Errors remain my own. This research was supported by a postdoctoral grant from the Danish Research Council.

  • 1

    If inline image can be assigned a particular direction, e.g., in the case of a path or a cycle, then one may define vi = − 1 or vi = + 1, depending upon whether edge ei has the opposite or the same direction as the one assigned to inline image.

  • 2

    A simple algorithm to identify all loops in a system was implemented by the author in Mathematica™, along with a number of related graph-theory algorithms. The package is available in the electronic supplement.

  • 3

    Note that a model may well contain more reduced loops than non-reduced loops. This can happen whenever the model equations imply dependencies among the elements in the Jacobian of the reduced system, for instance when two levels are connected by a flow between them. The reduced loops, being based upon the Jacobian elements, do not take account of these dependencies.

  • 4

    A spanning tree in a graph S is a subset of the edges in S that connects all nodes in S but contains no cycles.

  • 5

    When the eigenvalues are required at several points in a simulation, e.g., due to nonlinearities and shifting gains, it may be possible to provide more efficient procedures that take advantage of the continuous changes in these quantities over time. This ought to be a field for further exploration.

References

  • Diehl EW. 1989. A study of human control in stock-adjustment tasks. In Computer-Based Management of Complex Systems: Proceedings of the 1989 International System Dynamics Conference, Milling PM, Zahn EOK (eds). Springer: Berlin.
  • Dolado JJ. 1992. Qualitative simulation and system dynamics. System Dynamics Review 8(1): 55–81.
  • Eberlein RE. 1984. Simplifying dynamic models by retaining selected behavior modes. PhD thesis, MIT, Cambridge, MA.
  • Forrester JW. 1971. World Dynamics. Wright-Allen: Cambridge, MA.
  • Forrester NB. 1982. A dynamic synthesis of basic macroeconomic policy: implications for stabilization policy analysis. PhD thesis, MIT, Cambridge, MA.
  • Forrester NB. 1983. Eigenvalue analysis of dominant feedback loops. In Proceedings of the 1983 International System Dynamics Conference; 177–198.
  • Forrester JW, Senge PM. 1980. Tests for building confidence in system dynamics models. TIMS Studies in the Management Sciences 14: 209–228.
  • Gonçalves P. 2009. Behavior modes, pathways and overall trajectories: eigenvalue and eigenvector analysis of dynamic systems. System Dynamics Review 25(1): 35–62.
  • Güneralp B. 2006. Towards coherent loop dominance analysis: progress in eigenvalue elasticity analysis. System Dynamics Review 22(3): 263–289.
  • Haxholdt C, Kampmann CE, Mosekilde E, Sterman JD. 1995. Mode-locking and entrainment of endogenous economic cycles. System Dynamics Review 11(3): 177–198.
  • Huang H, Howley E, Duggan J. 2012. Observations on the shortest independent loop set algorithm. System Dynamics Review 28(3): 276–280.
  • Kampmann CE, Oliva R. 2006. Loop eigenvalue elasticity analysis: three case studies. System Dynamics Review 22(2): 141–162.
  • Nielsen F. 1995. Grafteori: Algoritmer og Netværk [Graph Theory: Algorithms and Networks]. Polyteknisk Forlag: Lyngby, Denmark.
  • Oliva R. 2004. Model structure analysis through graph theory: partition heuristics and feedback structure decomposition. System Dynamics Review 20(4): 313–336.
  • Press WH, Flannery BP, Teukolsky SA, Vetterling WT. 1988. Numerical Recipes in C. Cambridge University Press: Cambridge, UK.
  • Reinschke KJ. 1988. Multivariable Control: A Graph-Theoretical Approach. Springer: Berlin.
  • Richardson GP. 1995. Loop polarity, loop dominance, and the concept of dominant polarity. System Dynamics Review 11(1): 67–88.
  • Saleh M, Oliva R, Kampmann CE, Davidsen PI. 2010. A comprehensive analytical approach for policy analysis of system dynamics models. European Journal of Operational Research 203(3): 673–683.
  • Sterman JD. 1985. A behavioral model of the economic long wave. Journal of Economic Behavior and Organization 6: 17–53.
  • Sterman JD. 1989a. Misperceptions of feedback in dynamic decision making. Organizational Behavior and Human Decision Processes 43(3): 301–339.
  • Sterman JD. 1989b. Modeling managerial behavior: misperceptions of feedback in a dynamic decision-making experiment. Management Science 35(3): 321–339.

Biography

  • Christian E. Kampmann is associate professor at the Department of Innovation and Organizational Economics at Copenhagen Business School (CBS). He has an M.Sc. in Engineering from the Department of Physics at the Technical University of Denmark (DTU) and a Ph.D. in system dynamics from the Sloan School of Management, MIT. His substantive research interests focus on sustainable business strategy and the dynamics of the transition to a renewable energy system, particularly in the area of transportation and urban mobility. Furthermore, he is interested in advanced quantitative methods for system dynamics analysis, including eigenvalue analysis, statistical methods, and model testing techniques. He teaches sustainable business strategy and innovation, economics, statistics, and introductory system dynamics at CBS and the Technical University of Denmark.

Supporting Information


Supporting information may be found in the online version of this article.

Filename: sdr_1483_sm_mathematical_appendix.pdf
Format: PDF document
Size: 143K
Description: Supporting Information

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.