Estimating and increasing the structural robustness of a network

The capability of a network to cope with threats and survive attacks is referred to as its robustness. This paper discusses one kind of robustness, commonly referred to as structural robustness, which increases when the spectral radius of the adjacency matrix associated with the network decreases. We discuss computational techniques for identifying edges whose removal may significantly reduce the spectral radius. Nonsymmetric adjacency matrices are studied with the aid of their pseudospectra. In particular, we consider nonsymmetric adjacency matrices that arise when people seek to avoid being infected by Covid-19 by wearing facial masks of different qualities.

1. Introduction. Networks appear in many areas, including transportation, communication, social science, and chemistry; see, e.g., Estrada [5] and Newman [15] for many examples. An edge-weighted network is represented by a graph G = {N, E, W}, which consists of a set of nodes N = {n_j}_{j=1}^n, a set of edges E = {e_j}_{j=1}^m that connect the nodes, and a set of edge weights W = {w_j}_{j=1}^m that indicate the importance of the edges. The weights are assumed to be positive. For instance, in a road network, the nodes n_j may represent cities, the edges e_j may represent roads between the cities, and the edge weight w_j may be proportional to the amount of traffic on the road represented by edge e_j. We refer to a graph G as undirected if for each edge e_j there is an edge e_k that points in the opposite direction and has the same weight as e_j. If this is not the case, then the graph G is said to be directed.
The adjacency matrix A = [a_{ij}]_{i,j=1}^n ∈ R^{n×n} associated with the graph G has the entry a_{ij} = w_k if there is an edge e_k emerging from node n_i and ending at node n_j; if the graph is undirected, then also a_{ji} = w_k. All other matrix entries vanish. Thus, the matrix A is symmetric if and only if the graph G is undirected. We will assume that there are no self-loops and no multiple edges. The former implies that the diagonal entries of A vanish. Typically, the number of edges, m, satisfies 1 ≤ m ≪ n². Then the matrix A is sparse.
The maximum of the magnitudes of the eigenvalues of A is known as the spectral radius of A; we denote it by ρ(A). It has been shown that the spectral radius is an important indicator of how flu-type infections spread in the network associated with the adjacency matrix A; the smaller ρ(A), the less spread; see, e.g., [11,14] and below. This paper seeks to shed light on how the spectral radius of an adjacency matrix can be reduced by targeted edge perturbations, i.e., by reducing edge weights or removing edges. It is well known that reducing an edge weight, or removing an edge, does not increase the spectral radius of a nonnegative matrix; see, e.g., [9, Corollary 8.1.19]. We are interested in identifying which weights should be reduced, or which edges should be removed, to achieve a significant decrease of the spectral radius.
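The monotonicity of the spectral radius under edge removal can be illustrated with a short NumPy sketch; the 4-node network and its weights below are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical 4-node undirected weighted network.
A = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.5],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.5, 1.0, 0.0]])

def spectral_radius(M):
    """Largest eigenvalue magnitude of M."""
    return max(abs(np.linalg.eigvals(M)))

rho = spectral_radius(A)

# Removing an edge (zeroing one weight) cannot increase the spectral
# radius of a nonnegative matrix [9, Corollary 8.1.19].
B = A.copy()
B[1, 2] = 0.0          # remove the edge from n_2 to n_3 (1-based nodes)
assert spectral_radius(B) <= rho
```

The same monotonicity holds when a weight is merely reduced rather than set to zero.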
Howard et al. [10] discuss the benefits of wearing facial masks to reduce Covid-19 transmission. Several studies found 70% or higher efficacy of facial masks in protecting the wearer from Covid-19 infection. They found that wearing a mask protects the people around the wearer, as well as, to a lesser degree, the wearer himself. The type of mask also is important; see Gandhi et al. [6,7] for related discussions.
Let the nodes in a graph represent people and the edge weights represent the possibility of getting a sufficient viral load to become ill with Covid-19. The modeling of facial masks of different quality results in a nonsymmetric adjacency matrix A associated with the graph; see Section 3. We refer to a network (or graph) that is robust against the spread of viruses as structurally robust, and measure the structural robustness in terms of the spectral radius of the adjacency matrix associated with the graph. A network is more structurally robust, the smaller the spectral radius is. We are interested in determining which edge-weights should be reduced, or which edges should be removed, to give a relatively large decrease in the spectral radius of the adjacency matrix.
This paper is organized as follows. Section 2 discusses the structural robustness of a network. In particular, the sensitivity of the eigenvalues to perturbations of the adjacency matrix A associated with the network is considered. Tridiagonal adjacency matrices that model the role of face masks are described. Section 3 is concerned with the calculation of the spectral radius of a large matrix, and with the determination of edges that should be eliminated, or whose weight should be reduced, to achieve a relatively large decrease in the spectral radius of A. Properties of the pseudospectrum of a matrix are reviewed and applied. Some large-scale computed examples are presented in Section 4, and concluding remarks can be found in Section 5.
2. Structural robustness. A formulation of structural robustness against the spread of viruses comes from spectral graph theory. Epidemiological theory predicts that if the effective infection rate of a virus in an epidemic is below the reciprocal of the spectral radius ρ(A) of the adjacency matrix A associated with the graph that represents the network, then the virus contamination in the network dies out over time. In more detail, assume a universal virus birth rate β along each edge that is connected to an infected node, and a virus death rate δ for each infected node. If the effective infection rate, given by β/δ, is below the epidemic threshold for the network, i.e., if

  β/δ < 1/ρ(A),

then the infections tend to zero exponentially over time. In fact, the reciprocal of the spectral radius ρ(A) is a network-epidemic threshold in a Susceptible-Infectious-Susceptible (SIS) network, in which the evolution of the viral state s_i(t) ≥ 0 of node n_i, i = 1, 2, ..., n, at time t is governed by the system of differential equations

  ds(t)/dt = β diag(e − s(t)) A s(t) − δ s(t),

where s(t) = [s_1(t), s_2(t), ..., s_n(t)]^T and e = [1, 1, ..., 1]^T ∈ R^n. Indeed, if one has ρ(δ^{−1}βA) < 1, then s(t) → 0 as t → ∞; see, e.g., [11] and references therein. The smaller ρ(A) is, the higher is the structural robustness of the network against the spread of a virus. Hence, in order to enhance the structural robustness of a network, one may want to reduce the weights of suitable edges in E of the graph G, or eliminate certain edges; see [14]. Let e_j = [0, ..., 0, 1, 0, ..., 0]^T ∈ R^n denote the jth axis vector and assume that the entry a_{hk} of A is positive. Consider the rank-one matrix

  E_{hk} = −a_{hk} e_h e_k^T,    (2.1)

where the superscript T denotes transposition, and regard the perturbed adjacency matrix

  Ã = A + ε E_{hk},

where ε > 0 is chosen small enough so that the matrix Ã is nonnegative.
Assume for the moment that the graph G associated with the adjacency matrix A is strongly connected, i.e., that starting at any node of the graph, one can reach any other node of the graph by traversing the edges along their directions. This is equivalent to A being irreducible. Then the Perron-Frobenius theorem applies, see, e.g., [9, Chapter 8], and shows that the eigenvalue of A of largest magnitude is unique and equals ρ(A). This eigenvalue is commonly referred to as the Perron root of A. Moreover, the right and left eigenvectors of A associated with the Perron root are unique up to scaling. They can be normalized to be of unit Euclidean norm and to have only positive entries. These normalized vectors are known as the right and left Perron vectors, respectively, of A. We define the spectral impact of the directed edge e_{hk} ∈ E on the spectral radius of A as the relative change of ρ(A) induced by the edge perturbation (2.1), i.e.,

  s_{hk}^{(ρ(A))} = (ρ(A) − ρ(A + εE_{hk})) / ρ(A).

A first-order approximation of s_{hk}^{(ρ(A))}, valid for 0 < ε ≪ 1, is derived in [14, Eq. (17)] as follows. Let λ be a simple eigenvalue of A, and let x, y ∈ R^n be associated right and left eigenvectors. A perturbation A + εE with ||E|| = 1 then changes λ by ε y^T E x / (y^T x) + O(ε²), and the condition number of λ is defined as

  κ(λ) = ||x|| ||y|| / |y^T x|;

see, e.g., [16]. Here and throughout this paper ||·|| denotes the Euclidean vector norm or the spectral matrix norm. In particular, the condition number of the largest eigenvalue ρ(A) of A is given by

  κ(ρ(A)) = 1 / (v^T u),

where u and v denote the unit right and left Perron vectors of A. Therefore,

  s_{hk}^{(ρ(A))} ≈ ε a_{hk} v_h u_k / (ρ(A) v^T u).    (2.2)

Notice that the first-order approximation (2.2) of the spectral impact of the edge e_{hk} ∈ E, which points from node n_h to node n_k, depends on the right and left Perron vectors of A, as well as on the weight of the edge e_{hk}. To make ρ(A) smaller, we may consider reducing the weight(s) associated with the largest coefficients

  α_{hk} = a_{hk} v_h u_k.    (2.3)

To determine these coefficients, one needs the Perron vectors u and v.
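The first-order estimate (2.2) can be checked numerically. The following sketch uses a dense random nonnegative test matrix (not a network from this paper) and compares the estimate with the actual relative change of the Perron root after reducing one edge weight.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.uniform(0.5, 1.0, (n, n))
np.fill_diagonal(A, 0.0)              # no self-loops

def perron_pair(M):
    """Perron root and unit right Perron vector of a nonnegative
    irreducible matrix M (dense eigensolver, for illustration only)."""
    w, V = np.linalg.eig(M)
    i = np.argmax(w.real)             # Perron root is real and simple
    x = np.abs(V[:, i].real)          # normalize to positive entries
    return w[i].real, x / np.linalg.norm(x)

rho, u = perron_pair(A)
_, v = perron_pair(A.T)               # left Perron vector

# First-order spectral impact (2.2) of reducing the weight of edge e_{hk}.
h, k, eps = 0, 1, 1e-3
estimate = eps * A[h, k] * v[h] * u[k] / (rho * v @ u)

B = A.copy()
B[h, k] *= (1.0 - eps)                # perturbation A + eps*E_{hk}, cf. (2.1)
actual = (rho - max(abs(np.linalg.eigvals(B)))) / rho
assert abs(estimate - actual) <= 0.05 * abs(estimate)
```

For ε this small the first-order estimate agrees with the computed impact to within a fraction of a percent.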
When the matrix A is symmetric, it is meaningful to require that the perturbation of A also be symmetric. We therefore define the symmetric perturbation matrix

  E_{hk} = −a_{hk} (e_h e_k^T + e_k e_h^T)

and consider the perturbed matrix

  Ã = A + ε E_{hk}

for some small ε > 0. Then a first-order approximation of the spectral impact s_{hk}^{(ρ(A))} of the undirected edges e_{hk}, e_{kh} ∈ E on the spectral radius ρ(A) is given by

  s_{hk}^{(ρ(A))} ≈ 2ε a_{hk} u_h u_k / ρ(A),

where we have used the fact that the right and left Perron vectors coincide and that v^T u = 1; see [14, Eq. (21)].

Remark 1. Let the adjacency matrix A = [a_{hk}]_{h,k=1}^n ∈ R^{n×n} be diagonalizable, i.e., A = XΛX^{−1}, where the columns of X ∈ R^{n×n} are linearly independent eigenvectors of A, and Λ = diag[λ_1, λ_2, ..., λ_n] contains the eigenvalues. Then

  ρ(A)^k ≤ ||A^k|| ≤ κ(X) ρ(A)^k,  k = 1, 2, ...,    (2.4)

where κ(X) = ||X|| ||X^{−1}|| is the spectral condition number of X. In particular, when A is symmetric, we have ρ(A)^k = ||A^k|| for all k.
A walk of length k starting at node n_i and ending at node n_j is a sequence of k + 1 nodes n_{j_1}, n_{j_2}, ..., n_{j_{k+1}} with n_{j_1} = n_i and n_{j_{k+1}} = n_j, such that there is an edge that points from node n_{j_p} to node n_{j_{p+1}} for p = 1, 2, ..., k; see [5,15]. Edges in a walk may be repeated. If the graph is unweighted, then the entry (i, j) of A^k equals the number of walks of length k from node n_i to node n_j. For weighted graphs, the entries of A^k are suitably modified. In view of the bounds (2.4), it may be a good idea to eliminate edges in long walks or to reduce the weight of such edges.
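The walk-counting property of matrix powers can be verified on a small hypothetical example, an unweighted path graph on four nodes.

```python
import numpy as np

# Unweighted undirected path graph: n1 - n2 - n3 - n4.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

# Entry (i, j) of A^k counts walks of length k from n_{i+1} to n_{j+1}.
A2 = np.linalg.matrix_power(A, 2)
A3 = np.linalg.matrix_power(A, 3)

# Two walks of length 2 from n2 back to n2: via n1 and via n3.
assert A2[1, 1] == 2
# Two walks of length 3 from n1 to n2: 1-2-1-2 and 1-2-3-2.
assert A3[0, 1] == 2
```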
Consider the Frobenius matrix norm ||A||_F = (Σ_{h,k=1}^n a_{hk}²)^{1/2}. The inequality ||A|| ≤ ||A||_F and the bounds (2.4) suggest that, in order to reduce ρ(A) the most, it may be a good idea to remove nodes of G with many edges, or to reduce the weights of edges that emerge from or end at these nodes. In other words, we would like to remove, or reduce the Euclidean norm of, rows and/or columns of the adjacency matrix whose Euclidean norm is relatively large. This can be achieved by removing specific nodes or by reducing edge weights.
We conclude this section with a few illustrations for some weighted graphs that are associated with tridiagonal adjacency matrices. First consider the case when each node represents a person and all persons wear the same kind of facial mask. The persons stand in a line, and each person can infect only the following or preceding person in the line. The adjacency matrix is

  A = σ tridiag(1, 0, 1) ∈ R^{n×n},    (2.5)

i.e., the symmetric tridiagonal Toeplitz matrix with zero diagonal and sub- and superdiagonal entries σ, where the edge weight σ > 0 depends on properties of the mask. A high-quality mask corresponds to a small value of σ > 0. The graph associated with the matrix (2.5) is undirected, (strongly) connected, and weighted.

Proposition 2.1. The Perron root ρ of the nonnegative symmetric tridiagonal Toeplitz matrix (2.5) is 2σ cos(π/(n+1)). The Perron vector u = [u_1, u_2, ..., u_n]^T, suitably scaled, has the entries u_k = sin(kπ/(n+1)), 1 ≤ k ≤ n. In particular, when n is odd, the largest entry is u_{(n+1)/2}, and when n is even, the two largest entries, u_{n/2} and u_{n/2+1}, have the same size.
Proof. Explicit formulas for eigenvalues and eigenvectors of tridiagonal Toeplitz matrices can be found in, e.g., [19].
Note that the Perron vector in Proposition 2.1 is independent of the value σ > 0 of the off-diagonal entries of (2.5). Moreover, the Perron vector suggests that the node n_{(n+1)/2} for n odd, and the nodes n_{n/2} and n_{(n+2)/2} for n even, are the most important nodes of the graph; see, e.g., Bonacich [3]. This agrees with the intuition that the nodes "in the middle" of the graph are the best connected nodes and, therefore, the most important ones. According to the estimate (2.2), edges that connect these nodes to the graph have the largest spectral impact. Consequently, to decrease the spectral radius ρ(A) of the matrix (2.5) maximally, we should reduce the weights of the edges e_{hk}, e_{kh} ∈ E, where
• h = (n + 1)/2 and k = (n + 3)/2, or h = (n + 1)/2 and k = (n − 1)/2, if n is odd;
• h = n/2 and k = (n + 2)/2, if n is even.
Note that setting these edge-weights to zero results in a disconnected graph. It is often meaningful to keep a small positive weight instead; this yields an irreducible adjacency matrix. Properties of tridiagonal matrices with some "tiny" positive off-diagonal entries have been studied by Parlett and Vömel [21].
Example 2.1. Let A ∈ R^{25×25} be the symmetric tridiagonal Toeplitz matrix (2.5) with σ = 1. Thanks to Proposition 2.1, one easily computes the spectral radius ρ(A) = 1.985418 and the unit-norm Perron vector u. If one chooses to reduce the weights of the edges e_{13,14} and e_{14,13}, as suggested in the above discussion, then one obtains the perturbed adjacency matrix for a weighted graph,

  Ã = A + ε E_{13,14}, with E_{13,14} = −a_{13,14} (e_{13} e_{14}^T + e_{14} e_{13}^T).

If, instead, one reduces the weights of the edges e_{1,2} and e_{2,1} and constructs the perturbed adjacency matrix analogously, the spectral impact s_{13,14}^{(ρ(A))} can be seen to be significantly larger than s_{1,2}^{(ρ(A))}. This example shows that the reduction of the spectral radius of the adjacency matrix is much larger when the weight of an "important" edge is reduced than when the weight of a less important edge is reduced by the same amount. This illustrates the importance of well-connected people wearing high-quality face masks, which correspond to a small edge weight. We remark that ε > 0 is chosen fairly small in this example so that the estimate (2.2) is applicable.
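Example 2.1 can be reproduced numerically. The sketch below builds the matrix (2.5) with n = 25 and σ = 1, checks the Perron root against Proposition 2.1, and compares the effect of reducing the weight of the middle edge with that of reducing an end edge by the same amount ε.

```python
import numpy as np

n, sigma = 25, 1.0
A = sigma * (np.eye(n, k=1) + np.eye(n, k=-1))   # matrix (2.5)

rho = max(abs(np.linalg.eigvals(A)))
# Proposition 2.1: rho = 2*sigma*cos(pi/(n+1)) = 1.985418...
assert abs(rho - 2 * sigma * np.cos(np.pi / (n + 1))) < 1e-10

def rho_after_reduction(A, h, k, eps):
    """Spectral radius after reducing the weights of the undirected
    edge pair e_{hk}, e_{kh} (1-based node indices) by eps."""
    B = A.copy()
    B[h - 1, k - 1] -= eps
    B[k - 1, h - 1] -= eps
    return max(abs(np.linalg.eigvals(B)))

eps = 0.1
rho_mid = rho_after_reduction(A, 13, 14, eps)   # "important" middle edge
rho_end = rho_after_reduction(A, 1, 2, eps)     # peripheral edge
# The middle edge has a much larger spectral impact.
assert rho - rho_mid > rho - rho_end
```

Consistent with the estimate (2.2), the drop in ρ(A) from the middle edge is larger by roughly the factor u_13 u_14 / (u_1 u_2).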
In a description of an epidemic, every node of a realistic network through which an infectious disease might spread may correspond to an individual as well as to a cluster of individuals. For instance, we can assign individuals with similar age or location to the same group. Such a high-level description may be simpler to analyze than a model that accounts for each individual. Consider a network in which each node n_i corresponds to a place where a cluster of cohabiting people live. This model takes into account the lockdown protocol adopted by the Italian Government at Easter 2021, during one of the most delicate phases of the spread of COVID-19. On Sunday, April 4th, it was permitted to have Easter lunch only with a small number of friends or relatives. When meeting people who did not live together, it was recommended to wear a face mask, also indoors. In particular, two cohabiting adults, possibly accompanied by minor children, were allowed to make only one trip in order to reach the place (within the region) of two cohabiting friends or relatives. Moreover, it was permitted to host at one place up to two non-cohabiting people, plus minor children.
As an example, in a family of mother, father, and a 21-year-old child living at place n_i, the parents were permitted to visit two cohabiting relatives or friends at place n_j, and the child was permitted to receive two cohabiting relatives or friends at place n_i. Therefore, if a trip corresponds to an edge, then a tridiagonal matrix can be seen as an effective approximation of the adjacency matrix of the "2021 Easter lunch network". Such a simplified model allows one to analyze the results and verify that the approach is reasonable, at least in this simple situation. Moreover, the behavior of such a model may illustrate different scenarios for decision-makers in public health regarding the lockdown intensity. As an example, taking into account aerosol transmission of the COVID-19 virus in enclosed spaces, the length of time spent at a place might also be regulated in a hypothetical lockdown protocol to be adopted, e.g., for Christmas dinner on December 24th, 2021. This would result in weighting edges according to the duration of a visit. Alternatively, in order to mitigate the spread of the infectious disease, the Italian Government hypothetically might not permit a family unit or, better, a cluster of cohabiting people to divide. This would result in a "2021 Christmas dinner network" where an edge is weighted according to the number of people in a moving cluster and a node can have in-edges or out-edges.
We now turn to a more accurate model of the role of facial masks. Let node n_i represent a person who wears a mask, and assume that the fraction w_i^{(out)} of the viral load exhaled by this person passes through the mask, while the fraction w_i^{(in)} of an incoming viral load is inhaled through the mask. The weight of the edge from node n_i to node n_{i+1} then is determined by w_i^{(out)} and w_{i+1}^{(in)}, and the weight of the opposite edge by w_{i+1}^{(out)} and w_i^{(in)}, for i = 1, 2, ..., n − 1. This yields a tridiagonal adjacency matrix with zero diagonal, which we refer to as (2.6). This model assumes that all interactions are of the same duration and that the distance between adjacent people is the same; a rescaling of the w_j^{(in)} and w_j^{(out)} is required to model interactions of different durations and of people being at different distances from each other. In any case, the matrix (2.6) typically is nonsymmetric.
We obtain an adjacency matrix that is simpler to analyze by projecting the matrix (2.6) orthogonally onto the subspace T of tridiagonal Toeplitz matrices of order n. Let T be the orthogonal projection of the matrix (2.6) onto T. Then

  T = tridiag(t_{−1}, 0, t_1) ∈ R^{n×n},    (2.7)

where the superdiagonal entry t_1 is the average of the superdiagonal entries of the matrix (2.6), and the subdiagonal entry t_{−1} is the average of the subdiagonal entries of (2.6); see, e.g., [18]. When all w_j^{(in)} and w_j^{(out)} are positive, so are t_1 and t_{−1}, and it follows that the matrix (2.7) is irreducible.
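A minimal sketch of this projection (the function name is ours): averaging the sub- and superdiagonal entries of a matrix M gives the closest tridiagonal Toeplitz matrix with zero diagonal in the Frobenius norm.

```python
import numpy as np

def project_tridiag_toeplitz(M):
    """Orthogonal (Frobenius) projection of M onto the subspace of
    tridiagonal Toeplitz matrices with zero diagonal: average the
    super- and subdiagonal entries of M separately."""
    n = M.shape[0]
    t1  = np.diag(M, 1).mean()     # superdiagonal entry t_1
    tm1 = np.diag(M, -1).mean()    # subdiagonal entry t_{-1}
    return t1 * np.eye(n, k=1) + tm1 * np.eye(n, k=-1)

rng = np.random.default_rng(1)
M = rng.uniform(0.0, 1.0, (5, 5))
T = project_tridiag_toeplitz(M)

# Orthogonality check: the residual M - T is Frobenius-orthogonal to T.
assert abs(np.sum((M - T) * T)) < 1e-12
```

The orthogonality of the residual to the projection confirms that averaging the two diagonals indeed realizes the orthogonal projection onto this subspace.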
Assume for the moment that w_j^{(in)} = w_j^{(out)} > 0 for all j. Then the matrices (2.6) and (2.7) are symmetric. It follows from a result due to Bhatia [1] that if the relative distance between the symmetric matrices (2.6) and (2.7) is small in the Frobenius norm, then the relative difference of the spectra of (2.6) and (2.7) also is small. In detail, let the matrices M_1 ∈ R^{n×n} and M_2 ∈ R^{n×n} be symmetric, and consider the relative distance between these matrices in the Frobenius norm,

  d_{M_1,M_2} = ||M_1 − M_2||_F / ||M_1||_F.

Order the eigenvalues λ_j(M_1) of M_1 and λ_j(M_2) of M_2 according to

  λ_1(M_i) ≥ λ_2(M_i) ≥ ... ≥ λ_n(M_i),  i = 1, 2.

Then

  (Σ_{j=1}^n (λ_j(M_1) − λ_j(M_2))²)^{1/2} ≤ ||M_1 − M_2||_F.

However, as the following example shows, the spectral radius of M_1 may be much smaller than the spectral radius of M_2, also when d_{M_1,M_2} is small.

Example 2.2. Let A ∈ R^{100×100} be a symmetric tridiagonal irreducible matrix with uniformly distributed random entries in the interval [0, 1]. These entries were generated with the random number generator rand in MATLAB. Let T ∈ R^{100×100} denote the closest symmetric tridiagonal Toeplitz matrix to A. We obtain d_{A,T} = 0.49, with the eigenvalues of A and T ordered in non-increasing order. Figure 2.1 shows the eigenvalues of A and T as functions of their index. The extreme eigenvalues of A and T are seen not to be close. In particular, the spectral radius of T is quite a bit smaller than the spectral radius of A.
The following result is an analogue of Proposition 2.1 for nonsymmetric tridiagonal Toeplitz matrices.

Proposition 2.2. The Perron root ρ of the nonnegative tridiagonal Toeplitz matrix (2.7), with subdiagonal entry t_{−1} > 0 and superdiagonal entry t_1 > 0, is 2√(t_{−1} t_1) cos(π/(n+1)). The right and left Perron vectors u = [u_1, u_2, ..., u_n]^T and v = [v_1, v_2, ..., v_n]^T, suitably scaled, have the entries u_k = (t_{−1}/t_1)^{k/2} sin(kπ/(n+1)) and v_k = (t_1/t_{−1})^{k/2} sin(kπ/(n+1)), 1 ≤ k ≤ n.
Proof. Explicit formulas for eigenvalues and eigenvectors of tridiagonal Toeplitz matrices can be found in, e.g., [19].
Thus, the Perron root of the symmetrized matrix (T + T^T)/2 is determined by the arithmetic mean of t_{−1} and t_1, while the Perron root of the matrix (2.7) is determined by the geometric mean of these quantities; cf. Proposition 2.2.

Example 2.3. Let A ∈ R^{25×25} be the tridiagonal Toeplitz matrix (2.7) with t_{−1} = 1.5 and t_1 = 0.5. This matrix may model a situation where the probability of inhaling infected droplets is three times larger than the probability of exhaling them, e.g., people wearing surgical masks. Proposition 2.2 yields ρ(A) = 1.719422 and the unit-norm right and left Perron vectors u and v. It is easy to see that the edge e_{12,13} is a maximizer of max_{h,k} α_{hk}, where α_{hk} = a_{hk} v_h u_k; see (2.2)-(2.3). In order to reduce the weight of e_{12,13}, one constructs the perturbed matrix

  Ã = A + εE_{12,13} = A − ε a_{12,13} e_{12} e_{13}^T.
Setting ε = 0.1, we obtain ρ(Ã) = 1.713348. Thus, the spectral impact of the perturbation is s_{12,13}^{(ρ(A))} = (ρ(A) − ρ(Ã))/ρ(A) ≈ 3.5 · 10^{−3}. Assume that there is only one high-quality face mask available. This example shows which person should be wearing it to reduce the spectral radius the most. Notice that symmetrizing the matrix A would have given both the matrix and the results of Example 2.1.
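The computations of Example 2.3 can be verified with a few lines of NumPy; the value 1.713348 for the perturbed Perron root is the one reported in the example.

```python
import numpy as np

n, tm1, t1 = 25, 1.5, 0.5
A = t1 * np.eye(n, k=1) + tm1 * np.eye(n, k=-1)   # matrix (2.7)

# Proposition 2.2: the Perron root is determined by the geometric
# mean of t_{-1} and t_1.
rho_formula = 2.0 * np.sqrt(tm1 * t1) * np.cos(np.pi / (n + 1))
rho_eig = max(abs(np.linalg.eigvals(A)))
assert abs(rho_formula - rho_eig) < 1e-10         # rho(A) = 1.719422...

# Reduce the weight of edge e_{12,13}: A - eps*a_{12,13} e_12 e_13^T.
eps = 0.1
B = A.copy()
B[11, 12] -= eps * B[11, 12]                      # 1-based edge (12, 13)
rho_pert = max(abs(np.linalg.eigvals(B)))
assert abs(rho_pert - 1.713348) < 1e-4            # value from Example 2.3
```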
3. Estimating and reducing the spectral radius. This section discusses several ways to estimate the spectral radius, and the right and left Perron vectors, of a large adjacency matrix A ∈ R^{n×n}. If A is just required to be nonnegative, then there is a nonnegative vector x ∈ R^n such that Ax = ρ(A)x. However, this vector may not be unique up to scaling; see [9, Theorem 8.3.1 and p. 505]. In this section, we will assume that A is a nonnegative irreducible adjacency matrix. Then its right and left Perron vectors are unique up to scaling, and can be scaled to be of unit norm and to have only positive entries. These vectors are used to determine which edge-weights to reduce in order to obtain a new adjacency matrix with, hopefully, a significantly reduced spectral radius. If our aim is just to determine the spectral radius of A, then irreducibility is not required.
Moreover, we analyze the behavior of the Perron root when the adjacency matrix is perturbed by the worst ε-size perturbation for ρ(A). This approach of approximating the rightmost ε-pseudoeigenvalue of A takes into account the possibility of the entries of A being contaminated by error and gives an estimate of the sensitivity of the structural robustness to a worst-case perturbation.
We first describe a computational method that is well suited for large networks, whose associated adjacency matrix is nonnegative and irreducible, but does not have other structure that can be exploited. Subsequently, we will discuss methods that are able to use certain structural properties.
3.1. Approximation of the spectral radius of a nonnegative irreducible matrix. Let A ∈ R^{n×n} be a large nonnegative irreducible adjacency matrix. The approach of this section does not exploit any additional structure that A may possess. We determine approximations of the right and left Perron vectors of A by the two-sided Arnoldi method. This method was first described by Ruhe [22] and has more recently been studied and improved by Zwaan and Hochstenbach [28].
We carry out the following steps: • Apply the two-sided Arnoldi method to A to compute the Perron root ρ(A), and the unit right and left Perron vectors u and v, respectively, with positive entries.
• Let

  E = v u^T.    (3.1)

For each edge e_{hk} ∈ E in the graph that represents the network, the corresponding entry of E, i.e., v_h u_k, appears in the first-order approximation (2.3) of the spectral impact of the edge. The Perron root ρ(A + εE) of the matrix A + εE satisfies

  ρ(A + εE) = ρ(A) + ε κ(ρ(A)) + O(ε²)

for |ε| sufficiently small; see Wilkinson [26, Chapter 2]. We refer to the matrix (3.1) as a Wilkinson perturbation. This is the worst perturbation for ρ(A) in the following sense: for any nonnegative matrix E′ with ||E′|| = 1, one has

  |ρ(A + εE′) − ρ(A)| ≤ ε κ(ρ(A)) + O(ε²),

with equality for the matrix (3.1). We let ε > 0. Note that the spectrum of A + εE may be considered a very sparse approximation of the ε-pseudospectrum of A in the sense that the ε-pseudospectrum is made up of the spectra of all perturbations of A of norm at most ε; see Trefethen and Embree [25]. We only compute the spectrum for one perturbation, A + εE, of A. The size of ε used in the computations may depend on whether the adjacency matrix is contaminated by errors. For instance, the edge weights may not be known exactly; see Trefethen and Embree [25] for insightful discussions of pseudospectra.
• Typically, the first-order approximation ρ(A) + εκ(ρ(A)) is sufficiently accurate. On the rare occasions when this is not the case, we can compute an improved approximation of ρ(A + εE) by applying the (standard) Arnoldi method described, e.g., by Saad [23], or the implicitly restarted (standard) Arnoldi method described in [13] and implemented by the MATLAB function eigs. We note that the perturbed matrix A + εE is nonnegative and irreducible if this holds for A. Indeed, if all entries of the Perron vectors are positive, then so are all entries of A + εE for ε > 0. The Perron root ρ(A + εE) is a rightmost ε-pseudoeigenvalue of A. We note that ρ(A + εE) may be much larger than ρ(A) when the Perron root is ill-conditioned, i.e., when v^T u is small.
The analysis in Section 2 suggests that, in order to reduce ρ(A) by removing an edge of G, we should choose an edge e_{hk} with a large weight a_{hk} that corresponds to a large entry of the matrix vu^T in (3.1); see (2.2)-(2.3). Removing an edge corresponds to setting its edge-weight to zero. In the same manner, we can choose which edge-weight to reduce to a smaller positive value in order to reduce the spectral radius. When the adjacency matrix A is very large, we may consider replacing the vectors u and v in (3.1) by the vector e = (1/√n)[1, 1, ..., 1]^T and compute ρ(A) and ρ(A + ε e e^T) by the (standard) Arnoldi or restarted Arnoldi methods to determine the structural robustness of the graph with adjacency matrix A. This approach was applied in [20] to estimate pseudospectra of large matrices.
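The first-order growth of the Perron root under the Wilkinson perturbation (3.1) can be illustrated as follows. The sketch uses a dense random test matrix and dense eigensolvers in place of the two-sided Arnoldi method, which would be the method of choice for large sparse problems.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
A = rng.uniform(0.0, 1.0, (n, n))
np.fill_diagonal(A, 0.0)

def perron(M):
    """Perron root and unit right Perron vector of a nonnegative
    irreducible matrix M."""
    w, V = np.linalg.eig(M)
    i = np.argmax(w.real)
    x = np.abs(V[:, i].real)
    return w[i].real, x / np.linalg.norm(x)

rho, u = perron(A)
_, v = perron(A.T)                 # left Perron vector

E = np.outer(v, u)                 # Wilkinson perturbation (3.1), ||E||_F = 1
kappa = 1.0 / (v @ u)              # condition number of the Perron root

eps = 1e-4
rho_eps, _ = perron(A + eps * E)
# First-order behavior: rho(A + eps*E) ~ rho(A) + eps*kappa(rho(A)).
assert abs(rho_eps - rho - eps * kappa) < 1e-6
```

Since v^T u ≤ 1 for unit vectors, κ(ρ(A)) ≥ 1, with κ(ρ(A)) = 1 exactly when A is symmetric.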
The large perturbation in ρ(A) illustrated in Example 3.1 would not have occurred if the sparsity structure of the matrix T had been taken into account, i.e., if one only allowed perturbations of positive edge-weights. We therefore are interested in determining perturbations εE of A that take the sparsity structure of A into account.
3.2. Approximation of the spectral radius taking the sparsity structure into account. The method of this subsection is suitable when it is desirable that the perturbation εE of the adjacency matrix A have the same sparsity structure as A. Let S denote the cone of all nonnegative matrices in R^{n×n} with the same sparsity structure as A, and let M|_S be the matrix in S that is closest to a given nonnegative matrix M with respect to the Frobenius norm. It is straightforward to verify that the matrix M|_S is obtained by replacing all the entries of M outside the sparsity structure by zero. This approach takes possible uncertainty of the available edge-weights into account. The analysis in [20] leads to the following numerical method:
• Apply the two-sided Arnoldi method to A ∈ S to compute the Perron root ρ(A), as well as the unit right and left Perron vectors u and v, respectively, with positive entries.
• Project vu^T into S, normalize the projected matrix to have unit Frobenius norm, and define

  E = vu^T|_S / ||vu^T|_S||_F.    (3.2)

We refer to the matrix (3.2) as an S-structured analogue of the Wilkinson perturbation. This is the worst S-structured perturbation for ρ(A); one has, by [16, Proposition 2.3],

  |ρ(A + εE′) − ρ(A)| ≤ ε κ_S(ρ(A)) + O(ε²)  for any E′ ∈ S with ||E′||_F = 1.

Hence,

  ρ(A + εE) = ρ(A) + ε κ_S(ρ(A)) + O(ε²),

where

  κ_S(ρ(A)) = ||vu^T|_S||_F / (v^T u)

denotes the S-structured condition number of ρ(A); see [16,12]. We let ε > 0. Similarly as above, the spectrum of A + εE is a very sparse approximation of the S-structured ε-pseudospectrum of A in the sense that this pseudospectrum is evaluated by computing the spectra of many S-structured perturbations of A of norm at most ε; see, e.g., [20]. Here we only compute the spectrum for one perturbation of A.
• If desired, compute ρ(A + εE) by the (standard) Arnoldi or restarted Arnoldi methods. We note that the perturbed matrix A + εE is nonnegative and irreducible if this holds for A, and exhibits the same sparsity structure as A. The Perron root ρ(A + εE) helps us to estimate the structural robustness of the network. Indeed, it represents an approximate S-structured ε-pseudospectral radius of the adjacency matrix A ∈ S.
We note that ρ(A + εE) may be much larger than ρ(A) when the Perron root has a large S-structured condition number κ S (ρ(A)).
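A sketch of the S-structured perturbation (3.2) for a small hypothetical sparse matrix; the projection onto S simply zeroes the entries of vu^T outside the sparsity pattern of A.

```python
import numpy as np

def perron(M):
    """Perron root and unit right Perron vector of a nonnegative
    irreducible matrix M."""
    w, V = np.linalg.eig(M)
    i = np.argmax(w.real)
    x = np.abs(V[:, i].real)
    return w[i].real, x / np.linalg.norm(x)

def structured_wilkinson(A, u, v):
    """S-structured analogue (3.2) of the Wilkinson perturbation:
    project v u^T onto the sparsity pattern of A, then normalize."""
    P = np.outer(v, u) * (A != 0)       # zero entries outside the pattern
    return P / np.linalg.norm(P, 'fro')

# Hypothetical sparse nonnegative irreducible matrix: a cycle plus a chord.
n = 10
A = np.eye(n, k=1) + np.eye(n, k=-1)
A[0, n - 1] = A[n - 1, 0] = 1.0
A[0, 5] = 0.5

rho, u = perron(A)
_, v = perron(A.T)
E = structured_wilkinson(A, u, v)

assert np.all((E != 0) <= (A != 0))     # same sparsity structure as A
# The structured condition number never exceeds the unstructured one.
kappa_S = np.linalg.norm(np.outer(v, u) * (A != 0), 'fro') / (v @ u)
assert kappa_S <= 1.0 / (v @ u) + 1e-12
```

Since projecting vu^T onto S can only decrease its Frobenius norm, κ_S(ρ(A)) ≤ κ(ρ(A)), so the structured worst case is never worse than the unstructured one.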
As mentioned above, in case the network is very large, we may consider replacing the vectors u and v in (3.2) by the vector e. The analogous S-structured perturbation of the adjacency matrix A is given by

  E = ee^T|_S / ||ee^T|_S||_F.

We may apply the (standard) Arnoldi or implicitly restarted Arnoldi methods to estimate ρ(A) and ρ(A + εE). This approach has been applied in [20] to estimate structured pseudospectra of large matrices.
3.3. Approximation of the spectral radius for perturbations of tridiagonal Toeplitz matrices. Structure-respecting projections, analogous to the ones discussed in the previous subsection, also can be applied to impose other structures. This subsection illustrates how they can be used to impose tridiagonal Toeplitz structure. Let T be a nonnegative tridiagonal Toeplitz matrix (2.7). We denote by T the cone of all nonnegative tridiagonal Toeplitz matrices with zero diagonal in R^{n×n}, and by M|_T the matrix in T closest to a given nonnegative matrix M ∈ R^{n×n} with respect to the Frobenius norm. It is straightforward to verify that M|_T is obtained by replacing the sub- and superdiagonal entries of M by their respective arithmetic means, and all other entries by zero.
To approximate the spectral radius of T ∈ T, we carry out the following steps:
• Apply the formulas in Proposition 2.2 to T to compute the Perron root ρ(T) and the unit right and left Perron vectors u and v, respectively, with positive entries.
• Project vu^T into T, normalize the projected matrix to have unit Frobenius norm, and define the matrix

  E = vu^T|_T / ||vu^T|_T||_F.    (3.3)

We refer to the matrix (3.3) as a T-structured analogue of the Wilkinson perturbation. Similarly as above, we have, by [17, Theorem 3.3],

  ρ(T + εE) = ρ(T) + ε κ_T(ρ(T)) + O(ε²).

It follows that (3.3) is the worst T-structured perturbation for ρ(T), where

  κ_T(ρ(T)) = ||vu^T|_T||_F / (v^T u)

denotes the T-structured condition number of ρ(T); see [17,12]. We let ε > 0. The spectrum of T + εE is a sparse approximation of the T-structured ε-pseudospectrum of T in the sense that this pseudospectrum is evaluated by computing the spectra of many T-structured perturbations of T of norm at most ε; see, e.g., [20]. Here we only compute the spectrum for one perturbation of T.
• Determine ρ(T + εE) by applying Proposition 2.2 to T + εE. The latter matrix is nonnegative and irreducible if this holds for T, and exhibits the same structure as T.
The Perron root ρ(T + εE) may be regarded as an approximate T -structured ε-pseudospectral radius and provides an estimate of the structural robustness of the structured network. It may be much larger than ρ(T ). It is known that when considering the class T of tridiagonal Toeplitz matrices, the most ill-conditioned eigenvalues with regard to T -structured perturbations are the eigenvalues of largest magnitude; see, e.g., [19]. We remark that an algorithm for computing the T -structured pseudospectrum of a tridiagonal Toeplitz matrix and its rightmost pseudoeigenvalue is described in [4]. However, the computational cost of this algorithm can be quite large for the matrices considered in this paper.
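The T-structured steps can be sketched using the explicit formulas of Proposition 2.2; the parameter values below are illustrative, and eigenvalues of the perturbed matrix are computed with a dense eigensolver for checking.

```python
import numpy as np

def toeplitz_perron(n, tm1, t1):
    """Perron root and unit right/left Perron vectors of the tridiagonal
    Toeplitz matrix (2.7), via the formulas of Proposition 2.2."""
    k = np.arange(1, n + 1)
    s = np.sin(k * np.pi / (n + 1))
    rho = 2.0 * np.sqrt(tm1 * t1) * np.cos(np.pi / (n + 1))
    u = (tm1 / t1) ** (k / 2) * s      # right Perron vector
    v = (t1 / tm1) ** (k / 2) * s      # left Perron vector
    return rho, u / np.linalg.norm(u), v / np.linalg.norm(v)

n, tm1, t1 = 50, 1.2, 0.8
rho, u, v = toeplitz_perron(n, tm1, t1)

# T-structured Wilkinson analogue (3.3): project v u^T onto the cone of
# tridiagonal Toeplitz matrices with zero diagonal, then normalize.
P = np.outer(v, u)
e1  = np.eye(n, k=1) / np.sqrt(n - 1)   # orthonormal basis of the subspace
em1 = np.eye(n, k=-1) / np.sqrt(n - 1)
E = np.sum(P * e1) * e1 + np.sum(P * em1) * em1
E /= np.linalg.norm(E, 'fro')

T = t1 * np.eye(n, k=1) + tm1 * np.eye(n, k=-1)
assert abs(max(abs(np.linalg.eigvals(T))) - rho) < 1e-8

eps = 1e-3
rho_eps = max(abs(np.linalg.eigvals(T + eps * E)))
assert rho_eps >= rho - 1e-12           # the perturbation increases rho(T)
```

Note that T + εE is again a tridiagonal Toeplitz matrix, so its Perron root could equally be obtained from the formulas of Proposition 2.2 rather than from an eigensolver.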
Finally, replacing u and v in (3.3) by the vector e as described above is particularly efficient when the considered subspace is T ; see Section 4.2.2.
4. Numerical tests. This section illustrates the performance of the methods discussed when applied to large networks. All computations were carried out in MATLAB R2021a on a MacBook Pro with a 2GHz Intel Core i5 quad-core CPU and 16GB of RAM.

4.1. Complex networks.
4.1.1. Air500. Consider the adjacency matrix A ∈ R^{500×500} for the network Air500, which describes 24009 flight connections between the top 500 airports within the United States based on total passenger volume during one year, from July 1, 2007, to June 30, 2008; see [2]. Thus, the airports are nodes and the flights are edges in the graph determined by the network. The matrix A has the entry a_{ij} = 1 if there is a flight that leaves from airport i to airport j. Generally, but not always, a_{ij} = 1 implies that a_{ji} = 1. This makes A close to symmetric.
We apply the computational steps described in Section 3.1. The Perron root ρ(A) is 82.610276, with eigenvalue condition number κ(ρ(A)) = 1.001668. Let ε = 0.5. The Perron root ρ(A + εE), where E is the matrix in (3.1), is 83.111096. Thus, the spectral radius increases by 0.500820, as we could have foreseen since εκ(ρ(A)) = 0.500834. The value ρ(A + εE) is an accurate approximation of the ε-pseudospectral radius. This is seen by determining the ε-pseudospectral radius with the MATLAB program package Eigtool [27]. Our approximation of the ε-pseudospectral radius agrees with the value determined by Eigtool in all decimal digits returned by Eigtool. Pseudospectra of A are visualized in Figure 4.1.
Assume that we are interested in removing a single route so that the structural robustness of the network is increased the most. Then this route should be an edge that maximizes a_{hk} v_h u_k over h and k; see (2.2)-(2.3). For the present network, we find that the edge e_{224,24} ∈ E should be removed. The adjacency matrix Ã so obtained is irreducible with ρ(Ã) = 82.590199. The edge e_{224,24} corresponds to flights from the JFK airport in New York to the Hartsfield-Jackson airport in Atlanta.
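The selection rule above can be sketched as follows, again in Python on a dense random weighted graph standing in for the network. The sketch assumes that v and u in (2.2)-(2.3) denote the unit-norm left and right Perron vectors, respectively, so that the first-order decrease of ρ(A) caused by removing edge (h, k) is a_{hk} v_h u_k / (v^T u).

```python
import numpy as np

def rho_of(M):
    return max(np.linalg.eigvals(M).real)

rng = np.random.default_rng(1)
A = rng.random((40, 40))                         # dense weighted stand-in graph
np.fill_diagonal(A, 0.0)                         # no self-loops

w, V = np.linalg.eig(A)
u = np.abs(np.real(V[:, np.argmax(w.real)]))     # right Perron vector
w2, U = np.linalg.eig(A.T)
v = np.abs(np.real(U[:, np.argmax(w2.real)]))    # left Perron vector
u /= np.linalg.norm(u)
v /= np.linalg.norm(v)

score = A * np.outer(v, u)                       # a_hk * v_h * u_k for every edge (h, k)
h, k = np.unravel_index(np.argmax(score), score.shape)

B = A.copy()
B[h, k] = 0.0                                    # remove the selected edge
drop = rho_of(A) - rho_of(B)
predicted = score[h, k] / (v @ u)                # first-order perturbation estimate
print(drop, predicted)
```

Because the Perron root of a nonnegative irreducible matrix decreases strictly when a positive entry is reduced, the actual drop is positive and, for a well-separated Perron root, close to the first-order prediction.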
Finally, we observe that if one replaces E in (3.1) by the matrix of all ones, normalized to have unit Frobenius norm, the increase of the spectral radius ρ(A) is 0.255450. Thus, this perturbation gives a significantly less accurate estimate of the sensitivity of ρ(A) to worst-case perturbations.

4.1.2.
The Perron root ρ(A + εE), where E is the matrix in (3.1), is 27.047941. Thus, the spectral radius increases by 0.502511, as we could have expected since εκ(ρ(A)) = 0.502609. The spectral radius ρ(A + εE) approximates the ε-pseudospectral radius and agrees to six significant decimal digits with the pseudospectral radius determined by Eigtool. A few pseudospectra of A are shown in Figure 4.2.
The route to remove, in order to increase the structural robustness of the network the most, is represented by the edge e_{51,137} ∈ E. The adjacency matrix Ã obtained when setting the entry a_{51,137} of A to zero is irreducible with ρ(Ã) = 26.452922.
Finally, we observe that if one replaces E in (3.1) by the matrix of all ones, normalized to have unit Frobenius norm, ρ(A) increases by 0.223135.

4.1.3. Enron.
The Enron e-mail exchange network covers the e-mail communication at the Enron company. The data set consists of more than 3·10^5 e-mails. The e-mail addresses are the nodes n_i of the network; there are 36692 of them. A directed edge from node n_i to node n_j indicates that at least one e-mail message was sent from n_i to n_j; there are 367662 edges. Let A ∈ R^{36692×36692} be the adjacency matrix for this graph. It is close to symmetric. This network is available at [24].
Computations described in Section 3.1 yield ρ(A) = 118.417715 and the condition number κ(ρ(A)) = 1.000000. Let ε = 0.5. The Perron root is ρ(A + εE) = 118.917715, where E is the matrix (3.1). As expected, the spectral radius increases by 0.500000. No comparison with Eigtool could be made to assess the accuracy of this approximation of the ε-pseudospectral radius, since Eigtool is not able to determine ε-pseudospectra of such a large matrix. The spy plot of A is shown in Figure 4.3. In this plot, positive matrix entries are marked by a dot.
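Although Eigtool is not applicable at this scale, the Perron root itself remains inexpensive: for a large sparse nonnegative matrix it can be computed by power iteration. A minimal sketch, on a random directed 0/1 graph standing in for the Enron matrix (the function name and graph are ours, not from the paper):

```python
import numpy as np

def power_iteration(A, iters=1000, tol=1e-13):
    """Estimate the Perron root of a nonnegative matrix by power iteration."""
    x = np.ones(A.shape[0])
    rho = 0.0
    for _ in range(iters):
        z = A @ x
        rho_new = np.linalg.norm(z)      # converges to rho(A) as x aligns with
        x = z / rho_new                  # the (nonnegative) Perron vector
        if abs(rho_new - rho) < tol * max(rho_new, 1.0):
            break
        rho = rho_new
    return rho_new, x

rng = np.random.default_rng(2)
A = (rng.random((300, 300)) < 0.05).astype(float)   # random directed 0/1 graph
np.fill_diagonal(A, 0.0)
rho_pi, _ = power_iteration(A)
rho_eig = max(np.linalg.eigvals(A).real)            # dense check, feasible at this size
print(rho_pi, rho_eig)
```

For a nonnegative matrix the spectral radius is itself an eigenvalue (Perron-Frobenius), so the power iteration started from a positive vector converges to it whenever it strictly dominates the rest of the spectrum in modulus.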
The e-mail channel to remove is represented by the edge e_{137,196} ∈ E. For the adjacency matrix Ã, obtained when setting the entry a_{137,196} of A to zero, one has ρ(Ã) = 118.398705. Indeed, it is immediate to see from the spy plot in Figure 4.3 that the edge to be removed belongs to a "dense block" of A. Removing this edge yields a relative decrease of order 10^{-4} in the spectral radius of the adjacency matrix of the Enron network.

4.2. Synthetic networks.
This subsection considers projections of the adjacency matrix for the Air500 network.

4.2.1. The tridiagonal part of Air500. We set all entries of the adjacency matrix for the Air500 network outside its tridiagonal part to zero. The number of flight connections is now 144. This yields a nonsymmetric tridiagonal matrix A ∈ R^{500×500}. We carry out the computations described in Section 3.2, with S the subspace of all tridiagonal matrices in R^{500×500} with zero diagonal. This yields the Perron root ρ(A) = 1.801938 with S-structured condition number κ_S(ρ(A)) = 0.613714.
Computations similar to those of Subsection 4.1 suggest that, in order to increase the structural robustness the most by removing one edge, we should choose the edge e_{494,493} or the edge e_{493,494} in E. However, removal of one or both of these edges would result in a graph with a reducible adjacency matrix. To preserve irreducibility of the adjacency matrix, one may instead schedule fewer flights on the routes that correspond to the edges e_{494,493} and e_{493,494}. This reduces the weight associated with these edges.
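The trade-off above, removing an edge may destroy irreducibility while merely reducing its weight does not, can be illustrated on a toy digraph. A nonnegative adjacency matrix is irreducible exactly when its digraph is strongly connected, which can be checked by a reachability search on the graph and on its reverse. The 4-node example and all weights below are made up for illustration.

```python
import numpy as np

def strongly_connected(A):
    """A nonnegative adjacency matrix is irreducible iff its digraph is strongly connected."""
    n = A.shape[0]
    def all_reachable(M):
        seen = {0}
        stack = [0]
        while stack:
            i = stack.pop()
            for j in np.nonzero(M[i])[0]:
                if int(j) not in seen:
                    seen.add(int(j))
                    stack.append(int(j))
        return len(seen) == n
    return all_reachable(A) and all_reachable(A.T)   # forward and reverse reachability

def rho_of(M):
    return max(np.linalg.eigvals(M).real)

# Directed 4-cycle 1->2->3->4->1 plus the chord 1->3.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], float)

B = A.copy(); B[2, 3] = 0.0     # removing edge e_{3,4} breaks strong connectivity
C = A.copy(); C[2, 3] = 0.1     # reducing its weight keeps the matrix irreducible
print(strongly_connected(A), strongly_connected(B), strongly_connected(C))
```

Reducing the weight still lowers the Perron root, since the Perron root of an irreducible nonnegative matrix decreases strictly when any positive entry is decreased.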
Finally, we observe that, if one replaces the matrix vu^T in (3.2) by the matrix of all ones, normalized to be of unit Frobenius norm, then the spectral radius increases by 0.052606. Clearly, this is not an accurate estimate of the actual worst-case sensitivity of ρ(A) to perturbations.

4.2.2. Projection of Air500 into a tridiagonal Toeplitz structure. We construct a tridiagonal Toeplitz matrix T ∈ R^{500×500} with zero diagonal by averaging the subdiagonal entries as well as the superdiagonal entries of the matrix in Section 4.2.1. Then we carry out the computations described in Section 3.3 and make use of Proposition 2.2. We obtain ρ(T) = 0.288460 and the T-structured condition number κ_T(ρ(T)) = 0.063357.
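The averaging step, and the closed-form spectrum it makes available, can be sketched as follows. We rely on the classical fact that a tridiagonal Toeplitz matrix with zero diagonal, subdiagonal a > 0, and superdiagonal b > 0 has eigenvalues 2√(ab)·cos(kπ/(n+1)), so ρ(T) = 2√(ab)·cos(π/(n+1)); whether this is exactly the content of Proposition 2.2 is our assumption. The matrix below is a random stand-in, not the actual Air500 projection.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
# Stand-in for the tridiagonal matrix of Section 4.2.1 (random 0/1 diagonals).
sub = (rng.random(n - 1) < 0.15).astype(float)
sup = (rng.random(n - 1) < 0.15).astype(float)
A = np.diag(sub, -1) + np.diag(sup, 1)

# Project onto the tridiagonal Toeplitz matrices with zero diagonal:
# average the subdiagonal and the superdiagonal separately.
a = A.diagonal(-1).mean()
b = A.diagonal(1).mean()
T = np.diag(np.full(n - 1, a), -1) + np.diag(np.full(n - 1, b), 1)

# T is similar, via a diagonal scaling, to the symmetric tridiagonal Toeplitz
# matrix with off-diagonals sqrt(a*b); hence rho(T) = 2*sqrt(a*b)*cos(pi/(n+1)).
rho_formula = 2.0 * np.sqrt(a * b) * np.cos(np.pi / (n + 1))
S = np.sqrt(a * b) * (np.diag(np.ones(n - 1), -1) + np.diag(np.ones(n - 1), 1))
rho_check = max(np.linalg.eigvalsh(S))
print(rho_formula, rho_check)
```

The check deliberately uses the symmetric similar matrix: eigenvalues of nonsymmetric tridiagonal Toeplitz matrices can be extremely ill conditioned, which is precisely why their structured pseudospectra are of interest in this paper.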
Finally, we observe that, if one replaces the matrix vu^T in (3.2) by the matrix of all ones, scaled to be of unit Frobenius norm, then ρ(T) increases by 0.056995. Thus, in this setting the latter perturbation provides a very accurate estimate of the increase of the spectral radius obtained when the matrix vu^T in (3.3) is used.

5. Conclusion. It is important to be able to estimate the structural robustness of a network, and to determine which edges to remove or which weights to decrease in order to increase the structural robustness. This paper describes several iterative methods that can be applied to fairly large networks to gain insight into these issues. The sensitivity of the structural robustness both to worst-case Wilkinson perturbations and to structured perturbations is discussed and illustrated.