We consider a connected graph *G* with *n* vertices, *p* of which are *centers*; the remaining ones are *units*. For each unit-center pair there is a fixed assignment cost, and each vertex has a nonnegative weight. In this article, we study the problem of partitioning *G* into *p* connected components such that each component contains exactly one center (a *p-centered partition*). We analyze different optimization problems of this type, defining objective functions based on the assignment costs, on the vertices' weights, or on both. We show that these problems are NP-hard even on very special classes of graphs, and for some of them we provide polynomial-time algorithms when *G* is a tree. © 2015 Wiley Periodicals, Inc. NETWORKS, 2015

In the broadcast version of the congested clique model, nodes communicate in synchronous rounds by writing *b*-bit messages on a whiteboard, which is visible to all of them. The joint input to the nodes is an undirected *n*-node graph *G*, with each node receiving the list of its neighbors in *G*. Our goal is to design a protocol at the end of which the information contained on the whiteboard is enough for reconstructing *G*. It has already been shown that there is a one-round protocol for reconstructing graphs of bounded degeneracy *d*. The main drawback of that protocol is that the degeneracy *d* of the input graph must be known *a priori* by the nodes. Moreover, the protocol fails when applied to graphs with degeneracy larger than *d*. In this article, we address this issue by looking for *robust* reconstruction protocols, that is, protocols which always give the correct answer and work efficiently when the input is restricted to a certain class. We introduce a very simple, two-round protocol that we call Robust-Reconstruction. We prove that this protocol is robust for reconstructing the class of Barabási-Albert trees with (expected) message size O(log *n*). Moreover, we present computational evidence suggesting that Robust-Reconstruction also generates logarithmic-size messages for arbitrary Barabási-Albert networks. Finally, we stress the importance of the preferential attachment mechanism (used in the construction of Barabási-Albert networks) by proving that Robust-Reconstruction *does not* generate short messages for random recursive trees. © 2015 Wiley Periodicals, Inc. NETWORKS, 2015
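The two growth models contrasted above differ only in how a new node picks its parent. A minimal sketch of both (our own illustrative code, not the article's protocol; function names are ours):

```python
import random

def barabasi_albert_tree(n, seed=None):
    """Grow a tree by preferential attachment: each new node attaches to an
    existing node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    parent = {1: 0}          # node 1 attaches to node 0
    degree_list = [0, 1]     # multiset of edge endpoints; sampling uniformly
                             # from it is sampling proportionally to degree
    for v in range(2, n):
        u = rng.choice(degree_list)
        parent[v] = u
        degree_list += [u, v]
    return parent

def random_recursive_tree(n, seed=None):
    """Grow a tree by uniform attachment: each new node attaches to an
    existing node chosen uniformly at random."""
    rng = random.Random(seed)
    return {v: rng.randrange(v) for v in range(1, n)}
```

The only difference is the sampling distribution over existing nodes, which is exactly the mechanism the article's negative result for random recursive trees isolates.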

We study a new multiobjective job scheduling problem on nonidentical machines with applications in the car industry, inspired by the problem proposed by the car manufacturer Renault in the ROADEF 2005 Challenge. Makespan, smoothing costs and setup costs are minimized following a lexicographic order, where smoothing costs are used to balance resource utilization. We first describe a mixed integer linear programming (MILP) formulation and a network interpretation as a variant of the well-known vehicle routing problem. We then propose and compare several solution methods, ranging from greedy procedures to a tabu search and an adaptive memory algorithm. For small instances (with up to 40 jobs) whose MILP formulation can be solved to optimality, tabu search provides remarkably good solutions. The adaptive memory algorithm, using tabu search as an intensification procedure, turns out to yield the best results for large instances. © 2015 Wiley Periodicals, Inc. NETWORKS, 2015
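The lexicographic order over the three criteria maps directly onto tuple comparison; a minimal sketch, with a hypothetical schedule encoding of our own:

```python
def objectives(schedule):
    """Return (makespan, smoothing_cost, setup_cost) for a schedule given as
    a list of (machine, start, duration, smoothing, setup) tuples.
    The encoding is illustrative only."""
    makespan = max(start + dur for _, start, dur, _, _ in schedule)
    smoothing = sum(s for *_, s, _ in schedule)
    setup = sum(c for *_, c in schedule)
    return (makespan, smoothing, setup)

# Python's tuple comparison is exactly lexicographic, so
# "A is better than B" is simply: objectives(A) < objectives(B).
```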

Based on solid theoretical foundations, we present strong evidence that a number of real-world networks, taken from different domains (such as Internet measurements, biological data, web graphs, and social and collaboration networks) exhibit tree-like structures from a metric point of view. We investigate a few graph parameters, namely, the tree-distortion and the tree-stretch, the tree-length and the tree-breadth, Gromov's hyperbolicity, the cluster-diameter and the cluster-radius in a layering partition of a graph; such parameters capture and quantify this phenomenon of being metrically close to a tree. By bringing all those parameters together, we provide efficient means for detecting such metric tree-like structures in large-scale networks. We also show how such structures can be used. For example, they are helpful in efficient and compact encoding of approximate distance and almost shortest path information and in quick and accurate estimation of diameters and radii of those networks. Estimating the diameter and estimating the radius of a graph (or distances between arbitrary vertices) are fundamental primitives in many network and graph mining algorithms. © 2015 Wiley Periodicals, Inc. NETWORKS, 2015

This article deals with the problem of train rescheduling on a railway network. Starting from a defined network topology and an initial timetable, the article considers dynamic train rescheduling in response to disturbances that have occurred. The train rescheduling problem is mapped to a special case of the job shop scheduling problem and solved by applying a constraint programming approach. To improve the time performance of the available constraint programming tool and to satisfy a selected objective function, a combination of three classes of heuristics is proposed: bound heuristics, separation heuristics, and search heuristics. Experimental evaluation of the implemented software in the Belgrade railway dispatching area indicates that the proposed approach is capable of supporting real-life operational railway control. In our solution, the dispatcher can choose the most suitable optimization criterion from a set of seven available ones. © 2015 Wiley Periodicals, Inc. NETWORKS, 2015

Since the late 1970s, much research activity has taken place on the class of dynamic vehicle routing problems (DVRP), with the period after the year 2000 witnessing a real explosion in related papers. Our paper sheds more light on work in this area over more than three decades by developing a taxonomy of DVRP papers according to 11 criteria. These are (1) type of problem, (2) logistical context, (3) transportation mode, (4) objective function, (5) fleet size, (6) time constraints, (7) vehicle capacity constraints, (8) the ability to reject customers, (9) the nature of the dynamic element, (10) the nature of the stochasticity (if any), and (11) the solution method. We comment on technological vis-à-vis methodological advances for this class of problems and suggest directions for further research. The latter include alternative objective functions, vehicle speed as a decision variable, more explicit linkages of methodology to technological advances, and analysis of worst-case or average-case performance of heuristics. © 2015 Wiley Periodicals, Inc. NETWORKS, 2015

In this article, a heuristic is said to be *provably best* if, assuming P ≠ NP, no other heuristic always finds a better solution (when one exists). This extends the usual notion of “best possible” approximation algorithms to include a larger class of heuristics. We illustrate the idea on several problems that are somewhat stylized versions of real-life network optimization problems, including the maximum clique, maximum *k*-club, minimum (connected) dominating set, and minimum vertex coloring problems. The corresponding provably best construction heuristics resemble those commonly used within popular metaheuristics. Along the way, we show that it is hard to recognize whether the clique number and the *k*-club number of a graph are equal, yet a polynomial-time computable function is “sandwiched” between them. This is similar to the celebrated Lovász ϑ function, wherein an efficiently computable function lies between two graph invariants that are NP-hard to compute. © 2015 Wiley Periodicals, Inc. NETWORKS, 2015

Vasko et al., Comput Oper Res 29 (2002), 441–458, defined the cable-trench problem (CTP) as a combination of the Shortest Path and Minimum Spanning Tree Problems. Specifically, let *G* = (*V*, *E*) be a connected weighted graph with a specified vertex *r* (referred to as the *root*), a length *l*(*e*) for each *e* ∈ *E*, and positive parameters *γ* and *τ*. The CTP is the problem of finding a spanning tree *T* of *G* such that *γl*(*T*) + *τp*(*T*) is minimized, where *l*(*T*) is the total length of the spanning tree and *p*(*T*) is the total path length in *T* from *r* to all other vertices of *G*. Recently, Jiang et al., Proceedings of MICCAI 6893 (2011), 528–536, modeled the vascular network connectivity problem in medical image analysis as an extraordinarily large-scale application of the generalized cable-trench problem (GCTP). They proposed an efficient solution based on a modification of Prim's algorithm (MOD_PRIM), but did not elaborate on it. In this article, we formally define the GCTP, describe MOD_PRIM in detail, and describe two linearly parallelizable metaheuristics which significantly improve the performance of MOD_PRIM. These metaheuristics are capable of finding near-optimal solutions of very large GCTPs in time quadratic in |*V*|. We also give empirical results for graphs with up to 25,001 vertices.
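With γ weighting total trench (tree) length and τ weighting total cable (root-path) length, the CTP objective for a candidate spanning tree can be evaluated directly; a small sketch (our own illustration of the objective, not MOD_PRIM; symbol choices are ours):

```python
def cable_trench_cost(n, tree_edges, lengths, root, gamma, tau):
    """gamma * (total tree length) + tau * (sum over vertices of the
    root-to-vertex path length inside the tree)."""
    # adjacency of the spanning tree
    adj = {v: [] for v in range(n)}
    for (u, v) in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    # BFS from the root, accumulating path lengths along tree edges
    dist = {root: 0.0}
    queue = [root]
    for x in queue:
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + lengths[frozenset((x, y))]
                queue.append(y)
    trench = sum(lengths[frozenset(e)] for e in tree_edges)
    cable = sum(dist.values())
    return gamma * trench + tau * cable
```

On a triangle with lengths 1, 1, 2, the path tree and the star rooted at 0 trade trench length against cable length, which is exactly the tension the γ/τ parameters control.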

The elastic optical network (EON) is a novel optical technology introduced recently to provide flexible and multibitrate data transmission in the optical layer. Since many new network services, including cloud computing and content delivery networks, are provisioned with the use of specialized data centers located in different network nodes, in place of *one-to-one* unicast transmission the anycast transmission, defined as *one-to-one-of-many*, is gaining popularity as a quite simple way to improve network performance. Therefore, this article focuses on modeling and static optimization of anycast flows in EONs. In particular, an NP-hard Routing and Spectrum Allocation for Restoration of Anycast Flows (RSA/RAF) problem is formulated. Next, various optimization approaches are proposed to solve this problem, namely, integer linear programming (ILP) using a branch and bound algorithm, constraint programming (CP), and various heuristic approaches, including a simulated annealing (SA) algorithm. Extensive numerical experiments are run to evaluate and compare all proposed methods. The main conclusion is that in some cases the CP approach is more efficient than the ILP modeling. Moreover, the results show that the SA algorithm significantly outperforms the other heuristic methods. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(4), 253–266 2015

The present study deals with Elastic Flow Rerouting (EFR), an original traffic restoration strategy for protecting traffic flows in communication networks (including wireless networks) against multiple link failures. EFR aims at alleviating the trade-off, observed in existing networking solutions, between the practicability of traffic restoration and the cost of network resources. We present an extension of EFR capable of managing multiple partial link failures. We describe EFR and its extension, formulate the EFR-related optimization problems, and discuss approaches for their resolution. We also discuss numerical results illustrating the effectiveness of EFR in terms of link capacity cost. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(4), 267–281 2015

The exact calculation of all-terminal reliability is not feasible in large networks. Hence, estimation techniques and lower and upper bounds for all-terminal reliability have been utilized. Here, we propose using an ordered subset of the mincuts and an ordered subset of the minpaths to calculate an upper and a lower bound on all-terminal reliability, respectively. The advantage of the proposed approach is that it does not require the enumeration of all mincuts or all minpaths, as required by other bounds. The proposed algorithm uses maximally disjoint minpaths, prior to their sequential generation, and also uses a binary decision diagram for the calculation of their union probability. The numerical results show that the proposed approach is computationally feasible, reasonably accurate, and much faster than the previous version of the algorithm. This allows one to obtain tight bounds when it is not possible to enumerate all mincuts or all minpaths, as revealed by extensive tests on real-world networks. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(4), 282–295 2015
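When the selected mincuts (and, separately, the minpaths) are pairwise edge-disjoint, the classical product bounds take a particularly simple form; the sketch below is our simplification, omitting the binary decision diagram machinery the article uses for non-disjoint unions:

```python
from math import prod

def reliability_bounds(p, disjoint_mincuts, disjoint_minpaths):
    """Product bounds on all-terminal reliability from edge-disjoint
    mincuts and edge-disjoint minpaths, with p[e] the probability that
    edge e works.
    Upper: the network works only if no mincut fails entirely.
    Lower: the network works if at least one minpath fully survives."""
    upper = prod(1 - prod(1 - p[e] for e in cut)
                 for cut in disjoint_mincuts)
    lower = 1 - prod(1 - prod(p[e] for e in path)
                     for path in disjoint_minpaths)
    return lower, upper
```

For two nodes joined by two parallel edges of reliability 0.9, the single mincut {e1, e2} and the two disjoint minpaths {e1}, {e2} both yield 0.99, which is the exact reliability, so the bounds meet.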

Let *G* = (*V*, *E*) be a simple graph with *n* vertices and *m* edges, *K* ⊆ *V* a subset of terminals, *p* a vector of edge reliabilities, and *d* a positive integer, called the diameter. We assume vertices are perfect but edges fail stochastically and independently, with probabilities given by *p*. The diameter-constrained reliability (DCR) is the probability that the terminals of the resulting subgraph remain connected by paths composed of *d* or fewer edges; this number is denoted by *R*(*G*, *K*, *d*). The general DCR computation problem belongs to the class of NP-hard problems. The contributions of this article are threefold. First, the computational complexity of DCR subproblems is discussed in terms of the number of terminal vertices *k* = |*K*| and the diameter *d*. Either when *d* = 1, or when *d* = 2 and *k* is fixed, the DCR problem belongs to the class of polynomial-time solvable problems. The DCR problem becomes NP-hard when *k* ≥ 2 is a fixed input parameter and *d* ≥ 3. The cases where *d* = 2 and *k* is a free input parameter, and where *k* is a free input parameter and *d* ≥ 3 is fixed, have not been studied in the prior literature. Here, the NP-hardness of both cases is established. Second, we categorize certain classes of graphs that allow the DCR computation to be performed in polynomial time. We include graphs with bounded corank, graphs with bounded genus, planar graphs, and in particular, Monma graphs, which are relevant in robust network design. Third, we introduce the problem of analyzing the asymptotic properties of the DCR measure in networks that grow infinitely following given probabilistic rules. We introduce basic results for Gilbert's random graph model. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(4), 296–305 2015

In data-communication networks, network reliability is of great concern to both network operators and customers. On the one hand, customers care about receiving reliable services; on the other hand, it is vital for network operators to determine the most vulnerable parts of their network. In this article, we first study the problem of establishing a connection over at most *k* (partially) link-disjoint paths for which the total availability is no less than a given threshold *δ*. We analyze the complexity of this problem in generic networks, shared-risk link group networks, and multilayer networks. We subsequently propose a polynomial-time heuristic algorithm and an exact integer nonlinear program for availability-based path selection. The proposed algorithms are evaluated in terms of acceptance ratio and running time. Subsequently, in the three aforementioned types of networks, we study the problem of finding a (set of) network cut(s) for which the failure probability of its links is largest. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(4), 306–319 2015
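For the special case of fully link-disjoint paths with independent link failures, the total availability has a closed form; this sketch (ours, for clarity) shows the quantity being thresholded, while the article's partially disjoint case is harder because paths share links:

```python
from math import prod

def path_availability(link_availabilities):
    """Availability of a single path: the product of its link
    availabilities (links fail independently)."""
    return prod(link_availabilities)

def total_availability(paths):
    """Availability of a connection over fully link-disjoint paths:
    the connection is up as long as at least one path is up, so
    A = 1 - prod(1 - A_i)."""
    return 1 - prod(1 - path_availability(p) for p in paths)
```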

Geometric routing is an alternative to traditional routing algorithms in which traffic is no longer forwarded using lookup tables, but using coordinates in an embedding of the underlying network. A major downside of current geometric routing algorithms is their inability to handle network failures in a graceful manner. Moreover, they cannot deal with dynamic graph topologies. This article presents a geometric routing scheme that uses an embedding based on a spanning forest. Allowing nodes to select the optimal spanning tree leads to both shorter paths and natural traffic redirection in case of network failures. By constructing the forest in such a way that its disconnected components have low redundancy, their coverage is maximized. Results show that this system is able to operate gracefully in severe failure scenarios, without any form of path protection or restoration. By means of an embedding regeneration procedure, the routing scheme is able to continuously adapt to a changing network topology. This geometric routing algorithm effectively combines two key objectives, namely low path stretch and high robustness. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(4), 320–334 2015

With the increasing frequency of natural disasters and intentional attacks that challenge communication networks, vulnerability to cascading and regionally correlated challenges is escalating. Given the high complexity and large traffic load of communication networks, these correlated challenges cause substantial damage to reliable network communication. In this work, we extend the GeoDivRP routing protocol to consider the delay-skew requirement when using multiple geographically diverse paths for telecommunication networks under area-based challenges. We present a flow-diverse minimum-cost routing multicommodity flow problem. Furthermore, we present a nonlinear delay-skew optimization problem to balance delay and traffic skew on the paths, and we investigate the tradeoff between delay and skew in choosing multiple geodiverse paths. We implement GeoDivRP in *ns-3* to use the optimized paths given by the two optimization solutions and demonstrate their effectiveness compared to open shortest path first Equal-Cost Multi-Path routing in terms of overall link utilization. The protocol guarantees the delay-skew constraint provided by the upper layer while satisfying the traffic demand imposed by multiple routing commodities in the telecommunication network. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(4), 335–346 2015
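One plausible reading of the delay-skew requirement (our interpretation for illustration, not the article's exact formulation) bounds both the worst path delay and the spread between the fastest and slowest paths in a geodiverse path set:

```python
def feasible_path_sets(candidate_sets, max_delay, max_skew):
    """Keep only path sets meeting a delay-skew requirement: each set is
    a list of per-path delays; feasible if its worst delay is at most
    max_delay and its skew (max - min delay) is at most max_skew."""
    keep = []
    for delays in candidate_sets:
        delay, skew = max(delays), max(delays) - min(delays)
        if delay <= max_delay and skew <= max_skew:
            keep.append(delays)
    return keep
```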

This article explores a recently introduced technique, the nested monitoring trail (m-trail) method, for localizing the failure of any shared risk link group (SRLG) of up to *d* undirected links in all-optical mesh networks. The nested m-trail method decomposes each network topology that is at least (*d* + 1)-connected into virtual cycles and trails, from which sets of m-trails that traverse a common monitoring node (MN) can be obtained. The nested m-trails are used in the monitoring burst (m-burst) framework, in which the MN can localize any SRLG failure by inspecting the optical bursts traversing it. An integer linear program (ILP) and a heuristic are proposed for the network decomposition and are verified by numerical experiments. We show that the proposed method significantly reduces the required fault localization latency compared with existing methods. Finally, we demonstrate that nested m-trails can also be used in adaptive probing to find SRLG faults in all-optical networks. The nested m-trail based probing method needs a significantly smaller number of sequential probes. Thus, the method overcomes one of the important hurdles to deploying adaptive probing in all-optical networks: the large number of sequential probes needed to localize SRLG faults. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(4), 347–363 2015

The capacitated fixed-charge network design (FCND) problem considers the simultaneous optimization of capacity installation and traffic routing, where a fixed cost is paid for opening a link and a linear routing cost is paid for sending traffic flow(s) on that link. The routing decisions must be made such that the traffic flows remain bounded by the installed link capacities. The FCND problem appears as a particular case of the combined multiperiod network design and traffic flow routing problem with time-dependent demands, formulated in this article as a mixed integer linear program (MILP). A compact formulation based on the aggregation of traffic flows per destination and an extended formulation, where flows are decomposed by origin-destination pairs while keeping the requirement of destination-based routing, were proposed in D. Papadimitriou and B. Fortz, IEEE International Conference on Communications (2014), pp. 1124–1130, and IEEE Global Communications Conference (2014), pp. 1303–1309, respectively. In this article, we propose to solve this computationally challenging problem by means of a rolling horizon heuristic, with the objective of decreasing the computational time while degrading the quality of the solution as little as possible. The resulting improvements make it possible to progressively overcome the computational limits encountered when solving such problems, in particular as the network size and the number of periods increase. The improvements provided by the rolling horizon heuristic can be further exploited by extending the proposed model to account for different patterns of failures that may affect installed arcs over time. For this purpose, our generalized MILP formulation comprises a time-variable link maintenance cost function. We further analyze the quality of the results for the proposed formulation with different link maintenance cost functions, with the objective of deriving the best arc replacement strategy. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(4), 364–379 2015

Distributed storage of data files in different nodes of a network enhances its fault tolerance by offering protection against node and link failures. Reliability is often achieved through redundancy in one of two ways: (i) storage of multiple copies of the entire file at different locations (nodes), or (ii) storage of file segments (not entire files) at different node locations. In the file distribution scheme, *n* file segments are created from a file in such a way that the entire file can be reconstructed by accessing any *k* of the segments. For the reconstruction scheme to work, it is essential that at least *k* segments of the file are stored in nodes that are *connected* in the network. However, in the event of node/link failures, the network might become disconnected (i.e., split into several connected components). We focus on node failures that are *spatially correlated* or *region based*. Such failures are often encountered in disaster situations or natural calamities, where only the nodes in the disaster zone are affected. The first goal of this research is to design a *least cost* file storage scheme to ensure that, no matter which region is destroyed, resulting in fragmentation of the network, a *largest connected component* of the residual network will have enough file segments with which to *reconstruct the entire file*. If the least cost to ensure this objective is within the *allocated budget*, the storage design will be *all region fault-tolerant* (ARFT). If the least cost *exceeds the allocated budget*, an ARFT file storage design is impossible. The second goal of this research is to design file storage schemes that are *maximum region fault-tolerant within the allocated budget*. The third goal of this research is to investigate the impact of the coding parameters *n* and *k* on the storage requirements for ensuring *all region* or *maximum region* fault-tolerant design. We provide approximation algorithms for these problems and evaluate their performance through simulation on two real networks, comparing their results to the optimal solutions obtained using an integer linear program. The simulation results demonstrate that the approximation algorithms almost always produce near-optimal results in a fraction of the time needed to find the optimal solution. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(4), 380–395 2015
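The any-*k*-of-*n* segment property described above can be realized with a Reed-Solomon-style code. The toy sketch below is our own (over the prime field GF(257), not the article's scheme): data bytes are treated as polynomial evaluations, so any *k* shares determine the polynomial and hence the data:

```python
P = 257  # prime field large enough for byte values 0..255

def _lagrange_eval(points, x):
    """Evaluate the polynomial interpolating `points` at `x`, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def make_shares(data, n):
    """Treat the k data bytes as evaluations of a degree-(k-1) polynomial
    at x = 1..k, and publish its evaluations at x = 1..n as shares."""
    k = len(data)
    shares = [(x, data[x - 1]) for x in range(1, k + 1)]
    base = shares[:]
    for x in range(k + 1, n + 1):
        shares.append((x, _lagrange_eval(base, x)))
    return shares

def reconstruct(shares, k):
    """Recover the k data bytes from any k distinct shares."""
    pts = shares[:k]
    return [_lagrange_eval(pts, x) for x in range(1, k + 1)]
```

Because any *k* points determine a degree-(*k* - 1) polynomial, losing up to *n* - *k* shares (e.g., all shares stored inside one destroyed region) is survivable, which is exactly the property the region fault-tolerant storage designs above exploit.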