HAMILTONICITY OF GRAPHS PERTURBED BY A RANDOM GEOMETRIC GRAPH

Abstract. We study Hamiltonicity in graphs obtained as the union of a deterministic n-vertex graph H with linear degrees and a d-dimensional random geometric graph G^d(n,r), for any d ≥ 1. We obtain an asymptotically optimal bound on the minimum r for which a.a.s. H ∪ G^d(n,r) is Hamiltonian. Our proof provides a linear-time algorithm to find a Hamilton cycle in such graphs.


1. Introduction
Randomly perturbed graphs are one of the thriving areas in the study of random combinatorial structures, and many interesting results in this field have been proved in recent years. The main general goal in this area is to study properties of graphs which are obtained as the union of a deterministic graph H (usually with a minimum degree condition) and a random graph G, particularly when H does not satisfy the property of interest and G alone is unlikely to.
Research in this direction sparked off with the work of Bohman, Frieze and Martin [4], who studied Hamiltonicity (that is, the property of containing a cycle which covers every vertex of a graph) in H ∪ G(n,p), where H is an n-vertex graph with minimum degree at least αn, for some α ∈ (0, 1/2) (if α ≥ 1/2, then Dirac's theorem guarantees that H contains a Hamilton cycle), and G(n,p) is the binomial random graph, where each pair of vertices forms an edge independently with probability p. They showed that, whenever p ≥ C(α)/n, asymptotically almost surely (a.a.s.) H ∪ G(n,p) is Hamiltonian, improving on the threshold for Hamiltonicity in G(n,p) by a logarithmic factor. Since then, Hamiltonicity has also been considered in randomly perturbed directed graphs [4,19], hypergraphs [16,19,21] and subgraphs of the hypercube [9]. Many other properties have been considered as well (e.g., powers of Hamilton cycles [1,6,10,23], F-factors [3,7,8,15], spanning trees [5,18,20] or general bounded degree spanning graphs [6]), and in most cases significant improvements on the probability threshold have been achieved. To the best of our knowledge, all of these results consider (hyper-/di-)graphs perturbed by a binomial random structure, such as G(n,p), or its G(n,m) counterpart. Only very recently, Espuny Díaz and Girão [12] considered Hamiltonicity in graphs perturbed by a random regular graph.

(A. Espuny Díaz) Institut für Mathematik, Technische Universität Ilmenau, 98684 Ilmenau, Germany. E-mail address: alberto.espuny-diaz@tu-ilmenau.de. Date: 29th September 2022. This research has been partially supported by the Carl Zeiss Foundation.

¹Our results also extend to other ℓ_p norms, 1 ≤ p ≤ ∞, the only difference being the value of the constant C = C(α, d). For more information about random geometric graphs, we recommend the works of Penrose [24, 25].
The d-dimensional random geometric graph G^d(n,r) is obtained by assigning each of n vertices an independent, uniformly random position in [0,1]^d and joining two vertices by an edge whenever the (Euclidean¹) distance between their positions is at most r. Our goal here is to prove the following result.

Theorem 1. For every integer d ≥ 1 and every α ∈ (0,1], there exists a constant C = C(α,d) such that the following holds. Let H be an n-vertex graph with minimum degree at least αn, and let r ≥ (C/n)^{1/d}. Then, a.a.s. H ∪ G^d(n,r) is Hamiltonian.

This result is asymptotically best possible, up to the constant factor C (which we will make no effort to optimise). The rest of the paper is organised as follows. In section 2, we set our notation and also introduce a probabilistic tool which will be important for our proof. We then prove theorem 1 in section 3. Finally, in section 4 we make several observations about our result, as well as some extensions that follow from its proof.

2. Preliminaries
2.1. Notation. For any integer n, we write [n] ≔ {i ∈ ℤ : 1 ≤ i ≤ n}. If we write parameters in a hierarchy, we assume they are chosen from right to left. To be more precise, whenever we write 0 < a ≪ b ≤ 1, we mean that there exists an unspecified, non-decreasing function f : ℝ → ℝ such that the ensuing claim holds for all 0 < b ≤ 1 and for all 0 < a ≤ f(b). This can be immediately generalised to longer hierarchies, and also to hierarchies where one parameter may depend on two or more other parameters. We say that a sequence of events {E_n}_{n∈ℕ} holds asymptotically almost surely (a.a.s.) if ℙ[E_n] → 1 as n → ∞. In all asymptotic statements, we will ignore rounding issues whenever these do not affect the arguments.
Most of our graph-theoretic notation is standard. Given a graph G, we use V(G) and E(G) to denote its vertex set and edge set, respectively. We always consider labelled graphs, meaning that whenever we say that G is an n-vertex graph we may implicitly assume that V(G) = [n]. If G is a geometric graph (meaning here that each of its vertices is assigned a position in ℝ^d for some integer d), then V(G) may interchangeably be used to refer to the set of positions to which the vertices of G are assigned, and similarly the notation for a vertex may refer to the vertex itself or to its position. If e = {u,v} is an edge, we usually abbreviate it as e = uv. Given any vertex v ∈ V(G), we define N(v) ≔ {u ∈ V(G) : uv ∈ E(G)}, and d(v) ≔ |N(v)| is its degree. We denote the minimum and maximum vertex degrees of G by δ(G) and Δ(G), respectively. Given a graph G and a set of vertices A ⊆ V(G), we denote by G[A] the graph on vertex set A whose edges are all edges of G which have both endpoints in A. A path is a graph whose vertices can be labelled v_1, …, v_k in such a way that its edges are precisely v_i v_{i+1} for all i ∈ [k−1]. If the endpoints of a path P (the first and last vertices in the labelling described above) are u and v, we sometimes refer to it as a (u,v)-path. Given a (u,v)-path P and a (v,w)-path P′ such that V(P) ∩ V(P′) = {v}, we write PP′ to denote the path obtained by concatenating P and P′ (formally, this is the union graph of P and P′). If P′ is a single edge vw, we will write this as Pw. Multiple concatenations will be written in the same way.

2.2. Azuma's inequality. Let Ω be an arbitrary set (in our case, we will take Ω = [0,1]^d), and let f : Ω^n → ℝ be some function. We say that f is k-Lipschitz, for some positive k ∈ ℝ, if, for all x, y ∈ Ω^n such that x and y are identical in all but one coordinate, we have that |f(x) − f(y)| ≤ k. The following consequence of Azuma's inequality (see, for instance, the book of Janson, Łuczak and Ruciński [17, Corollary 2.27]) will be useful for bounding the deviations of certain random variables.

Lemma 2. Let X_1, …, X_n be independent random variables taking values in a set Ω, and let f : Ω^n → ℝ be a k-Lipschitz function. Then, for any t ≥ 0, the random variable X ≔ f(X_1, …, X_n) satisfies that

ℙ[|X − 𝔼[X]| ≥ t] ≤ 2e^{−t²/(2k²n)}.
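To make the Lipschitz condition concrete, here is a toy instance in dimension d = 1 (a hypothetical example, not from the paper): the number of empty cells of a tessellation is a 1-Lipschitz function of the point positions, since resampling a single point frees at most one cell and occupies at most one other.

```python
import random

def empty_cells(points, m):
    """Number of the m equal subintervals ("cells") of [0, 1] containing
    no point; a 1-Lipschitz function of the point positions."""
    occupied = {min(int(x * m), m - 1) for x in points}
    return m - len(occupied)

# Changing one coordinate moves the value by at most 1.
random.seed(0)
pts = [random.random() for _ in range(200)]
base = empty_cells(pts, 50)
for _ in range(100):
    trial = pts[:]
    trial[7] = random.random()  # resample a single point
    assert abs(empty_cells(trial, 50) - base) <= 1
```

Lemma 2 then bounds the deviation of such a statistic from its mean by 2e^{−t²/(2n)}; statistics of exactly this type (counts of sparse or vacant cells) appear in the proof below.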

3. Proof of theorem 1
Let 0 < 1/n_0 ≪ 1/C ≪ 1/d, α ≤ 1, where d ∈ ℕ. Throughout, we assume that n ≥ n_0 is a natural number. Let r ≔ (C/n)^{1/d}, s ≔ 1/⌈2√d/r⌉ and k ≔ ns^d. Our choice of n_0 ensures that n is sufficiently large for all the ensuing claims to hold. (On an intuitive level, we may replace the hierarchy above by 0 < 1/n ≪ 1/C ≪ 1/d, α ≤ 1, and may think of k as essentially interchangeable with C.) Tessellate [0,1]^d using d-dimensional hypercubes of side s (intuitively, s is close to r/(2√d), but possibly slightly smaller to guarantee that the tessellation above exists). We refer to each of the smaller d-dimensional hypercubes as a cell, and denote the set of all cells as C. Observe that |C| = s^{−d} = n/k. We say that two cells c_1, c_2 ∈ C are friends if their boundaries intersect (in particular, if the boundaries share a single point, they count as intersecting). It follows that each cell is friends with at most 3^d − 1 other cells. Given any set of points P ⊆ [0,1]^d, we say that a cell is sparse with respect to P if it contains at most K ≔ 2·3^d points of P, and we call it dense with respect to P otherwise.
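The tessellation bookkeeping above can be sketched as follows (a hypothetical helper, not from the paper; cells are encoded as integer index vectors, and K is the sparseness threshold from the text):

```python
from itertools import product

def cell_of(point, s):
    """Index vector of the cell of side s containing a point of [0,1]^d
    (we assume 1/s is an integer, as guaranteed by the choice of s)."""
    m = round(1 / s)
    return tuple(min(int(x / s), m - 1) for x in point)

def friends(c1, c2):
    """Two distinct cells are friends iff their boundaries intersect,
    i.e. their index vectors differ by at most 1 in every coordinate."""
    return c1 != c2 and all(abs(a - b) <= 1 for a, b in zip(c1, c2))

def sparse_cells(points, s, d, K):
    """All cells containing at most K of the given points (this includes
    cells containing no point at all)."""
    m = round(1 / s)
    counts = {}
    for p in points:
        c = cell_of(p, s)
        counts[c] = counts.get(c, 0) + 1
    return [c for c in product(range(m), repeat=d) if counts.get(c, 0) <= K]
```

Since s ≤ r/(2√d), any two points lying in the same cell or in friend cells are at Euclidean distance at most 2s√d ≤ r; this is the observation used in the next paragraph, and each cell has at most 3^d − 1 friends.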
Consider a labelling of the vertices of V(H) as v_1, …, v_n, and let x_1, …, x_n be independent uniform random variables on [0,1]^d. We may assume that none of the variables take their values in the boundary of any of the cells, as this is an event that occurs with probability 1. Consider the random geometric graph G = G^d(n,r) on the vertex set of H obtained by assigning position x_i to v_i. Note that, by the definition of s, if some v ∈ V(G) lies in a cell c ∈ C, then it is joined by an edge of G to all other vertices in c, as well as to all vertices in cells which are friends of c.
Let us provide here a brief sketch of the proof. When considering the random geometric graph G^d(n,r), one can show that a.a.s. "most" cells will be dense with respect to the vertex set, and only a small proportion of them will be sparse. Now remove all sparse cells from consideration and define an auxiliary graph Γ, whose vertex set is the set of remaining cells, and where two cells are joined by an edge if they are friends. A simple algorithm allows us to construct a cycle containing all the vertices which lie in cells that form a connected component in Γ. Indeed, fix one such component and consider a spanning subtree T. By definition, T has bounded degree. One can then "walk" along the edges of T to visit all cells, using each edge of T exactly twice, and end in the starting cell. While doing this, and since the edges of T correspond to cells which are friends, one can incorporate the vertices in the cells into a path (where a suitable number of vertices must be chosen each time a cell is visited). Eventually, this path gets closed into a cycle when the walk returns to the starting cell. The bound on the maximum degree of T and the definition of sparse cells (with respect to V(G)) are crucial for this process to work. This simple idea has been at the heart of many results about Hamiltonicity in random geometric graphs [2,11,22]. Since "most" cells are dense and we can apply the previous algorithm to each component of Γ, we can obtain a set of cycles covering almost all vertices. In order to obtain a Hamilton cycle, all that remains is to join the cycles into a single, longer cycle and to incorporate the missing vertices. It is for this purpose that we will need to use the edges of H (and also be somewhat careful when choosing the cycles above). The following definition will be crucial to achieve our goal.
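The component structure of the auxiliary graph Γ described in the sketch can be computed cell by cell; the following is a small illustration (hypothetical code, with the dense cells given as integer index vectors):

```python
from collections import deque
from itertools import product

def gamma_components(dense_cells):
    """Connected components of Γ: the vertices are the dense cells, and
    two cells are adjacent iff they are friends (index vectors differing
    by at most 1 in every coordinate)."""
    dense = set(dense_cells)
    seen, comps = set(), []
    for start in dense:
        if start in seen:
            continue
        seen.add(start)
        comp, queue = [], deque([start])
        while queue:
            c = queue.popleft()
            comp.append(c)
            # enumerate the at most 3^d - 1 potential friends of c
            for delta in product((-1, 0, 1), repeat=len(c)):
                nb = tuple(a + b for a, b in zip(c, delta))
                if nb != c and nb in dense and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        comps.append(sorted(comp))
    return sorted(comps)
```

The proof then builds one cycle per component and uses edges of H to merge the cycles and to absorb the vertices lying in sparse cells.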
Given any pair of (not necessarily distinct) vertices u, v ∈ V(H), we say that a cell is {u,v}-linked (with respect to H) if it contains two distinct vertices a, b ∈ V(H) \ {u,v} such that ua, vb ∈ E(H); if it does not contain such a pair, we say that it is {u,v}-unlinked. In the particular case when u = v, we will say that the cell is u-linked/unlinked to mean that it is {u,u}-linked/unlinked. One can easily see an intuitive reason why this notion is useful. Say that a cell c is sparse with respect to V(G) and contains two vertices u and v, and say we find a {u,v}-linked cell c′ which is dense with respect to V(G). Then, when "walking" through the component of Γ containing c′, we may actually leave this component to reach u, incorporate all vertices in c into the path we are constructing, and then go back from v to c′. This will allow us to incorporate all vertices in sparse cells into the cycles we constructed above. A similar approach will allow us to join the different cycles.
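As a sanity check, the linkedness of a single cell is easy to test given the adjacency structure of H (a hypothetical encoding, not from the paper; `H_adj` maps each vertex to its set of H-neighbours):

```python
def is_linked(cell_vertices, H_adj, u, v):
    """Decide whether a cell (given by the list of vertices it contains)
    is {u, v}-linked: it must contain two *distinct* vertices a, b, both
    outside {u, v}, with ua and vb edges of H."""
    cand_u = [w for w in cell_vertices if w not in (u, v) and w in H_adj[u]]
    cand_v = [w for w in cell_vertices if w not in (u, v) and w in H_adj[v]]
    return any(a != b for a in cand_u for b in cand_v)
```

Since δ(H) ≥ αn, each of u and v has neighbours in a constant fraction of the cells, which is the intuition behind claim 1(ii) below.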
We now proceed to formally prove theorem 1. We begin with the following claim.

Claim 1. The following properties hold a.a.s.:
(i) The number of cells which are sparse with respect to V(G) is at most e^{−k/2} n/k.
(ii) For each pair of (not necessarily distinct) vertices u, v ∈ V(H), the number of {u,v}-unlinked cells is at most 4e^{−αk/4} n/k.
Proof. (i) For each i ∈ [n] and each cell c ∈ C, we have that ℙ[x_i ∈ c] = s^d = k/n. Thus, since the variables are independent, for a fixed cell c we have that

ℙ[c is sparse with respect to V(G)] = Σ_{j=0}^{K} (n choose j) (k/n)^j (1 − k/n)^{n−j} ≤ (K+1) (n choose K) (k/n)^K (1 − k/n)^{n−K} ≤ (K+1) k^K e^{−k(n−K)/n} ≤ e^{−2k/3}.

(Here the first and third inequalities rely on the hierarchy we established at the beginning of the proof; in particular, we may think of k as being much larger than K. We also use the fact that s^d = k/n.) Let Y ≔ f(x_1, …, x_n) be the number of cells which are sparse with respect to V(G), so 𝔼[Y] ≤ k^{−1} n e^{−2k/3}. Since f is 1-Lipschitz, it follows by lemma 2 that ℙ[Y ≥ e^{−k/2} n/k] ≤ e^{−Θ(n)}.

(ii) We proceed in a similar way. For each positive integer ℓ, let f_ℓ : ([0,1]^d)^ℓ → ℤ_{≥0} be the function which, given a set of points p_1, …, p_ℓ ∈ [0,1]^d, returns the number of cells which contain at most one point. Clearly, f_ℓ is 1-Lipschitz for every ℓ ∈ ℕ. Given any set A ⊆ V(H), we say that a cell is A-vacant if it contains at most one of the vertices of A.
Fix two (not necessarily distinct) vertices u, v ∈ V(H), and let Y be the number of {u,v}-unlinked cells. Let S ≔ N_H(u) ∩ N_H(v). We split the analysis into two cases. Assume first that ℓ ≔ |S| ≥ αn/2. Let i_1, …, i_ℓ be the indices of the vertices of G which lie in S. It follows that, for a fixed cell c,

ℙ[c is S-vacant] = (1 − k/n)^ℓ + ℓ (k/n) (1 − k/n)^{ℓ−1} ≤ e^{−αk/4}/2.

Note that every {u,v}-unlinked cell is S-vacant, so Y ≤ Y′ ≔ f_ℓ(x_{i_1}, …, x_{i_ℓ}). Since f_ℓ is 1-Lipschitz, by lemma 2 we conclude that ℙ[Y′ ≥ 2e^{−αk/4} n/k] ≤ e^{−Θ(n)}. Assume now that ℓ < αn/2 and let A ≔ N_H(u) \ (N_H(v) ∪ {v}) and B ≔ N_H(v) \ (N_H(u) ∪ {u}). Note that |A|, |B| ≥ αn/2. By following the same argument as above, we have that, with probability at least 1 − e^{−Θ(n)}, the number of A-vacant cells is at most 2e^{−αk/4} n/k. The same holds for the number of B-vacant cells. Note that every {u,v}-unlinked cell must be A-vacant or B-vacant. We thus conclude that ℙ[Y ≥ 4e^{−αk/4} n/k] ≤ e^{−Θ(n)}. Finally, the statement holds by a union bound over all pairs of vertices {u,v} ⊆ V(H). ◭

Condition on the event that G satisfies the properties of the statement of claim 1, which holds a.a.s. Let C_s be the set of cells which are sparse with respect to V(G) (to which we simply refer as "sparse cells" from now on), and let C_d ≔ C \ C_s. We define an auxiliary graph Γ with vertex set C_d where two cells are joined by an edge whenever they are friends. In particular, Δ(Γ) ≤ 3^d − 1.
Claim 2. The number of connected components of Γ is at most e^{−k/3} n/k.

Proof. By claim 1(i), there are at most e^{−k/2} n/k sparse cells. Recall that the total number of cells is n/k. Thus, the number of components of Γ of size at least n^{1/d}/(4k^{1/d}) is at most 4(n/k)^{1−1/d} ≤ e^{−k/3} n/(2k). Now consider any component R of Γ with fewer than n^{1/d}/(4k^{1/d}) cells. We claim that the number of sparse cells which are friends with some cell of R is at least 2^d − 1. Indeed, choose an arbitrary cell c_0 ∈ V(R), and choose a corner of the hypercube [0,1]^d which is at distance at least 1/3 from c_0 in each coordinate direction. Let us assume, without loss of generality, that said corner is {1}^d. Now, for i ≥ 0, we follow an iterative process. Consider the cube Q_i of side length 2s having c_i in a corner and growing in all directions towards {1}^d; this is a union of 2^d cells, all of which are friends with each other. If Q_i only contains sparse cells (other than c_i), we are done (and stop the iterative process), so assume otherwise. Then, there must be some dense cell c_{i+1} ⊆ Q_i \ c_i. This new cell c_{i+1}, being a friend of c_i, also lies in R, and is a translation of c_i by a vector all of whose coordinates are non-negative and bounded from above by s. This process keeps going until either we find an index i such that Q_i contains 2^d − 1 sparse cells or we reach one of the sides of the hypercube [0,1]^d. However, reaching the side requires at least 1/(3s) = n^{1/d}/(3k^{1/d}) iterations, which contradicts the assumption on the order of R, so our claim must indeed hold.
On the other hand, trivially no sparse cell can be friends with more than 3^d − 1 cells which lie in distinct components of Γ. Now, a simple double-counting argument guarantees that the number of components of Γ with fewer than n^{1/d}/(4k^{1/d}) cells is at most 2^d e^{−k/2} n/k ≤ e^{−k/3} n/(2k). Together with the bound in the first paragraph, this completes the proof. ◭

With the statements of claims 1 and 2, we can already begin the main part of the proof. We begin by setting up some notation. Let C_0 ⊆ C_s be the set of all cells which contain no vertices of V(G), and let C_1 ⊆ C_s be the set of all cells which contain exactly one vertex of V(G). Let C* ≔ C_s \ C_0, F ≔ C_s and F* ≔ ∅. Both F and F* constitute sets of "forbidden" cells which we will avoid when connecting vertices from different cells via edges of H. These sets will be updated as we choose edges of H to construct the Hamilton cycle. Indeed, as we choose edges of H, each cell which contains a vertex incident to any of these edges will be added to F, so that the different choices of edges do not interact with each other. These edges of H are chosen "from a cell", and F* will contain all dense cells from which we must choose edges in a "correct" way (namely, ensuring that our process works), and thus will be avoided when choosing edges of H from other cells. We claim that, for the rest of the proof, we will always have C_s ⊆ F and

|F ∪ F*| ≤ 7e^{−k/3} n/k,    (3.1)

and assume so throughout. This bound will follow from the fact that we update F and F* at most 2e^{−k/3} n/k times, and each time the size of their union will increase by at most 3.
Consider an auxiliary graph Γ′ ≔ Γ. We are first going to modify this graph Γ′ into a connected graph. We will update Γ′ in t − 1 steps, where t ≤ e^{−k/3} n/k is the number of components of Γ (see claim 2). In each of these steps, we will add exactly one edge to Γ′, connecting two of its components. This auxiliary edge will correspond to a way in which we will later connect the cycles which we will construct in each component; we build the structure necessary for this at the same time as we update Γ′. Our definitions of F and F* are crucial in guaranteeing that the upcoming process can be carried out. In particular, F and F* will always be disjoint, none of the components of (the current form of) Γ′ will be contained in F, and F* will always contain at most one cell of each component of Γ′. Given any cell c ∈ C_d, let Γ′(c) denote the connected component of Γ′ which contains c. Initialise a set of vertices B_d and two sets of edges E_d and E*_d as empty sets. We proceed as follows.

1. For each i ∈ [t − 1], choose a smallest component R of Γ′, and choose an arbitrary cell c ∈ V(R) \ F (which exists since V(R) ⊄ F). Let u and v be two arbitrary distinct vertices in c. Choose an arbitrary {u,v}-linked cell c′(c) ∈ C_d \ (F ∪ F* ∪ V(R)); its existence follows from claim 1(ii), (3.1) and the fact that R is a smallest component of Γ′. Add the edge {c, c′(c)} to Γ′. Let a ∈ N_H(u) ∩ c′(c) and b ∈ N_H(v) ∩ c′(c) be two distinct vertices. Add u, v, a and b to B_d, add ua and vb to E*_d, and add uv and ab to E_d. Then, add c and c′(c) to F (if c′(c) ∈ F*, remove it from this set, so that F and F* remain disjoint). Finally, if |V(Γ′(c)) \ F| = 1, add this remaining cell to F*.

Each iteration of step 1 reduces the number of components by one, so it follows that, after we perform all iterations, Γ′ is connected. Moreover, a moment of thought reveals that F* must now be empty.
We next define some absorbing paths which will be used to incorporate all vertices in sparse cells into a Hamilton cycle. We define these iteratively in |C*| steps. We proceed as follows.
Let B ≔ B_s ∪ B_d and E ≔ E_s ∪ E_d. Note that, by construction, for each c ∈ C we have that |c ∩ B| ≤ 2 and |E(G[V(G) ∩ c]) ∩ E| ≤ 1. We are now ready to construct the Hamilton cycle. The main step for this is to construct a cycle in each component of Γ. We make sure that these cycles contain all edges of E spanned by the vertices in the cells of the corresponding component. For each component R of Γ, we proceed as follows.
4. Let T be a spanning tree of R. In particular, Δ(T) < 3^d. Consider an arbitrary traversal of T which, starting at a given cell, goes through every edge of T twice and ends in the starting cell (this can be given, e.g., by a DFS on T taking any cell c_0 as a root). This traversal takes ℓ ≔ 2(|V(T)| − 1) steps, each step corresponding to an edge of T. We use this traversal to construct a cycle ℭ(R) as follows.
Assume the traversal starts in a given cell c_0. Choose a vertex u_0 ∈ (V(G) ∩ c_0) \ B and let P_0 ≔ u_0; this will be the beginning of a path which we will grow into ℭ(R). For notational purposes, set V(P_{−1}) ≔ ∅. For each i ∈ [ℓ] we define a path P_i as follows. Let c be the current cell in our traversal, and let u_{i−1} ∈ (V(G) ∩ c) \ B be the last vertex of P_{i−1}. Let c′ be the next cell of the traversal. Because c and c′ are friends, every vertex in c′ is joined to every vertex in c by an edge of G. Choose an arbitrary vertex u_i ∈ (V(G) ∩ c′) \ (B ∪ V(P_{i−1})). If this is the last time that c is visited in the traversal of T, let P′ be any path with vertex set (V(G) ∩ c) \ V(P_{i−2}) having u_{i−1} as an endpoint and such that, if the vertices in c span some edge e ∈ E, then e ∈ E(P′), and let P_i ≔ P_{i−1}P′u_i; otherwise, simply let P_i ≔ P_{i−1}u_i. To complete the cycle, let P′ be a (u_0, u_ℓ)-path whose internal vertices are all vertices of (V(G) ∩ c_0) \ V(P_ℓ) and such that, if the vertices in c_0 span some edge e ∈ E, then e ∈ E(P′). We then set ℭ(R) ≔ P_ℓ ∪ P′. Observe that every cell of R contains more than K = 2·3^d vertices and is visited at most 3^d times throughout the traversal; this, together with the fact that no cell spans more than one of the edges of E, guarantees that the choices of vertices described throughout the process can always be carried out.
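Step 4 can be sketched in code. The following simplified version (hypothetical; it ignores the forbidden set B and the prescribed edges of E) walks a spanning tree of cells, takes one fresh vertex per intermediate visit of a cell and all remaining vertices on the last visit, and returns a cyclic ordering of all vertices of the component:

```python
def cycle_from_tree(tree, cell_vertices, root):
    """tree: dict mapping each cell to the list of its tree-neighbours;
    cell_vertices: dict mapping each cell to its (non-empty) vertex list.
    Assumes each cell holds more vertices than its number of visits, as
    guaranteed for dense cells in the text (more than 2*3^d vertices,
    at most 3^d visits)."""
    # Closed walk through the tree using every edge exactly twice.
    walk = []
    def dfs(c, parent):
        walk.append(c)
        for nb in tree[c]:
            if nb != parent:
                dfs(nb, c)
                walk.append(c)
    dfs(root, None)
    last_visit = {c: i for i, c in enumerate(walk)}  # last index wins
    unused = {c: list(vs) for c, vs in cell_vertices.items()}
    cycle = []
    for i, c in enumerate(walk):
        if i == last_visit[c]:
            cycle.extend(unused[c])        # dump all remaining vertices
            unused[c] = []
        else:
            cycle.append(unused[c].pop())  # one fresh vertex per visit
    return cycle
```

Consecutive vertices of the returned order lie in the same cell or in friend cells, so they are adjacent in G, and closing the order up gives a cycle as in the construction of ℭ(R).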
Let ℭ denote the graph which is the union of all the cycles constructed in step 4. In particular, E ⊆ E(ℭ). We can now combine the cycles into a single cycle spanning all vertices in cells of C_d by letting ℭ′ ≔ (ℭ \ E_d) ∪ E*_d. In order to complete the proof, for each c ∈ C*, replace the edge e_c ∈ E(ℭ′) by the absorbing path P_c. The resulting graph is a Hamilton cycle of H ∪ G^d(n,r). □
Remark. By using results from percolation theory, for d ≥ 2 one can show that a.a.s. the graph Γ actually contains a "giant" component which contains, say, more than 9/10 of the cells. This fact can be used instead of claim 2 to streamline our proof, simplifying some of its technical details, such as the need for the set F*. Indeed, step 1 can be avoided entirely: for every cell c which does not lie in this giant component we may find a suitable {u,v}-linked cell which does, for some u, v ∈ V(G) ∩ c, as in steps 2 and 3. Step 4 can then be applied only to this giant component. This approach, however, does not work when d = 1, as Γ will a.a.s. not contain any component of linear size.

4. Final remarks
Theorem 1 is asymptotically best possible in the sense that, for each α ∈ (0, 1/2), if we let r = (c/n)^{1/d} for a sufficiently small constant c, then there exist n-vertex graphs H with minimum degree αn such that H ∪ G^d(n,r) is a.a.s. not Hamiltonian. Indeed, let H be a complete unbalanced bipartite graph with parts A and B of sizes αn and (1−α)n, respectively. Clearly, if G^d(n,r)[B] contains more than αn isolated vertices, then H ∪ G^d(n,r) cannot contain a Hamilton cycle. Our claim thus follows immediately from the following lemma (where we also allow α to depend on n).

Lemma 3. Let d ≥ 1 be an integer, and let θ_d be the volume of the ball of radius 1 in d dimensions. If 1/2 > α = α(n) = ω(n^{−1/2}), then a.a.s. G^d(n, (−log(2α)/(2θ_d n))^{1/d}) contains more than 2αn isolated vertices.
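The obstruction used here is easy to verify computationally on small instances; the following brute-force count of isolated vertices in a finite point set (a hypothetical quadratic-time helper, using the Euclidean norm) illustrates the quantity bounded in lemma 3:

```python
import math

def isolated_vertices(points, r):
    """Indices of points with no other point at distance <= r, i.e. the
    isolated vertices of the geometric graph on the given positions."""
    return [i for i, p in enumerate(points)
            if all(i == j or math.dist(p, q) > r
                   for j, q in enumerate(points))]
```

In the construction above, one would restrict this count to the positions of the part B: once it exceeds |A| = αn, the bipartite structure of H cannot accommodate all of these vertices on a single cycle.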
A natural question arises from theorem 1: what can we say about Hamiltonicity of randomly perturbed graphs when α is allowed to be a function of n (which tends to 0)? This same question was recently considered by Hahn-Klimroth, Maesaka, Mogge, Mohr and Parczyk [14] when the random graph is binomial, and they extended the result of Bohman, Frieze and Martin [4] by proving that G = G(n, Θ(−log α/n)) is both sufficient and necessary. In view of lemma 3, the following seems a natural conjecture (and, if true, it would be best possible up to the value of the constant C_d).

Conjecture 4. For every integer d ≥ 1, there exists a constant C_d such that the following holds. Let α = α(n) ∈ (0, 1/2), and let H be an n-vertex graph with minimum degree at least αn. Then, a.a.s. H ∪ G^d(n, C_d(−log α/n)^{1/d}) is Hamiltonian.
We also wish to remark upon two features of our proof of theorem 1. First, the proof is constructive, meaning that it provides an algorithm to find Hamilton cycles in H ∪ G^d(n,r). In particular, if the properties of claim 1 hold (which occurs a.a.s.), it provides a deterministic algorithm that outputs a Hamilton cycle in H ∪ G^d(n,r). Furthermore, observe that, throughout the proof, we actually do not need claim 1(ii) to hold for every pair of vertices, but only for those that we pick throughout the process, which are only linearly many. This means that the properties of claim 1 can be checked in O(n²) time, which is linear in the size of H ∪ G^d(n,r). Then, the construction of the Hamilton cycle also takes O(n²) time. This follows directly from the proof, and can be checked by retracing the steps.
Second, all throughout the paper we have considered the Euclidean ℓ_2 norm for simplicity. Our proof generalises directly to all ℓ_p norms, 1 ≤ p ≤ ∞, by adjusting some of the constants.
Furthermore, we note that, under the same conditions as in the statement of theorem 1, we can actually show that H ∪ G^d(n,r) is pancyclic. Indeed, our proof can easily be modified for this. From the Hamilton cycle that we construct, given that it contains many subpaths whose vertices actually form cliques in G^d(n,r), one can iteratively reduce the number of vertices in the cycle. This can be balanced with also removing some of the paths which correspond to sparse cells, as well as leaves of the auxiliary graph Γ′, to prove that cycles of all lengths can be constructed.
Finally, we remark that our result opens the door to questions about other spanning structures in dense graphs perturbed by random geometric graphs. In this direction, Espuny Díaz and Hyde [13] very recently generalised theorem 1 to deal with powers of Hamilton cycles, showing that, for k independent of n, the required radius for the a.a.s. containment of the k-th power of a Hamilton cycle in this setting is the same (up to constant factors depending on k and d) as that required for Hamiltonicity. This has a number of direct consequences, such as for questions related to F-factors or 2-universality (which directly implies pancyclicity).
Acknowledgements. I would like to thank Xavier Pérez-Giménez for some very helpful discussions about the topic of this paper. I am also indebted to anonymous referees for their helpful remarks.

References