Resolution of a conjecture on majority dynamics: Rapid stabilization in dense random graphs

We study majority dynamics on the binomial random graph G(n, p) with p = d/n and d > λn^{1/2}, for some large constant λ > 0. In this process, each vertex has a state in {−1, +1}, and in each round every vertex adopts the state of the majority of its neighbors, retaining its own state in the case of a tie. We show that with high probability the process reaches unanimity in at most four rounds. This confirms a conjecture of Benjamini et al.


INTRODUCTION
Majority dynamics is a process on a graph G = (V, E) which evolves in discrete steps: at every step t ≥ 0, each vertex v ∈ V has a state S_t(v) ∈ {−1, +1}, which changes according to the majority of the states of its neighbors in G. Namely, given the configuration {S_t(v)}_{v∈V} just after step t, vertex v ∈ V has state
S_{t+1}(v) = sgn( ∑_{u∈N(v)} S_t(u) ) if ∑_{u∈N(v)} S_t(u) ≠ 0, and S_{t+1}(v) = S_t(v) otherwise,
where N(v) denotes the set of vertices adjacent to v in G, and sgn(x) = −1 if x < 0 and +1 if x > 0. In other words, v adopts the majority state of its neighbors and, in the case of a tie, retains its current state. This class of processes can be seen as a generalization of cellular automata such as those introduced by von Neumann [11]. In particular, it can be viewed as a variation of the well-known Conway's Game of Life [3], a two-state game on the 2-dimensional integer lattice with a slightly richer set of rules. In a different context, these processes were considered by Granovetter [7] as a model of the evolution of social influence. There is a certain resemblance to the class of processes known as majority bootstrap processes, but the crucial difference is that majority dynamics is non-monotone, in the sense that a vertex may change its state multiple times. Thus, unlike the classical bootstrap processes, the process may never stabilize into a final configuration.
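For concreteness, the update rule above can be implemented directly (an illustrative sketch of ours, not code from the paper): the graph is given as an adjacency list, and a tie retains the current state.

```python
def majority_step(adj, state):
    """One synchronous round of majority dynamics.

    adj: dict mapping each vertex to a list of its neighbors.
    state: dict mapping each vertex to -1 or +1.
    A tie (neighbor states summing to 0) retains the current state.
    """
    new_state = {}
    for v, neighbors in adj.items():
        total = sum(state[u] for u in neighbors)
        if total > 0:
            new_state[v] = 1
        elif total < 0:
            new_state[v] = -1
        else:
            new_state[v] = state[v]  # tie: keep current state
    return new_state


# Example: a path on three vertices. The middle vertex sees two +1
# neighbors and flips to +1, while the endpoints copy the middle vertex.
adj = {0: [1], 1: [0, 2], 2: [1]}
state = {0: 1, 1: -1, 2: 1}
state = majority_step(adj, state)
```

Running two steps on this small path also illustrates the Goles–Olivos phenomenon discussed next: the configuration becomes periodic with period 2.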
However, as Goles and Olivos proved in [6], if G is finite, then eventually (i.e., for t sufficiently large) the process becomes periodic with period at most 2. More specifically, there is a t_0 depending on G such that for all t > t_0 and every v ∈ V we have S_t(v) = S_{t+2}(v).
Majority dynamics is also a special case of voting with q ≥ 2 alternative opinions, see [8]. Each voter corresponds to a vertex of a graph, and the initial opinions are selected from the set {1, …, q} independently of every other voter according to some distribution. In each round, a voter adopts the most popular opinion among its neighbors.
In this paper we consider the evolution of majority dynamics on G(n, p), which is the random graph on the vertex set V_n = [n] := {1, …, n}, where every pair of distinct vertices is present as an edge with probability p, independently of any other pair. We will consider this process on G(n, p) with initial configuration {S_0(v)}_{v∈V_n} being a family of independent random variables uniformly distributed in {−1, +1}. That is, each vertex in V_n is initially in state +1 with probability 1/2, independently of the state of every other vertex.
Results regarding this setting were obtained recently by Benjamini et al. [2]. They showed that if p ≥ λn^{−1/2} and n > n_0, for some sufficiently large constants λ, n_0, then G(n, p) is such that with probability at least 0.4, over the choice of the random graph and the choice of the initial state, the vertices in V_n unanimously hold the initially most popular state after four rounds. Benjamini et al. conjectured that in fact this holds with high probability. The main result of this paper is the proof of their conjecture.

Theorem 1.1. For all 0 < ε ≤ 1 there exist λ, n_0 such that for all n > n_0, if p ≥ λn^{−1/2}, then G(n, p) is such that with probability at least 1 − ε, over the choice of the random graph and the choice of the initial state, the vertices in V_n unanimously have state sgn( ∑_{v∈V_n} S_0(v) ) after four rounds.
In our proof we exploit the fact that typically the initial numbers of vertices in the two states differ by at least Ω(√n). Tran and Vu [10] showed that when p is a constant, a significantly smaller majority already leads to unanimity in four steps, with probability close to one. In particular, they showed that this happens as soon as one of the states exceeds the other by a large enough constant.
One might think that unanimity is reached for other classes of sparser random graphs or expanding graphs. However, Benjamini et al. [2] proved that for the class of 4-regular random graphs or 4-regular expander graphs, with high probability unanimity is not reached at any time, if the probability of state +1 at the beginning of the process is between 1/3 and 2/3. However, this is not the case for d-regular λ-expanders, where λ is the bound on the second-largest eigenvalue in absolute value, provided that λ/d ≤ 3/16. Mossel et al. (Theorem 2.3 in [8]) showed that under this assumption unanimity is reached eventually, provided that the initial distribution of state +1 is sufficiently biased; the required bias is of order 1/√d. More recently, Zehmakan [12] proved a more general result on the evolution of majority dynamics on regular expander graphs. In particular, he proved that on a d-regular λ-expander graph G, when the initial configuration satisfies ∑_{v∈V(G)} S_0(v) ≥ 4(λ/d)n, majority dynamics will reach the configuration where every vertex has state +1 within O(log_{d²/λ²} n) rounds. Also, Gärtner and Zehmakan [5] showed that if the initial density of the −1s is 1/2 − δ for some δ > 0, then majority dynamics will eventually reach the configuration where every vertex has state +1.
Returning to the study of the process on G(n, p) with p = d/n, Zehmakan [12] also showed that if d > (1 + δ) log n, for some δ > 0, then with high probability the process reaches unanimity in a constant number of rounds. Another model of a similar flavor was analyzed by Abdullah and Draief [1]: instead of reading its entire neighborhood, every vertex samples k random vertices from its neighborhood and adopts the state of the majority of the vertices in the random sample. Abdullah and Draief considered this model on G(n, p) with p = d/n, where d ≥ (2 + δ) log n. They showed that if the initial density of one of the two states is bounded away from 1/2, then the above process will eventually arrive at unanimity with high probability. Moreover, the final state is the one that initially held the majority.
Besides the study of majority dynamics on random graphs, Benjamini et al. [2] considered the question of whether the Goles–Olivos theorem in [6], which guarantees eventual periodicity for finite graphs, also holds for infinite graphs satisfying certain assumptions. They showed that this is the case for the class of unimodular transitive graphs. These are vertex-transitive graphs (and therefore regular) in which flows that are invariant under the automorphism group are such that for every vertex the in-flow equals the out-flow. They also showed that stabilization to periodicity occurs within a number of rounds bounded in terms of the degree of the graph. They conjectured that this is the case for every bounded degree infinite graph.
Majority dynamics on other classes of graphs was recently considered by Gärtner and Zehmakan [4]. They analyzed majority dynamics on an n × n grid as well as on a torus. The initial state is determined by a random binomial subset of the vertex set: every vertex is initially set to −1 with probability p, independently of every other vertex.
The rest of the paper is devoted to the proof of Theorem 1.1. In Section 2 we present a heuristic of Benjamini et al. [2] and outline the proof of Theorem 1.1. We study the states after the first two rounds and the last two rounds in Sections 3 and 4, respectively. The proof of Theorem 1.1 is presented in Section 5. We conclude the paper with a discussion of a conjecture of Benjamini et al. [2] about smaller values of d.

HEURISTIC AND PROOF OUTLINE
It will be more convenient to use the average degree as the parameter instead of the edge probability. Set d := np, so that the condition on p in Theorem 1.1 translates to d ≥ λn^{1/2}. Let Δ_t := n^{−1} ∑_{v∈V_n} S_t(v) denote the average of the states of the vertices in V_n after step t. Benjamini et al. [2] conjectured that if d → ∞, the quantities (Δ_t²)_{t≥0} increase with high probability by a factor of d in each step. More specifically, their heuristic is that Δ_{t+1}² ≳ d·Δ_t², as long as d·Δ_t² ≤ 1. The heuristic is based on the assumption that redistributing the states of the vertices in every step does not alter the outcome significantly. More precisely, suppose that at the beginning of each step the state of every vertex is an independent {−1, +1}-valued random variable with expectation Δ_t. Then for a vertex v with d neighbors, the sum of the states of the neighbors behaves like N(dΔ_t, d(1 − Δ_t²)). Ignoring the possibility that there is no clear majority within the neighborhood, the probability that v will have state sgn(Δ_t) in the following step is roughly
Φ( √d·|Δ_t| / √(1 − Δ_t²) ) ≈ 1/2 + Θ(√d·|Δ_t|),
where Φ is the standard normal distribution function; this yields |Δ_{t+1}| ≈ √d·|Δ_t|. As the S_0(v)s are independent and identically distributed on {−1, +1}, we have E[Δ_0²] = 1/n. Hence, it is expected that the sequence Δ_1², Δ_2², … scales like d/n, d²/n, … until d^t ≈ n. Thereafter, almost unanimity is reached in one more step, whereas one final step is required to arrive at complete unanimity.
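The Gaussian step of this heuristic is easy to probe numerically (an illustration of ours, not from [2]): with Φ the standard normal distribution function, the bias after one round is 2Φ(√d·Δ/√(1 − Δ²)) − 1, and for a small starting bias its square grows by a factor of about (2/π)d.

```python
import math

def next_bias(delta, d):
    """Expected bias after one round under the Gaussian heuristic: a
    vertex ends up on the majority side with probability
    Phi(d*delta / sqrt(d*(1 - delta**2))), so the new bias is
    2*Phi(...) - 1."""
    z = d * delta / math.sqrt(d * (1.0 - delta ** 2))
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 2.0 * phi - 1.0

# Starting from a bias of order 1/sqrt(n), the squared bias should grow
# by a factor of roughly (2/pi)*d per round while d*delta**2 is small.
n, d = 10 ** 6, 1000
delta = 1.0 / math.sqrt(n)
growth = next_bias(delta, d) ** 2 / delta ** 2  # close to (2/pi)*d here
```

Iterating `next_bias` shows the bias saturating at 1 within a few rounds once d·Δ² exceeds 1, matching the "almost unanimity in one more step" part of the heuristic.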
The proof of Theorem 1.1 is inspired by this heuristic. More precisely, the proof consists of two major parts, each consisting of two steps. In the first part (Lemma 3.3) we show that with probability close to 1 almost every vertex adopts the state of the initial majority. Afterwards in the second part (Lemma 4.1) we prove that after two more steps, again with probability close to 1, every vertex will have the same state.
For the first part (Section 3), we will condition on the initial state satisfying |∑_{v∈V_n} S_0(v)| ≥ 2c√n, for some c > 0 such that the probability of this event is at least 1 − ε/4, for n large enough. Then, by using the second moment method on X_2(v) := ∑_{u∈N(v)} 1(S_1(u) = +1), we show that in two rounds an arbitrary vertex v will have adopted the initial majority with probability at least 1 − ε/20, when n is large enough. For the second moment method we need to calculate the expectation (Lemma 3.5) and the variance (Lemma 3.6) of the random variable X_2(v). Finally, Markov's inequality implies that with probability at least 1 − ε/2 most of the vertices have the same state as the initial majority.
In the second part of the proof (Section 4) we show that, with probability 1 − o(1) as n → ∞ over the choice of the underlying graph, if we start with a configuration where all but at most n/10 of the vertices have state +1, then in two more steps all vertices will be of state +1 (Lemma 4.1). This relies on an application of the union bound together with sharp concentration inequalities. Hence, the next two rounds lead to unanimity.

THE FIRST TWO ROUNDS
In this section we will show that an arbitrary vertex will have state +1 after two rounds with probability close to 1, if we condition on an initial state with a sufficient majority of +1s. Due to symmetry this also holds for the state of a vertex to be −1 after two rounds, when the initial state has a sufficient majority of −1s.
In order to achieve this we will first expose the initial state of every vertex and only start exposing the edges afterwards. The following lemma ensures that after exposing the initial state of the vertices one of the two states will have a sufficient majority; the proof can be found in Section 3.1.
Lemma 3.1 (Initial state). There exists c > 0 such that
P[ |∑_{v∈V_n} S_0(v)| ≥ 2c√n ] ≥ 1 − ε/4,
when n is large enough.
Throughout this section we will condition on the above event. Due to symmetry, we only need to consider the case when the initial state has a majority of +1s. Therefore, we actually condition on the event {∑_{u∈V_n} S_0(u) ≥ 2c√n}.

Lemma 3.2 (First two rounds). For any vertex v ∈ V_n,
P[ S_2(v) = +1 | ∑_{u∈V_n} S_0(u) ≥ 2c√n ] ≥ 1 − ε/20,
when n is large enough.
Roughly speaking, increasing the majority of +1s in the initial state should increase the probability that a vertex has state +1 at any step of the process. This in turn should imply that if we can establish a lower bound on the probability of S_2(v) = +1 conditional on the smallest possible value of ∑_{u∈V_n} S_0(u) satisfying ∑_{u∈V_n} S_0(u) ≥ 2c√n, then this bound will also hold when conditioning on the whole range. We establish the lower bound in Lemma 3.3 and apply it to prove Lemma 3.2. The smallest such value is essentially 2c√n, but we also need to take into account that ∑_{u∈V_n} S_0(u) only takes integer values between −n and n which have the same parity as n. To reflect this, we let ⟨x⟩ denote the equivalent of the ceiling function, taking the mod 2 equivalence into account; formally we define ⟨x⟩ := min_{k∈Z} {k ≥ x and k ≡ n mod 2}. Thus, in Lemma 3.3 we should condition on the event that ∑_{u∈V_n} S_0(u) = ⟨2c√n⟩. But to ease notation, we will omit the function ⟨·⟩ for the remainder of the paper.
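The parity-aware ceiling defined above (the smallest integer k with k ≥ x and k ≡ n mod 2) is elementary; a direct implementation, purely for illustration, is:

```python
import math

def parity_ceil(x, n):
    """Smallest integer k with k >= x and k congruent to n modulo 2."""
    k = math.ceil(x)
    if k % 2 != n % 2:
        k += 1
    return k
```

For instance, with n even, parity_ceil(4.2, n) gives 6, since 5 has the wrong parity.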
Let A_c denote the event that ∑_{u∈V_n} S_0(u) = 2c√n.

Lemma 3.3. For any vertex v ∈ V_n,
P[ S_2(v) = +1 | A_c ] ≥ 1 − ε/20,
when n is large enough.
We defer the proof to Section 3.2. Now we can prove Lemma 3.2.

Proof of Lemma 3.2. The result follows from Lemma 3.3 once we show that
P[ S_2(v) = +1 | ∑_{u∈V_n} S_0(u) = k + 2 ] ≥ P[ S_2(v) = +1 | ∑_{u∈V_n} S_0(u) = k ]   (1)
for every integer −n ≤ k ≤ n − 2 with k ≡ n mod 2. For some fixed −n ≤ ℓ ≤ n with ℓ ≡ n mod 2, let r̄_ℓ be an arbitrary initial configuration compatible with ∑_{u∈V_n} S_0(u) = ℓ. As we have not exposed any edges until this point, due to symmetry we have
P[ S_2(v) = +1 | ∑_{u∈V_n} S_0(u) = ℓ ] = P[ S_2(v) = +1 | S_0 = r̄_ℓ ].
Now select r̄_k and r̄_{k+2} in such a way that r̄_{k+2} can be created from r̄_k by changing the initial state of one vertex from −1 to +1. Note that in any fixed graph on V_n, changing the states of some vertices from −1 to +1 in a given step can, by the monotonicity of the sgn function, only result in changes of states from −1 to +1 in the following step. Using this argument repeatedly for the first two steps implies (1), and the result follows. ▪

The remainder of this section is structured as follows. We prove Lemma 3.1 in Section 3.1. Then we prove Lemma 3.3, subject to two technical lemmas (Lemmas 3.5 and 3.6), in Section 3.2. Finally, we prove Lemmas 3.5 and 3.6 in Sections 3.3 and 3.4, respectively.
Before proceeding with the proof of Lemma 3.1, we introduce auxiliary notation which will be used in Sections 3.2 to 3.4. Starting with the proof of Lemma 3.3 in Section 3.2, we consider an initial configuration s̄_0 compatible with A_c, and condition on the event A_0 := {S_0 = s̄_0}, where S_0 consists of S_0(u) for every u ∈ V_n. We then explore the neighborhood of v, which we denote by N(v), and also condition on the event A_Γ(v) := {N(v) = Γ} for some fixed set Γ ⊆ V_n ⧵ {v}. With a slight abuse of notation we write A_0 ∩ A_Γ(v) for the intersection of these events. Throughout Sections 3.2 to 3.4, we will work on the conditional space A_0 ∩ A_Γ(v).

3.1
The initial state: proof of Lemma 3.1

Lemma 3.1 is a consequence of the following local limit theorem for a binomial random variable of the form Bin(k, q(k)), as we shall see below.
Theorem 3.4 (Local limit theorem). There exists an absolute constant C such that for every positive integer k and every function 0 < q(k) < 1, the random variable X ∼ Bin(k, q(k)) satisfies, for every integer s,
| P[X = s] − (2π k q(k)(1 − q(k)))^{−1/2} exp( −(s − k q(k))² / (2 k q(k)(1 − q(k))) ) | ≤ C / (k q(k)(1 − q(k))).
Note that Theorem 3.4 is about the distribution of the sum of k independent Bernoulli-distributed random variables whose common parameter may depend on k. It is a generalization of a classical local limit theorem for partial sums of infinite sequences of independent random variables (e.g., Theorem 4 in Chapter VII of [9]). Its proof can be found in Section 6.
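As an illustration of what Theorem 3.4 asserts (our own numerical sketch, not part of the proof), one can compare the point probabilities of a binomial with the matching Gaussian density near the mean:

```python
import math

def binom_pmf(k, q, s):
    """Exact point probability of Bin(k, q)."""
    return math.comb(k, s) * q ** s * (1.0 - q) ** (k - s)

def normal_density(mean, var, s):
    return math.exp(-(s - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# Compare Bin(k, q) point probabilities with the Gaussian density of the
# same mean and variance; the local limit theorem says the difference is
# uniformly O(1/(k*q*(1-q))).
k, q = 200, 0.5
mean, var = k * q, k * q * (1.0 - q)
err = max(abs(binom_pmf(k, q, s) - normal_density(mean, var, s))
          for s in range(60, 141))
```

Here `err` is on the order of 1/(kq(1 − q)) ≈ 0.02 times the peak density, far smaller than the point probabilities themselves.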
Proof of Lemma 3.1. Set c := ε²/20, and write ∑_{v∈V_n} S_0(v) = 2X − n, where X ∼ Bin(n, 1/2). By Theorem 3.4 with X ∼ Bin(n, 1/2), and as Var[X] = n/4, every point probability of X is at most n^{−1/2} for sufficiently large n. Hence
P[ |∑_{v∈V_n} S_0(v)| < 2c√n ] = P[ |X − n/2| < c√n ] ≤ (2c√n + 1) n^{−1/2} ≤ ε²/10 + n^{−1/2} ≤ ε/4,
where the last inequality holds for sufficiently large n. ▪

3.2
The first two steps: proof of Lemma 3.3
Recall that for the remainder of the section we work on the conditional space A_0 ∩ A_Γ(v). In particular, we consider the family {S_1(u)}_{u∈N(v)}, conditional on {S_0 = s̄_0}, where s̄_0 is compatible with A_c, and on a certain realization Γ of N(v). To derive Lemma 3.3, we will show that, uniformly over the choice of s̄_0 and of any Γ with |Γ| = (1 + o(1))d, the conditional probability that S_2(v) = +1 is at least 1 − ε/40. In particular, we will apply a second moment argument to the random variable X_2(v) := ∑_{u∈N(v)} 1(S_1(u) = +1). To this end, we obtain bounds on the expectation and the variance of X_2(v).

Lemma 3.5. There exists a constant γ > 0 (independent of c) such that for large enough n and any Γ with |Γ| ≤ 2d,
E[ X_2(v) ] ≥ (1/2 + γ c √(d/n)) |Γ|.

Lemma 3.6. Let C be as in Theorem 3.4. Then for any Γ with |Γ| ≤ 2d,
Var[ X_2(v) ] ≤ d/2 + 96 C² d²/n.
We defer the proofs of these two lemmas to Sections 3.3 and 3.4, respectively. We now prove Lemma 3.3 using them. In the following proof, as well as later, we will use λ = max{c^{−2}, K²}, where K is a large constant independent of c.
Proof of Lemma 3.3. Let λ′ := max{96C², 8} + 1, and assume λ ≥ λ′. By Lemma 3.5 we have, for any Γ as above, E[X_2(v)] ≥ (1/2 + γc√(d/n))|Γ|. Chebyshev's inequality, together with the variance bound of Lemma 3.6, implies that X_2(v) > |Γ|/2 with probability at least 1 − ε/40, when d is large enough. In particular this implies that S_2(v) = +1 with probability at least 1 − ε/40, and this holds for any such Γ. Note that (by a standard Chernoff bound) the event D(v) := {|N(v)| = d ± d^{2/3}} holds with probability 1 − exp(−Θ(d^{1/3})) ≥ 1 − ε/40, for large enough n, and that it is independent of A_0. Therefore, we obtain
P[ S_2(v) = +1 | A_0 ] ≥ 1 − ε/40 − ε/40 = 1 − ε/20. ▪

3.3
The expectation of X_2(v): proof of Lemma 3.5

Recall that throughout this section we are working on the conditional space A_0 ∩ A_Γ(v); so, to ease notation, we will drop this conditioning from the probabilities in this as well as in the next section. Fix u ∈ Γ. Consider the set V_n ⧵ {v, u} and split it into three parts V_+, V_−, and V_{++}, such that V_+ ∪ V_{++} is the set of vertices with initial state +1, while V_− is the set of vertices with initial state −1, and in addition |V_+| = |V_−|. It suffices to bound from below the probability
P[ u has more neighbors in V_+ ∪ V_{++} than in V_− ],   (3)
as the latter is the probability that S_1(u) = +1 under the assumption that s̄_0(u) = s̄_0(v) = −1.
For brevity, we let n_+(u), n_−(u), and n_{++}(u) denote the numbers of neighbors of u in V_+, V_−, and V_{++}, respectively. We will bound (3) from below, conditioning on the value of n_{++}(u), and we are going to consider several cases depending on the range of this value.
Note that n_+(u), n_−(u) ∼ Bin(|V_+|, d/n) and n_{++}(u) ∼ Bin(|V_{++}|, d/n). Thus, setting μ_{++}(u) := E[n_{++}(u)], we have μ_{++}(u) = |V_{++}| d/n. The proof hinges on μ_{++}(u) being large enough. This is achieved by requiring the average degree to be large enough; in fact, this is the only point in the proof of Lemma 3.2 where this condition is required. By the choice of λ = max{c^{−2}, K²}, for n sufficiently large we have
μ_{++}(u) = (1 + o(1)) 2cd/√n ≥ 2/c.   (4)
We write
P[ n_+(u) + n_{++}(u) > n_−(u) ] = Σ_0 + Σ_1 + Σ_2 + Σ_3,   (5)
where Σ_j is the contribution of the terms P[n_{++}(u) = k] P[n_+(u) + k > n_−(u)], with k ranging over 0 ≤ k ≤ 3 in Σ_0, over 3 < k ≤ μ_{++}(u)/2 − 2 in Σ_1, over μ_{++}(u)/2 − 2 < k ≤ 2μ_{++}(u) − 2 in Σ_2, and over k > 2μ_{++}(u) − 2 in Σ_3. We first derive a lower bound on Σ_0.
Claim 3.7. We have Σ_0 ≥ (1/2) P[ n_{++}(u) ≤ 3 ].

Proof of Claim 3.7. Recall that n_+(u), n_−(u) are identically distributed. Therefore, setting s := (1/2) P[n_+(u) = n_−(u)] > 0, we can write
P[ n_+(u) > n_−(u) ] = 1/2 − s and P[ n_+(u) + 1 > n_−(u) ] = 1/2 + s.
Also, note that for ℓ ∈ {0, 1} we have
P[ n_+(u) + 2 + ℓ > n_−(u) ] ≥ 1/2 + s and P[ n_+(u) + 1 − ℓ > n_−(u) ] ≥ 1/2 − s.
Using these, we can write, for ℓ ∈ {0, 1},
P[n_{++}(u) = 1 − ℓ] P[n_+(u) + 1 − ℓ > n_−(u)] + P[n_{++}(u) = 2 + ℓ] P[n_+(u) + 2 + ℓ > n_−(u)]
≥ (1/2)( P[n_{++}(u) = 1 − ℓ] + P[n_{++}(u) = 2 + ℓ] ) + s ( P[n_{++}(u) = 2 + ℓ] − P[n_{++}(u) = 1 − ℓ] ).
But s > 0 and, when 0 ≤ ℓ < 2 (or equivalently ℓ ∈ {0, 1}), for n large enough by (4) we have 2 + ℓ < μ_{++}(u). Thus P[n_{++}(u) = 2 + ℓ] > P[n_{++}(u) = 1 − ℓ], whereby we conclude that the second summand is positive. Hence for 0 ≤ ℓ < 2 we have
P[n_{++}(u) = 1 − ℓ] P[n_+(u) + 1 − ℓ > n_−(u)] + P[n_{++}(u) = 2 + ℓ] P[n_+(u) + 2 + ℓ > n_−(u)] ≥ (1/2)( P[n_{++}(u) = 1 − ℓ] + P[n_{++}(u) = 2 + ℓ] ).
Now, we pair up the four terms of Σ_0 (for k ∈ {0, 1, 2, 3}) using the value of ℓ. In particular, ℓ = 0 corresponds to k ∈ {1, 2}, and ℓ = 1 corresponds to k ∈ {0, 3}. Summing the last inequality over ℓ ∈ {0, 1} concludes the proof of the claim. ▪

To obtain a lower bound on Σ_1, we use the following simple fact: for any integer k ≥ 0,
P[ n_+(u) + k ≥ n_−(u) ] ≥ 1/2.   (7)
To see this, note that since P[n_+(u) + k ≥ n_−(u)] + P[n_+(u) + k < n_−(u)] = 1, the result follows if P[n_+(u) + k ≥ n_−(u)] ≥ P[n_+(u) + k < n_−(u)]. Since n_+(u) and n_−(u) are identically distributed,
P[ n_+(u) + k < n_−(u) ] = P[ n_−(u) + k < n_+(u) ].
But also, since k ≥ 0, we have P[n_−(u) + k < n_+(u)] ≤ P[n_+(u) + k ≥ n_−(u)], and (7) follows. Therefore, we have
Σ_1 ≥ (1/2) P[ 3 < n_{++}(u) ≤ μ_{++}(u)/2 − 2 ].   (8)
An analogous argument implies
Σ_3 ≥ (1/2) P[ n_{++}(u) > 2μ_{++}(u) − 2 ].   (9)
We now turn to Σ_2, and start by providing a bound on P[ n_+(u) + ℓ ≥ n_−(u) ].
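Fact (7) uses only that n_+(u) and n_−(u) are independent and identically distributed: for any such pair X, Y and any k ≥ 0, P[X + k ≥ Y] ≥ 1/2. This can be checked exactly for small binomials (illustrative code of ours; the helper names are not from the paper):

```python
import math

def binom_pmf(m, q, s):
    return math.comb(m, s) * q ** s * (1.0 - q) ** (m - s)

def prob_shifted_ge(m, q, k):
    """P[X + k >= Y] for independent X, Y ~ Bin(m, q), computed exactly."""
    return sum(binom_pmf(m, q, x) * binom_pmf(m, q, y)
               for x in range(m + 1) for y in range(m + 1)
               if x + k >= y)
```

The probability is monotone in the shift k, which is the monotonicity used throughout this section.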
Claim 3.8. When n is large enough, there exists a constant γ > 0, independent of c, such that for every ℓ with μ_{++}(u)/2 − 2 < ℓ ≤ 2μ_{++}(u) − 2,
P[ n_+(u) + ℓ ≥ n_−(u) ] ≥ 1/2 + γ ℓ/√d.

Proof. We start by considering the case when d > n/(1 + c²/162). Note that under this assumption and by (4) we have, for large enough n,
μ_{++}(u) ≥ c√n.   (10)
When n is large enough, we have μ_{++}(u)/2 − 2 ≥ μ_{++}(u)/4. Since n_+(u) and n_−(u) are identically distributed and independent, for any ℓ ≥ μ_{++}(u)/2 − 2 ≥ μ_{++}(u)/4 we have
P[ n_+(u) + ℓ ≥ n_−(u) ] ≥ P[ n_+(u) ≥ E[n_+(u)] − ℓ/2 ] P[ n_−(u) ≤ E[n_−(u)] + ℓ/2 ] ≥ (1 − e^{−2})²,
where the last step follows from the Chernoff bound. Together with (10) this implies the required bound for large enough n. The claim follows as (1 − e^{−2})² > 1/2 and
ℓ ≤ 2μ_{++}(u) ≤ √d.   (11)

Now assume d ≤ n/(1 + c²/162). Note that in this case, when n is large enough,
Var[ n_+(u) ] ≥ c² d/400.   (12)
For any positive integer ℓ we write
P[ n_+(u) + ℓ ≥ n_−(u) ] = P[ n_+(u) ≥ n_−(u) ] + ∑_{i=1}^{ℓ} P[ n_−(u) = n_+(u) + i ].
By (7), we obtain
P[ n_+(u) + ℓ ≥ n_−(u) ] ≥ 1/2 + ∑_{i=1}^{ℓ} P[ n_−(u) = n_+(u) + i ].   (13)
To bound the terms of the sum from below, we condition on the value of n_+(u). Note that both n_+(u) and n_−(u) are binomially distributed with the same parameters. By Theorem 3.4 there exists γ′ > 0 such that for any s ∈ [E[n_+(u)] ± 2Var[n_+(u)]^{1/2}] and for any i = 1, …, ℓ, where ℓ ≤ 2μ_{++}(u) ≤ 84 Var[n_+(u)]^{1/2} by (11) and (12), we have
P[ n_−(u) = s + i ] ≥ γ′/√d.
Therefore, since n_+(u), n_−(u) ∼ Bin(|V_+|, d/n), there exists γ > 0 such that
P[ n_−(u) = n_+(u) + i ] ≥ (γ′/√d) P[ n_+(u) ∈ [E[n_+(u)] ± 2Var[n_+(u)]^{1/2}] ] ≥ γ/√d,
where the last inequality follows from Chebyshev's inequality. Together with (13), for any such ℓ we have the claimed bound. ▪

Now we will use Claim 3.8 to derive a lower bound on Σ_2:
Σ_2 ≥ (1/2) P[ μ_{++}(u)/2 − 2 < n_{++}(u) ≤ 2μ_{++}(u) − 2 ] + (γ/√d) ∑_ℓ ℓ P[ n_{++}(u) = ℓ ],   (14)
where ℓ ranges over μ_{++}(u)/2 − 2 < ℓ ≤ 2μ_{++}(u) − 2. We will show that the second sum is close to μ_{++}(u), which is (1 + o(1)) 2cd/√n by (4). This will imply that the second summand is of order c√(d/n). Clearly, we have
∑_ℓ ℓ P[ n_{++}(u) = ℓ ] ≥ ( μ_{++}(u) − μ_{++}(u)^{2/3} ) P[ |n_{++}(u) − μ_{++}(u)| ≤ μ_{++}(u)^{2/3} ].
By the Chernoff bound we have, for large enough n,
P[ |n_{++}(u) − μ_{++}(u)| > μ_{++}(u)^{2/3} ] ≤ e^{−μ_{++}(u)^{1/3}/4}.
So, for large enough n,
∑_ℓ ℓ P[ n_{++}(u) = ℓ ] ≥ μ_{++}(u) − 2μ_{++}(u)^{2/3}.   (15)
Substituting this into (14) we deduce that
Σ_2 ≥ (1/2) P[ μ_{++}(u)/2 − 2 < n_{++}(u) ≤ 2μ_{++}(u) − 2 ] + (γ/√d)( μ_{++}(u) − 2μ_{++}(u)^{2/3} ).
Therefore, Claim 3.7, (8), (9), and (15) in (5) give
P[ n_+(u) + n_{++}(u) > n_−(u) ] ≥ 1/2 + (γ/√d)( μ_{++}(u) − 2μ_{++}(u)^{2/3} ).   (16)
By (3) and because n_{++}(u) is a nonnegative integer, the left-hand side of (16) is exactly the probability that S_1(u) = +1. Now, μ_{++}(u) − 2μ_{++}(u)^{2/3} > 6μ_{++}(u)/7 for n sufficiently large. So (16) yields
P[ S_1(u) = +1 ] ≥ 1/2 + (6γ/7) μ_{++}(u)/√d ≥ 1/2 + γ c √(d/n),
and summing over the vertices u ∈ Γ completes the proof of Lemma 3.5.

3.4
The variance of X_2(v): proof of Lemma 3.6

We will now bound the variance of X_2(v). Recall that, due to the conditioning, we have revealed the initial state of every vertex and the edges adjacent to v; however, we have not examined any further edges so far.
We let I_u := 1(S_1(u) = +1), for all u ∈ N(v), and write
Var[ X_2(v) ] = ∑_{u∈N(v)} Var[ I_u ] + ∑_{u≠u′∈N(v)} Cov( I_u, I_{u′} ).   (17)
Let E denote the edge set of G(n, p).
Claim 3.9. For any two distinct vertices u, u′ ∈ N(v),
Cov( I_u, I_{u′} ) = p(1 − p) δ_u δ_{u′}, where δ_w := P[ I_w = 1 | uu′ ∈ E ] − P[ I_w = 1 | uu′ ∉ E ].

Proof of Claim 3.9. Consider first two distinct vertices u, u′ ∈ N(v). We have
Cov( I_u, I_{u′} ) = P[ I_u = 1, I_{u′} = 1 ] − P[ I_u = 1 ] P[ I_{u′} = 1 ].   (18)
The first term of (18) can be rewritten, by the law of total probability, as
p P[ I_u = 1, I_{u′} = 1 | uu′ ∈ E ] + (1 − p) P[ I_u = 1, I_{u′} = 1 | uu′ ∉ E ].
We further have
P[ I_u = 1, I_{u′} = 1 | uu′ ∈ E ] = P[ I_u = 1 | uu′ ∈ E ] P[ I_{u′} = 1 | uu′ ∈ E ]
(and similarly conditional on uu′ ∉ E), because the events {I_u = 1} and {I_{u′} = 1} depend only on the edges that are incident to u and u′, respectively. This is the case, as we are working on the conditional space where the initial state of the vertices has been realized, and the states of u and u′ after the first round depend only on the edges that are incident to these two vertices. Thus, if we condition on the status of the pair uu′, that is, whether it is an edge or not, then the events {I_u = 1} and {I_{u′} = 1} are independent. Thus, the first term of (18) becomes
p P[ I_u = 1 | uu′ ∈ E ] P[ I_{u′} = 1 | uu′ ∈ E ] + (1 − p) P[ I_u = 1 | uu′ ∉ E ] P[ I_{u′} = 1 | uu′ ∉ E ].   (19)
Furthermore, by the law of total probability, the probabilities in the second term of (18) can be written as
P[ I_w = 1 ] = p P[ I_w = 1 | uu′ ∈ E ] + (1 − p) P[ I_w = 1 | uu′ ∉ E ], for w ∈ {u, u′}.
Thus, the second term of (18) becomes
( p P[ I_u = 1 | uu′ ∈ E ] + (1 − p) P[ I_u = 1 | uu′ ∉ E ] ) ( p P[ I_{u′} = 1 | uu′ ∈ E ] + (1 − p) P[ I_{u′} = 1 | uu′ ∉ E ] ).   (20)
Plugging (19) and (20) into (18) and simplifying, we obtain
Cov( I_u, I_{u′} ) = p(1 − p) δ_u δ_{u′},   (21)
as claimed. ▪

Next we will estimate |δ_u| = |P[ I_u = 1 | uu′ ∈ E ] − P[ I_u = 1 | uu′ ∉ E ]|. First observe that the event {I_u = 1} on either of the two conditional spaces (i.e., {uu′ ∈ E} or {uu′ ∉ E}) is a function of the same collection of independent Bernoulli-distributed random variables, namely the indicators of uu′′ ∈ E, for any u′′ ≠ u′. However, the functions that determine {I_u = 1} on the two conditional spaces differ only slightly.
We shall rely on the following claim.

Claim 3.10. Let (Y_i)_{i∈I∪I′} be independent Bernoulli-distributed random variables, and let a, a′ be integers with |a − a′| ≤ 2. Then
| P[ ∑_{i∈I} Y_i ≥ ∑_{i∈I′} Y_i + a ] − P[ ∑_{i∈I} Y_i ≥ ∑_{i∈I′} Y_i + a′ ] | ≤ 2 max_{k∈Z} P[ ∑_{i∈I} Y_i = k ].
The result follows by conditioning on the value of ∑_{i∈I′} Y_i, since the two events then differ on at most |a − a′| point values of ∑_{i∈I} Y_i. ▪
We will apply the above claim in our setting in order to express the event {I_u = 1}. We let I be the set of vertices in V_n ⧵ {u, u′, v} with initial state +1, and I′ the set of vertices in V_n ⧵ {u, u′, v} with initial state −1; for each i ∈ I ∪ I′, the random variable Y_i is the indicator that the edge between i and u exists. Setting a = S_0(v) − 1(S_0(u) = −1) when S_0(u′) = +1, and a = S_0(v) − 1(S_0(u) = −1) + S_0(u′) when S_0(u′) = −1, Claim 3.10 implies that |δ_u| is at most twice the maximal point probability of ∑_{i∈I} Y_i. Now, note that ∑_{i∈I′} Y_i follows a binomial distribution, as a sum of n/2 − (1 + o(1)) c√n Bernoulli trials, each having success probability d/n.
Next we will distinguish between the cases p ≤ 1 − 24C²n^{−1} and 1 − 24C²n^{−1} < p ≤ 1, starting with the former. The Local Limit Theorem (Theorem 3.4) implies that every point probability of ∑_{i∈I} Y_i is at most √6 C/√(d(1 − p)), and thus
|δ_u| ≤ 2√6 C/√(d(1 − p)).   (22)
An analogous argument implies
|δ_{u′}| ≤ 2√6 C/√(d(1 − p)).   (23)
Thus, (21) and (22) in (17) yield
Cov( I_u, I_{u′} ) ≤ p(1 − p) · 24C²/(d(1 − p)) = 24C² p/d ≤ 24C²/n,
uniformly for all pairs u, u′ ∈ N(v). Since |N(v)| < 2d for n sufficiently large, and the variance of an indicator random variable is at most 1/4, we then deduce that
Var[ X_2(v) ] ≤ 2d · (1/4) + (2d)² · 24C²/n ≤ d/2 + 96C² d²/n.
Now, when p > 1 − 24C²n^{−1}, since the difference of any two probabilities is at most 1, we have by (17) that
Var[ X_2(v) ] ≤ d/2 + (2d)² p(1 − p) ≤ d/2 + 96C² d²/n.
This completes the proof of Lemma 3.6.

THE LAST TWO ROUNDS
In the following lemma we show that if one starts the majority dynamics process from any configuration in which the number of −1s is at most δn, for some δ > 0 small enough, then in two subsequent rounds unanimity will be achieved.

Lemma 4.1. With probability 1 − o(1) (as n → ∞), G(n, p) is such that the following holds: for every partition V_n = P_0 ∪ N_0 with |N_0| ≤ n/10, if majority dynamics starts with all elements of P_0 in state +1 and all elements of N_0 in state −1, then after two rounds every vertex has state +1.

Proof of Lemma 4.1. Let P_i and N_i denote the sets of vertices in state +1 and −1, respectively, after i rounds. Consider a partition of V_n into two sets P_0, N_0 such that |P_0| ≥ 9n/10. Suppose that majority dynamics starts with all elements of P_0 in state +1 and all elements of N_0 in state −1. Note that until this point we have only fixed the states of the vertices in the graph, but we have not exposed any edges so far. We will show that with probability 1 − o(1) we have |N_1| < d/10. In order to achieve this, we bound the probability that every vertex in a given set of size d/10 has state −1 after the first step, and apply a union bound.
For a subset of vertices W, we denote by {W → N_1} the event that after the first round all vertices in W have state −1.
We start by providing an upper bound on P[ W → N_1 ] for each W ⊂ V_n with |W| = d/10. For a vertex v ∈ V_n and a subset of vertices S, we denote by deg_S(v) the degree of v inside S; this random variable is binomially distributed with parameters |S| and d/n. Note that if {W → N_1} occurs, then for every v ∈ W we have deg_{P_0}(v) ≤ deg_{N_0}(v). Thus, we have the following upper bound:
P[ W → N_1 ] ≤ P[ deg_{P_0}(v) ≤ deg_{N_0}(v) for every v ∈ W ].
The latter event is the intersection of independent events. For each one of them, we have
P[ deg_{P_0}(v) ≤ deg_{N_0}(v) ] ≤ P[ deg_{P_0}(v) ≤ d/4 ] + P[ deg_{N_0}(v) ≥ d/4 ].
By the Chernoff bound, the first probability is e^{−Ω(d)}. On the other hand, E[deg_{N_0}(v)] ≤ d/10, and the Chernoff bound again implies that P[ deg_{N_0}(v) ≥ d/4 ] = e^{−Ω(d)}, whereby there exists c_1 > 0 such that, for n sufficiently large, we have
P[ W → N_1 ] ≤ e^{−c_1 d |W|} = e^{−c_1 d²/10}.
For such n, the union bound implies that the probability that there exists a set W of size d/10 such that {W → N_1} holds is at most n^{d/10} e^{−c_1 d²/10}. Summing over all partitions of V_n, whose number can be crudely bounded by 2^n, the union bound implies that if λ is sufficiently large, then the probability that there exists a subset W of size d/10 which becomes negative after one step is o(1). Note that this is the only part of the proof which uses the condition on λ. For the subsequent round, note that with probability 1 − o(1) all vertices of G(n, p) have degree at least d/2. So if |N_1| < d/10, then after the execution of the first step every vertex has the majority of its neighbors in state +1. Thus, the next round leads to unanimity. ▪

PROOF OF THEOREM 1.1

Proof of Theorem 1.1. By Lemma 3.1, we have |∑_{v∈V_n} S_0(v)| ≥ 2c√n with probability at least 1 − ε/4, provided that n is sufficiently large. Conditional on this event, with probability 1/2 we have ∑_{v∈V_n} S_0(v) ≥ 2c√n. Let us assume that this event is realized; for the complementary case the proof is analogous.
Let P_2 := {v : S_2(v) = +1}, that is, P_2 is the set of vertices whose state is +1 after the first two rounds, and let N_2 be the complement of this set. Lemma 3.3 implies that E[|N_2|] ≤ (ε/20) n. So, by Markov's inequality, we have
P[ |N_2| > n/10 ] ≤ ε/2.
Finally, by Lemma 4.1, if n is sufficiently large, then with probability at least 1 − ε/4 the random graph G(n, p) is such that after two more rounds unanimity will be reached. Thus, the union bound implies that with probability at least 1 − ε unanimity is reached after four rounds, and this concludes the proof of Theorem 1.1. ▪
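The last-two-rounds phenomenon of Lemma 4.1 is easy to observe in simulation (a toy experiment of ours; the parameters are far from the asymptotic regime of the proof): generate G(n, p) with p well above n^{−1/2}, set a small set of vertices to −1, and run two rounds.

```python
import random

def two_rounds_unanimous(n, p, minus, seed=0):
    """Generate G(n, p), start with the vertices in `minus` at -1 and all
    others at +1, run two rounds of majority dynamics (ties retain the
    current state), and report whether the final state is all +1."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u in range(n):
        for w in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(w)
                adj[w].append(u)
    state = [(-1 if v in minus else 1) for v in range(n)]
    for _ in range(2):
        sums = [sum(state[u] for u in adj[v]) for v in range(n)]
        state = [(1 if s > 0 else -1 if s < 0 else state[v])
                 for v, s in enumerate(sums)]
    return all(x == 1 for x in state)

# With n = 400, p = 0.2, and 40 vertices initially at -1, two rounds
# should suffice to reach all-(+1) with overwhelming probability.
```

This only illustrates the absorption step; the union-bound argument above is what makes the statement uniform over all starting partitions.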

LOCAL LIMIT THEOREM: PROOF OF THEOREM 3.4
We will use the following results in order to prove Theorem 3.4. Let R + denote the set of positive real numbers.
Theorem 6.1 (Theorem 6 in Chapter I of [9]). Let the random variable X have a lattice distribution, with possible values of the form a + kh for some a ∈ R, h ∈ R_+, and k ∈ Z. Then
P[ X = a + kh ] = (h/2π) ∫_{−π/h}^{π/h} f(t) e^{−it(a+kh)} dt,
where f(t) is the characteristic function of X, that is, f(t) = E[e^{itX}]. In particular, by taking a = 0, h = 1 in Theorem 6.1, we have for every integer-valued random variable X and k ∈ Z that
P[ X = k ] = (1/2π) ∫_{−π}^{π} f(t) e^{−itk} dt,
where f(t) is the characteristic function of X.
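The inversion formula with a = 0, h = 1 can be verified numerically for a binomial random variable, whose characteristic function is f(t) = (1 − q + q e^{it})^m (an illustrative check of ours, not part of the proof):

```python
import cmath
import math

def binom_pmf_via_inversion(m, q, k, steps=4096):
    """Recover P[X = k] for X ~ Bin(m, q) from the inversion formula
    P[X = k] = (1/(2*pi)) * integral over [-pi, pi] of f(t) e^{-ikt} dt,
    with f(t) = (1 - q + q*e^{it})^m, via a midpoint-rule quadrature."""
    dt = 2.0 * math.pi / steps
    total = 0.0
    for j in range(steps):
        t = -math.pi + (j + 0.5) * dt
        f = (1.0 - q + q * cmath.exp(1j * t)) ** m
        total += (f * cmath.exp(-1j * k * t)).real * dt
    return total / (2.0 * math.pi)
```

Since the integrand is a trigonometric polynomial of degree at most m, the midpoint rule recovers the exact point probabilities up to floating-point error.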
We also require a version of the Berry–Esseen theorem.

Lemma 6.2 (see, e.g., Lemma 1 in Chapter V of [9]). Let X_1, …, X_n be independent random variables with E[|X_j|³] < ∞ for j = 1, …, n. In addition, let X = ∑_{j=1}^n X_j and σ² = Var[X]. Denote by X̄ the normalized version of X, that is, X̄ = (X − E[X])/σ. Then
sup_{x∈R} | P[ X̄ ≤ x ] − Φ(x) | ≤ C_0 σ^{−3} ∑_{j=1}^n E[ |X_j − E[X_j]|³ ],
for an absolute constant C_0, where Φ denotes the standard normal distribution function.

The proof of Theorem 3.4 now follows by combining the inversion formula of Theorem 6.1 with Lemma 6.2 and standard bounds on the characteristic function of the binomial distribution.

DISCUSSION
In this paper we analyze the evolution of majority dynamics on G(n, p) with p = d/n for d = d(n) ≥ λn^{1/2}. Our main result is the proof of a conjecture of Benjamini et al. [2], stating that majority dynamics on such a random graph becomes unanimous in at most four steps with probability arbitrarily close to 1, provided that the initial state is selected uniformly at random and λ and n are sufficiently large.
Of course, a natural question is how majority dynamics evolves on a random graph of smaller (average) degree d. Benjamini et al. made the following general conjecture.