Generic Classification and Asymptotic Enumeration of Dope Matrices

For a complex polynomial $P$ of degree $n$ and an $m$-tuple of distinct complex numbers $\Lambda=(\lambda_1,\ldots,\lambda_m)$, the dope matrix $D_P(\Lambda)$ is defined as the $m \times (n+1)$ matrix $(c_{ij})$ with $c_{ij} =1$ if $P^{(j)}(\lambda_i)=0$ and $c_{ij}=0$ otherwise. We classify the set of dope matrices when the entries of $\Lambda$ are algebraically independent, resolving a conjecture of Alon, Kravitz, and O'Bryant. We also provide asymptotic upper and lower bounds on the total number of $m \times (n+1)$ dope matrices. For $m$ much smaller than $n$, these bounds give an asymptotic estimate of the logarithm of the number of $m \times (n+1)$ dope matrices.


Introduction
Let $P \in \mathbb{C}[x]$ be a polynomial of degree $n$, and let $\Lambda = (\lambda_1, \ldots, \lambda_m)$ be an $m$-tuple of distinct complex numbers. Following Alon, Kravitz, and O'Bryant [1], we define the dope matrix of $P$ with respect to $\Lambda$ as the $m \times (n+1)$ matrix
$$D_P(\Lambda) := (c_{ij})_{i \in [m],\, j \in [0,n]}, \qquad \text{where } c_{ij} = \begin{cases} 1 & \text{if } P^{(j)}(\lambda_i) = 0, \\ 0 & \text{otherwise.} \end{cases}$$
Hence, the dope matrix tracks the pattern of common zeroes between $P$ and its derivatives, that is, the set of ordered pairs $(i, j)$ for which we have $P^{(j)}(\lambda_i) = 0$. A matrix is called dope if it is of the form $D_P(\Lambda)$ for some $P$ and $\Lambda$. Denote by $D^m_n$ the set of $m \times (n+1)$ dope matrices.
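As a quick illustration of the definition, the sketch below computes $D_P(\Lambda)$ for polynomials with exact (integer or rational) coefficients; the helper names are ours, not the paper's.

```python
def derivative(coeffs):
    """Differentiate a polynomial given by coefficients [a_0, a_1, ..., a_n]."""
    return [k * coeffs[k] for k in range(1, len(coeffs))]

def evaluate(coeffs, lam):
    """Evaluate the polynomial at lam via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * lam + c
    return acc

def dope_matrix(coeffs, Lam):
    """D_P(Lambda): entry (i, j) is 1 exactly when P^(j)(lambda_i) = 0."""
    n = len(coeffs) - 1
    derivs = [coeffs]
    for _ in range(n):
        derivs.append(derivative(derivs[-1]))  # P, P', ..., P^(n): n+1 columns
    return [[1 if evaluate(d, lam) == 0 else 0 for d in derivs] for lam in Lam]

# P(x) = x^3 - 3x: P(0) = 0 and P''(0) = 0, while P'(0) = -3 and P''' = 6.
print(dope_matrix([0, -3, 0, 1], (0,)))  # [[1, 0, 1, 0]]
```

Exact arithmetic matters here: a floating-point zero test would misclassify near-roots, whereas integer evaluation decides $P^{(j)}(\lambda_i) = 0$ unambiguously.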
Call $\Lambda$ affinely algebraically independent if it has no nontrivial affine algebraic dependences. Using $(\ast)$, we can show (see Theorem 1.7) that $D_n(\Lambda)$ is the same for any affinely algebraically independent $m$-tuple $\Lambda$. We define $D^{\mathrm{gen}(m)}_n := D_n(\Lambda)$ for any choice of affinely algebraically independent $\Lambda$. (The notation $D^{\mathrm{gen}(m)}_n$ is motivated by the fact that a generic $m$-tuple $\Lambda$ is affinely algebraically independent, which also makes $D^{\mathrm{gen}(m)}_n$ a natural object to consider.) In a related direction, Alon, Kravitz, and O'Bryant [1, Theorem 6] bound $|D^m_n|$ by applying a theorem of Rónyai, Babai, and Ganapathy [5, Theorem 1.1] regarding the number of zero-patterns of general sequences of polynomials. We improve this bound by directly applying the methods of [5] to the question at hand. We also find a lower bound on $|D^m_n|$. For $m \le \frac{n^2+n}{2}$, we have the following:

Standard asymptotic notation, such as the $o(1)$ in the above theorem, is used throughout this paper and is explained in Section 1.1. We also freely make statements regarding the asymptotic behavior of $\binom{mn}{n}$, which follow from the asymptotic estimates of $\log\binom{mn}{n}$ in Section 1.1.
The lower bound in Theorem 1.3 comes from constructing a large set of elements of $D^m_n$. One possible construction is to consider only the case when $\Lambda$ is generic, which gives a lower bound of $\log |D^{\mathrm{gen}(m)}_n|$. Another construction comes from taking a "generic" polynomial $P$: consider a polynomial $P$ such that no two derivatives of $P$ have a common root and no derivative of $P$ has a root of multiplicity more than 1. If we construct an element of $D^m_n$ corresponding to this polynomial $P$ one row at a time, we have around $n$ choices for each row. Hence, this construction gives a lower bound of around $\log(n^m)$.
If $m = o(n)$, then only the $\log\binom{mn}{n}$ term of the lower and upper bounds matters asymptotically. In this regime, the bounds essentially match (see Theorem 3.1 for a more precise statement) and $D^{\mathrm{gen}(m)}_n$ accounts for the lower bound. When $m = \omega(n)$, only the $\log(n^m)$ term in the lower bound matters, and hence the generic-$P$ construction accounts for the lower bound. The construction for the $m = \Theta(n)$ case is a combination of the generic-$\Lambda$ construction and the generic-$P$ construction.
In Section 4, we focus on the $m > \frac{n^2+n}{2}$ regime. As $P^{(j)}$ has at most $n - j$ roots for each $j$, any dope matrix can have at most $\frac{n^2+n}{2}$ nonzero rows. Hence, the growth rate of $|D^m_n|$ in $m$ decreases after $m$ passes the threshold of $\frac{n^2+n}{2}$. Beyond this threshold, we have the following results: when $n$ is fixed, $|D^m_n|$ is a polynomial in $m$, and we can compute the two leading terms:

Preliminaries and Notation
We now work toward checking that $D^{\mathrm{gen}(m)}_n$ is well-defined.

Lemma 1.5. If $\Lambda$ is an affinely algebraically independent $m$-tuple of complex numbers, then there exist $a, b \in \mathbb{C}$ such that $a + b\Lambda$ is algebraically independent.
Proof. Affine algebraic independence means that for all polynomials $P \in \mathbb{Q}[x_1, \ldots, x_m] \setminus \{0\}$, the polynomial $P(a + b\Lambda) \in \mathbb{C}[a, b]$ is not identically zero. We want to show that there exist $a, b \in \mathbb{C}$ such that $P(a + b\Lambda) \ne 0$ for all such $P$; let $S$ denote the countable set of polynomials $P(a + b\Lambda) \in \mathbb{C}[a, b]$ arising this way. View each $Q \in S$ as a polynomial in $b$ with coefficients being polynomials in $a$. For some $k$, the coefficient of $b^k$ in $Q$ is some nonzero polynomial $R$ in $a$. For any $a$ with $R(a) \ne 0$, we have that $Q(a, b)$ has finitely many roots as a polynomial in $b$. For each $Q \in S$, pick some such polynomial $R$, and let $A_Q$ be the set of roots of $R$. Since each $A_Q$ is finite, the union $\bigcup_{Q \in S} A_Q$ is countable, and hence we can pick some $a_0 \in \mathbb{C}$ that is not in $A_Q$ for any $Q \in S$.
We claim that for some $b_0 \in \mathbb{C}$, we have $Q(a_0, b_0) \ne 0$ for all $Q \in S$. Let $B_Q$ be the set of roots of $Q(a_0, b)$ viewed as a polynomial in $b$. By the definition of $a_0$, each $B_Q$ is finite, so $\bigcup_{Q \in S} B_Q$ is countable. Hence, there is some $b_0 \in \mathbb{C}$ that is not in $B_Q$ for any $Q \in S$, as desired.
We also recall a fact from [4] allowing us to equate $D_n(\Lambda)$ and $D_n(\Lambda')$ when we have a linear map or $\mathbb{Q}$-automorphism of $\mathbb{C}$ sending $\Lambda$ to $\Lambda'$.
The idea of the proof is, for any polynomial $P(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0$, to consider an associated polynomial $P_{a+b\varphi}$ with the property that $D_P(\Lambda) = D_{P_{a+b\varphi}}(a + b\Lambda)$ for all $P$. A more complete explanation can be found in [4].
We are now ready to check that D gen(m) n is well-defined.
Theorem 1.7. If $\Lambda_1$ and $\Lambda_2$ are affinely algebraically independent $m$-tuples of complex numbers, then $D_n(\Lambda_1) = D_n(\Lambda_2)$.

Proof. By Lemma 1.5, there are constants $a_1, b_1, a_2, b_2 \in \mathbb{C}$ such that $a_1 + b_1\Lambda_1$ and $a_2 + b_2\Lambda_2$ are algebraically independent. Now, there exists some $\mathbb{Q}$-automorphism $\varphi$ of $\mathbb{C}$ sending $a_1 + b_1\Lambda_1$ to $a_2 + b_2\Lambda_2$ (see the introduction of [1] for further discussion of this fact). Hence, by the fact recalled above, $D_n(\Lambda_1) = D_n(a_1 + b_1\Lambda_1) = D_n(a_2 + b_2\Lambda_2) = D_n(\Lambda_2)$, as desired.
To state our results on asymptotic dope matrix counts, we will use standard asymptotic notation. For functions $f, g: \mathbb{Z} \to \mathbb{R}$ of a variable $t$, we compare the growth rates of $f$ and $g$ as follows:
1. $f = O(g)$ if, for some constant $K$, we have $|f| \le K|g|$ for all sufficiently large $t$;
2. $f = \Theta(g)$ if, for some constants $K, K'$, we have $K|g| \le |f| \le K'|g|$ for all sufficiently large $t$;
3. $f \sim g$ if, as $t$ approaches $\infty$, the ratio $f/g$ approaches 1;
4. $f = o(g)$ if, for any constant $\varepsilon > 0$, we have $|f| \le \varepsilon|g|$ for all sufficiently large $t$;
5. $f = \omega(g)$ if, for any constant $M$, we have $|f| \ge M|g|$ for all sufficiently large $t$.
We will frequently need to analyze the asymptotic behavior of $n!$ and $\log\binom{mn}{n}$. We will make use of the following inequalities:
1. For any positive integer $n$, we have $n\log n - n \le \log n! \le n\log n$.
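Inequality 1 is easy to sanity-check numerically (a sketch, not part of the paper's argument; $\log n!$ is computed as $\log\Gamma(n+1)$):

```python
import math

# Check: n log n - n <= log n! <= n log n, with log n! computed as lgamma(n+1).
for n in (1, 5, 50, 5000):
    log_fact = math.lgamma(n + 1)
    assert n * math.log(n) - n <= log_fact <= n * math.log(n)
print("inequalities verified")
```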

Generic Dope Matrices
In this section we will prove Theorem 1.1. We first outline the proof of the $m = 2$ case from [1, Section 4], as our proof uses many ideas from it. It makes use of the following result (slightly rephrased) of Gessel and Viennot.
Outline of proof of Theorem 1.1 when $m = 2$. Take $\Lambda = (0, 1)$, and let $P(x) = a_n x^n + \cdots + a_0$. For a $2 \times (n+1)$ matrix $M$, we can view the equation $D_P(\Lambda) = M$ as a system of linear equations in $a_n, \ldots, a_0$. When $M$ does not satisfy the property in Theorem 1.1, for some $k$ it has at least $k+1$ ones in the last $k+1$ columns. Take the minimum $k$ with the preceding property and look at the last $k+1$ columns. The resulting linear equations are linearly independent by Theorem 2.1.
In these linear equations, only the $k+1$ variables $a_n, a_{n-1}, \ldots, a_{n-k}$ have nonzero coefficients, so we must have $a_n = a_{n-1} = \cdots = a_{n-k} = 0$. However, if $a_n = 0$, then $P$ cannot be a degree-$n$ polynomial, which gives a contradiction.
To show that any matrix $M$ with at most $k$ ones in the last $k+1$ columns (for every $k$) is attainable, we add all-ones columns to the left end of the matrix until the number of ones is one smaller than the number of columns. Say we added $c$ columns, and let $M'$ be the matrix with the $c$ columns added. Take the resulting system of equations and append the equation $a_{n+c} = 1$, ensuring that our desired polynomial has degree exactly $n + c$.
By Theorem 2.1, the resulting equations are linearly independent, and hence have a unique solution.
Letting $P_0$ be the polynomial corresponding to this solution, we have that $P_0$ has degree exactly $n + c$, since $a_{n+c} = 1$. Hence, $D_{P_0}(\Lambda)$ must have at most $n + c$ ones, and is hence exactly $M'$.

We now introduce notation that will be helpful in our proof of Theorem 1.1.

Definition 2.2. Call a matrix safe if the last $k+1$ columns contain at most $k$ nonzero entries for all $k$. Similarly, call a matrix almost-safe if the last $k+1$ columns contain at most $k+1$ nonzero entries for all $k$.
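Both conditions in Definition 2.2 are mechanical to test; the sketch below (helper names are ours) checks them for a matrix given as a list of 0/1 rows.

```python
def tail_ones(M, k):
    """Number of nonzero entries in the last k+1 columns of M."""
    w = len(M[0])
    return sum(row[w - (k + 1):].count(1) for row in M)

def is_safe(M):
    """At most k nonzero entries in the last k+1 columns, for every k."""
    return all(tail_ones(M, k) <= k for k in range(len(M[0])))

def is_almost_safe(M):
    """At most k+1 nonzero entries in the last k+1 columns, for every k."""
    return all(tail_ones(M, k) <= k + 1 for k in range(len(M[0])))

M = [[1, 1, 0, 0],
     [0, 0, 1, 0]]
print(is_safe(M), is_almost_safe(M))  # True True
```

Note that every safe matrix is almost-safe, but not conversely: `[[1, 0], [1, 0]]` is almost-safe yet fails the safe condition at $k = 1$.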
Throughout this section, we will let $P(x) = a_n x^n + \cdots + a_0$ denote a polynomial of degree at most $n$. Note that the set of such $P$ forms a complex vector space of dimension $n+1$ with basis given by $1, x, \ldots, x^n$. Hence, for a fixed integer $s$ and complex number $t$, we can view $P^{(s)}(t)$ as a linear form in $P$:
$$P^{(s)}(t) = \sum_{k=s}^{n} \frac{k!}{(k-s)!}\, a_k\, t^{k-s}.$$
We will use $\Lambda = (\lambda_1, \ldots, \lambda_m)$ and $S = (S_1, \ldots, S_m)$ to denote arbitrary $m$-tuples of complex numbers and $m$-tuples of subsets of $\{0, 1, \ldots, n\}$, respectively. Denote by $P_S(\Lambda)$ the set of linear forms $P^{(s)}(\lambda_i)$ for $i \in [m]$ and $s \in S_i$, and denote by $M(S)$ the matrix $(c_{ij})_{i \in [m],\, j \in [0,n]}$, where $c_{ij}$ is 1 if $j \in S_i$ and 0 otherwise.

Reduction to the Key Lemma
The key lemma is the following linear independence property.

Lemma 2.3. If $M(S)$ is almost-safe and $\Lambda$ is generic, then $P_S(\Lambda)$ is linearly independent.
Before proving this lemma, we will show how it implies Theorem 1.1. The proof is similar to the proof of the $m = 2$ case in [1], with Lemma 2.3 replacing the result of Gessel and Viennot.
Proof of Theorem 1.1. First, we show that being safe is necessary. Let $\Lambda$ be a generic $m$-tuple. Suppose for some $S$ such that $M(S)$ is not safe, there exists a polynomial $P$ such that $D_P(\Lambda) = M(S)$.
Take the smallest $k$ such that the last $k+1$ columns contain at least $k+1$ nonzero entries. The linear forms corresponding to these entries are linear functions of $a_n, a_{n-1}, \ldots, a_{n-k}$ and are linearly independent by Lemma 2.3. Hence, $a_n = 0$, contradicting the assumption that $P$ has degree $n$.
To show that the condition is sufficient, given any safe matrix $M$, we prepend columns whose top two entries are 1 and whose remaining entries are 0, so that the resulting matrix $M'$ has exactly $n + c$ nonzero entries, where $c$ is the number of prepended columns. We have that $M'$ is safe.
Let $S$ be such that $M(S) = M'$. By Lemma 2.3, $P_S(\Lambda)$ consists of $n+c$ independent linear forms. Appending the linear form $P^{(n+c)}(\lambda_1)$ corresponds to adding a one to the last column of $M'$, after which $M'$ remains almost-safe, so by Lemma 2.3, $P_S(\Lambda) \cup \{P^{(n+c)}(\lambda_1)\}$ is a set of $n+c+1$ independent linear forms. Hence, there is some nonzero polynomial $P_0$ with degree at most $n+c$ such that $P_0^{(n+c)}(\lambda_1) = 1$ and, for all $i \in [m]$ and $s \in S_i$, $P_0^{(s)}(\lambda_i) = 0$. Since $P_0^{(n+c)}(\lambda_1) = 1$, we have that $P_0$ has degree exactly $n + c$.
We next show that $D_{P_0}(\Lambda)$ has no extra ones. Suppose that $P_0^{(s')}(\lambda_i) = 0$ for some $i$ and some $s' \notin S_i$, and let $S'$ be obtained from $S$ by adding $s'$ to $S_i$; then $P_0$ is zero on every element of $P_{S'}(\Lambda)$. However, $M(S')$ is almost-safe (it is a safe matrix with one entry added), so $P_{S'}(\Lambda)$ contains $n+c+1$ linearly independent linear forms by Lemma 2.3. This forces $P_0$ to be the zero polynomial, which is a contradiction. Thus, $D_{P_0}(\Lambda)$ is exactly $M'$, and hence $D_{P_0^{(c)}}(\Lambda)$ is the desired matrix.

Demonstration of Proof Technique
As the proof of Lemma 2.3 is fairly technical, we first demonstrate the argument on a small almost-safe matrix.
Consider the $S$ corresponding to the following $3 \times 6$ matrix, which is almost-safe:
$$\begin{pmatrix} 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}.$$
We will show that if $\Lambda = (0, 1, t)$ for some transcendental $t$, then $P_S(\Lambda)$ is linearly independent.
We want to show that for generic $t$, the linear forms $P(0), P'(0), P^{(4)}(0), P^{(3)}(1), P(t), P^{(4)}(t)$ are linearly independent. It suffices to show that the result holds for at least one transcendental $t$, as we can then take a $\mathbb{Q}$-automorphism mapping $t$ to any transcendental number of our choosing. The key idea is to check the special case of $t$ very close to 0.
At $t = 0$, the combined forms correspond to the $2 \times 6$ almost-safe matrix $\begin{pmatrix} 1 & 1 & 1 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix}$, and hence are linearly independent by the $m = 2$ case of the theorem. By the fact that linear independence is equivalent to a nonzero determinant, we can extend the linear independence of $P(0)$, $P'(0)$, $P^{(4)}(0)$, $P^{(3)}(1)$, and the divided differences $\frac{P(t) - P(0) - tP'(0)}{t^2}$ and $\frac{P^{(4)}(t) - P^{(4)}(0)}{t}$ (which at $t = 0$ become multiples of $P''(0)$ and $P^{(5)}(0)$) from $t = 0$ to all $t$ in some neighborhood of 0. Hence, they are linearly independent for some transcendental $t$, implying the desired result.
One can view the argument above as showing that we can combine the roots at 0 and t, and hence combine the corresponding rows of the matrix.
The proof in the general case consists of two parts. First, we prove a claim generalizing the choice of linear combinations in the above proof. We then generalize the $t \to 0$ argument, allowing us to combine the $\lambda_i$'s under certain conditions. Once this is done, Lemma 2.3 follows from repeatedly combining the $\lambda_i$'s.

Derivatives as Linear Combinations
We first use Theorem 2.1 to find linear combinations that limit to derivatives (see Lemma 2.4): for every polynomial $P$, there exists a polynomial $Q \in \mathbb{C}[\lambda, \varepsilon]$ such that $(\star)$ holds for all $\lambda, \varepsilon \in \mathbb{C}$.
We will view the $a_i$ as variables. We can write $P^{(s)}$ evaluated at $\lambda$ and at $\lambda + \varepsilon$ in the basis of the $a_i$. Hence, the right-hand side of $(\star)$ contains only linear combinations of the $a_t$. For any choice of the $c_{s,2}$'s, we can pick $Q$ to make the coefficient of $\varepsilon^{t-d} a_t$ zero for $t > d$, and we can pick $c_{t,1}$ to make the coefficient of $\varepsilon^{t-d} a_t$ zero for $t \in S_1$. Hence, it suffices to pick the $c_{s,2}$'s such that, for all $t \in [0, d] \setminus S_1$, the coefficient of $\varepsilon^{t-d} a_t$ in the right-hand side of $(\star)$ matches the corresponding coefficient in the left-hand side.
Let $G = [0, d] \setminus S_1$ and $H = S_2$. For each $g \in G$, we can compute the coefficient of $\varepsilon^{g-d} a_g$ in each term of the right-hand side of $(\star)$. The column condition supplies the required nonvanishing, and by Theorem 2.1 we can find constants $c_{h,2}$ matching the coefficients for all $g \in G$, as desired.
This follows from the fact that, in a given complex vector space, the linearly independent $\ell$-tuples of vectors form an open set.
We now prove the claim allowing us to "combine" $\lambda_1$ and $\lambda_m$. This will be the inductive step in our proof of Lemma 2.3.
Consider the row vector obtained as follows:
1. Add the first and $m$th rows of $M(S)$, obtaining some vector $v$.
2. Index the components of $v$ by $0, 1, \ldots, n$. If, for some $j$, the $j$th component of $v$ is at least 2, subtract 1 from the $j$th component and add 1 to the $(j+1)$th component.
3. Repeat the previous step until it can no longer be applied.
This aligns with the intuition of combining the roots of row 1 and row $m$, since we expect a double root of $P^{(j)}$ to become a root of $P^{(j)}$ and a root of $(P^{(j)})' = P^{(j+1)}$. We claim that, no matter which choices are made during step 2, this process always results in the row vector corresponding to $S_1'$.
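The carrying process above is easy to simulate. The sketch below (our own helper) resolves carries left to right; the claim in the text is that any order of carries produces the same final vector.

```python
def combine_rows(r1, rm):
    """Add two 0/1 rows, then repeatedly carry: whenever v[j] >= 2,
    move one unit from column j to column j+1, mirroring how a double
    root of P^(j) becomes a root of P^(j) and a root of P^(j+1)."""
    v = [a + b for a, b in zip(r1, rm)]
    j = 0
    while j < len(v):
        if v[j] >= 2:
            v[j] -= 1
            v[j + 1] += 1  # almost-safety keeps the carry inside the row
        else:
            j += 1
    return v

print(combine_rows([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 0]))
# [1, 1, 1, 0, 1, 1]
```

The printed example combines the first and third rows of the $3 \times 6$ demonstration matrix: the double entries carry rightward, turning ones in columns 0 and 4 of both rows into the run pattern $1,1,1,0,1,1$.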
Let $v_i$ denote the vector obtained in step 1, and let $v_f$ denote the vector obtained at the end of the process. We use $v$ to denote an arbitrary vector at any point in the process.
For each $s$, consider the quantity $C_s(v) := \max_{t \le s} \left( v \cdot \mathbf{1}_{[t,s]} - (s - t + 1) \right)$, where $\mathbf{1}_{[t,s]}$ is the row vector that is 1 on the columns $[t, s]$ and 0 elsewhere. During any application of step 2 above, $C_s(v)$ is unchanged if $j < s$, is decreased by 1 if $j = s$, and is unchanged if $j > s$. In the $j = s$ case, taking $t = s$, we must have $C_s(v) \ge 0$ after the application of step 2. Hence, $C_s(v_i) \ge 0$ if and only if the $s$th component of $v_f$ is 1; in particular, $v_f$ does not depend on the choices made during the process. Since $M(S)$ is almost-safe, we have $C_n(v_i) \le 0$. Hence, we must also have $C_n(v_f) \le 0$; in particular, the $n$th component of $v_f$ cannot be more than 1. By definition of the process, the $j$th component of $v_f$ cannot be more than 1 for any $j < n$. Thus, all components of $v_f$ are either 0 or 1.
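The monovariant $C_s$ can be computed directly, which makes its behavior under a single carry easy to check on examples (a sketch with our own helper):

```python
def C(v, s):
    """C_s(v) = max over t <= s of (sum of v on columns [t, s]) - (s - t + 1)."""
    return max(sum(v[t:s + 1]) - (s - t + 1) for t in range(s + 1))

v = [2, 1, 0, 0]        # row sum after step 1
w = [1, 2, 0, 0]        # v after one carry at j = 0
print([C(v, s) for s in range(4)])  # [1, 1, 0, -1]
print([C(w, s) for s in range(4)])  # [0, 1, 0, -1]
```

Only $C_0$ changed (it dropped by exactly 1, and stayed $\ge 0$), while $C_1, C_2, C_3$ are untouched, matching the case analysis in the text.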
We have the alternate characterization $S_1' = \{ s : C_s(v_i) \ge 0 \}$. As a corollary, since each step of the process preserves the sum of the entries of $v$, we have $|S_1'| = |S_1| + |S_m|$, and hence $P_S(\Lambda)$ and $P_{S'}(\Lambda')$ have the same number of elements. We also have that if $M(S)$ is almost-safe, then so is $M(S')$, as all steps of the above process preserve the property that the sum of the entries of the last $k+1$ columns is at most $k+1$ for all $k$.
We now prove (b). Recall that we use $P$ to denote a general polynomial of degree at most $n$. We claim that for each $s' \in S_1'$, there exist a polynomial $Q_{s'}$, independent of $\varepsilon$ but possibly dependent on $\Lambda$ and $P$, and constants $c_{s,i}$, possibly dependent on $\varepsilon$, satisfying the analogue of $(\star)$. If $s' \in S_1$, the result is clear. Otherwise, take the largest $t$ for which Lemma 2.4 can be applied. By our assumptions, $P_{S'}(\Lambda')$, which is this collection of forms evaluated at $\varepsilon = 0$, is linearly independent. Every linear form in the collection is continuous in $\varepsilon$, and hence the collection is linearly independent for some transcendental $\varepsilon \ne 0$ by Proposition 2.5. Its span for this $\varepsilon$ is a subspace of the span of $P_S(\Lambda)$ and has dimension $|P_{S'}(\Lambda')| = |P_S(\Lambda)|$.
We can now repeatedly combine elements to prove that the linear forms associated with any almost-safe matrix are linearly independent.
Proof of Lemma 2.3. We proceed by induction on $m$. The base case $m = 1$ is trivial, and the induction step follows immediately from Lemma 2.6.

Enumeration
The enumeration of generic dope matrices follows from a direct application of the cycle lemma.

Definition 2.7. We say that a sequence of ones and zeroes is $t$-dominating if, for every $\ell > 0$, the number of zeroes among the first $\ell$ entries is more than $t$ times the number of ones.
The cycle lemma allows us to count the number of $t$-dominating sequences with a given number of ones and zeroes. For $m = 1, 2$, we have the exact formulas $2^n$ and $\binom{2n+1}{n}$, respectively. For larger $m$, we have the following:

Corollary 2.10. For $m \ge 3$ and $n \ge 1$, we have $|D^{\mathrm{gen}(m)}_n| = \Theta\!\left(\frac{1}{n}\binom{(n+1)m}{n}\right)$.

Proof. We use the formula in Theorem 1.2. The lower bound follows by considering the $k = n$ term. For the upper bound, we bound the $k = n - \ell$ term by a multiple of the $k = n$ term that decays geometrically in $\ell$; summing over $0 \le \ell \le n$ gives the desired bound.
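The cycle lemma itself can be verified by brute force for small parameters; the sketch below (with our own helpers, `is_dominating` following Definition 2.7) counts $t$-dominating cyclic shifts.

```python
def is_dominating(seq, t):
    """Every nonempty prefix has strictly more than t times as many
    zeroes as ones (Definition 2.7)."""
    zeroes = ones = 0
    for entry in seq:
        if entry == 0:
            zeroes += 1
        else:
            ones += 1
        if zeroes <= t * ones:
            return False
    return True

def dominating_shifts(seq, t):
    """Count the cyclic shifts of seq that are t-dominating."""
    return sum(is_dominating(seq[i:] + seq[:i], t) for i in range(len(seq)))

# Cycle lemma: a zeroes, b ones, a >= t*b  =>  exactly a - t*b shifts work.
print(dominating_shifts([1, 0, 0, 0, 0], 2))     # a=4, b=1, t=2 -> 2
print(dominating_shifts([1, 1, 0, 0, 0, 0, 0], 2))  # a=5, b=2, t=2 -> 1
```

Note that the count $a - tb$ depends only on the multiset of entries, not on their initial arrangement, which is exactly what makes the lemma useful for enumeration.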

$D^m_n$ for Small $m$
In this section, we will prove Theorem 1.3. We also remark that, combining our lower and upper bounds, we obtain the following asymptotic estimate for $|D^m_n|$ when $m = o(n)$:

Upper Bound
We will prove the following upper bound.
Let $f_1, \ldots, f_T$ be a sequence of polynomials in $N$ variables.
Definition 3.3. We define a zero-pattern to be a subset $S$ of $[T]$ of the form $\{ i \in [T] : f_i(u) = 0 \}$ for some $u \in \mathbb{C}^N$.
Our proof closely follows the proof of the following result of Rónyai, Babai, and Ganapathy, which bounds the number of zero-patterns of general sequences of polynomials.

Theorem 3.4 ([5, Theorem 4.1]). Let $f_1, \ldots, f_T$ be a sequence of polynomials in $N$ variables, where each polynomial has degree at most $d$. For any $t$, the number of zero-patterns is at most the stated bound.

Our proof uses the key ideas from the proof of Theorem 3.4, with the main difference being that we can estimate the degrees of polynomials more carefully in the specific case of the sequence $\left(P^{(j)}(\lambda_i)\right)$.
Call a zero-pattern large if it has size larger than $n$, and small otherwise. The number of small zero-patterns can be bounded directly. For each large zero-pattern $S_k$, fix a witness $(P, \Lambda)$ and consider the associated polynomial $Q_k$. We claim that the $Q_k$ are linearly independent (this is proven in [5, Theorem 1.1]). Suppose, for the sake of contradiction, that some linear combination $\sum_k \alpha_k Q_k$ is identically zero, where the $\alpha_k$ are not all zero. Consider some index $\ell$ that maximizes $|S_\ell|$ over all $\ell$ with $\alpha_\ell \ne 0$. Evaluating $\sum_k \alpha_k Q_k$ at the $(P, \Lambda)$ corresponding to the zero-pattern $S_\ell$ gives $\alpha_\ell = 0$, which is a contradiction, as desired.
Hence, the $Q_k$'s are linearly independent. Furthermore, all of the monomials of the $Q_k$'s corresponding to large zero-patterns lie in a fixed set of size $\binom{mn}{n}$, so the $Q_k$'s corresponding to large zero-patterns lie in a space of dimension $\binom{mn}{n}$. Hence, the number of large zero-patterns is at most $\binom{mn}{n}$, giving the desired bound.
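For intuition, zero-patterns of a concrete one-variable sequence can be enumerated by probing a finite set of sample points. The setup below is ours: over $\mathbb{C}$, any witness of a nonempty pattern must be a root of some $f_i$, so a point set containing every root plus one non-root suffices in one variable.

```python
def zero_patterns(polys, points):
    """Zero-patterns {i : f_i(u) = 0} witnessed over the sample points.
    If `points` contains every root of every f_i plus one non-root,
    this is the full set of zero-patterns in one variable."""
    return {frozenset(i for i, f in enumerate(polys) if f(u) == 0)
            for u in points}

fs = [lambda x: x, lambda x: x - 1, lambda x: x**2 - 1]
print(sorted(sorted(p) for p in zero_patterns(fs, [-1, 0, 1, 2])))
# [[], [0], [1, 2], [2]]
```

Only 4 of the $2^3$ subsets of indices occur as zero-patterns here (e.g. $\{1\}$ alone is impossible, since the only root of $x - 1$ is also a root of $x^2 - 1$), illustrating why zero-patterns are much scarcer than arbitrary subsets.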

Lower Bound
We now establish the lower bound on $|D^m_n|$ from Theorem 1.3.
The idea behind the construction is, for some well-chosen $a$, to start with an $a \times (n+1)$ matrix $M \in D^{\mathrm{gen}(a)}_n$, to pick $P$ and $\Lambda$ such that $D_P(\Lambda) = M$, and then to append $m - a$ elements to $\Lambda$. We first prove a claim allowing us to find $P$ such that many distinct rows can be appended to $D_P(\Lambda)$.
Call an $m \times (n+1)$ matrix $T$-limited if each row has at most $T$ ones. Call an $m \times (n+1)$ safe matrix saturated if it has exactly $n$ ones in total. We let $C(m, n, T)$ denote the number of $m \times (n+1)$ safe $T$-limited saturated matrices.

Proposition 3.6. Let $\Lambda = (\lambda_1, \ldots, \lambda_a)$ be affinely algebraically independent, and let $M$ be an $a \times (n+1)$ safe $T$-limited saturated matrix. Then there is a degree-$n$ polynomial $P$ such that $D_P(\Lambda) = M$. Furthermore, this polynomial $P$ has the property that for any $\lambda \in \mathbb{C}$, at most $T$ of the entries of $D_P((\lambda))$ are one.
Proof. Using Theorem 1.1, we can pick some polynomial $P$ such that $D_P(\Lambda) = M$. Fix an arbitrary $\lambda \in \mathbb{C}$. We claim that for some $b \in [a]$, the $a$-tuple $\Lambda_b$ obtained from $\Lambda$ by replacing $\lambda_b$ with $\lambda$ is affinely algebraically independent. If $\Lambda$ with $\lambda$ appended is already affinely algebraically independent, the result is clear.
Otherwise, let $Q_0$ be a minimum-degree affine algebraic dependence of $(\lambda, \lambda_1, \ldots, \lambda_a)$. We claim that $Q_0$ divides all affine algebraic dependences. Suppose, for the sake of contradiction, that $Q$ is another affine algebraic dependence such that $Q_0$ does not divide $Q$. Viewing $Q_0$ and $Q$ as polynomials in the first variable and taking the resultant gives a nonzero polynomial $R$ with $R(t_1 + t_2\lambda_1, \ldots, t_1 + t_2\lambda_a) = 0$ for all $t_1, t_2$, contradicting the affine algebraic independence of $\Lambda$. Now, if we choose $b$ such that $x_b$ appears in $Q_0$, we find that $\Lambda_b$ is affinely algebraically independent. For this $b$, by Theorem 1.1, $D_P(\Lambda_b)$ must have at most $n$ ones, so the number of ones in $D_P((\lambda))$ is at most the number of ones in the row of $\lambda_b$, which is at most $T$ by assumption, as desired.

Now, we execute the construction mentioned earlier.
1. Pick an algebraically independent $a$-tuple of complex numbers $\lambda_1, \ldots, \lambda_a$, and pick $P$ as in Proposition 3.6 so that $D_P((\lambda_1, \ldots, \lambda_a)) = M$.
2. For each $a + 1 \le i \le m$, iteratively choose $\lambda_i$ to be any value, not equal to any of $\lambda_1, \ldots, \lambda_{i-1}$, such that $D_P((\lambda_i))$ is nonzero.
We will show that there must be many possible choices for each $\lambda_i$ in step 2. We have that $\lambda$ is a root of the degree-$\frac{n(n+1)}{2}$ polynomial $\prod_{j=0}^{n} P^{(j)}$ of multiplicity $\sum_t \frac{t(t+1)}{2}$, where the summation is over all lengths $t$ of runs of ones, with multiplicity, in $D_P((\lambda))$. Since $D_P((\lambda))$ has at most $T$ nonzero entries, and $\frac{x(x+1)}{2}$ is convex, $\lambda$ is a root of $\prod_{j=0}^{n} P^{(j)}$ of multiplicity at most $\frac{T(T+1)}{2}$. Note that $D_P((\lambda))$ is nonzero if and only if $\lambda$ is a root of $\prod_{j=0}^{n} P^{(j)}$; hence, the number of values of $\lambda$ giving a nonzero row is at least $\frac{n^2+n}{T^2+T}$. Since $P^{(j)}$ has at most $n$ roots for any $j$, there are at most $n$ possible values of $\lambda_{i+1}$ corresponding to the same not-all-zero row. The first inequality used below follows from the fact that the left-hand side is decreasing in $b$, and the second inequality is justified in Section 1.1. Taking $k = \frac{n^2+n}{T^2+T} - a$ and $b = m - a$, we obtain the claimed lower bound on the number of elements of $D^m_n$.

Proof of Proposition 3.8. The number of $a \times (n+1)$ safe matrices with exactly $t$ ones in the first row and $n$ ones in total is at most $f(t) := \binom{n}{t}\binom{(a-1)n}{n-t}$; here, the first term counts the number of ways to distribute $t$ ones into the first row and the second term counts the number of ways to distribute the remaining ones. We are ignoring the condition given by $M$ being safe, but use the fact that the last column cannot have ones.
We have, for all $t \le n-1$, an explicit formula for the ratio $f(t+1)/f(t)$; in particular, for $t \ge 2T/3$, the function $f$ decays by a factor of at least 2 when $t$ increases by 1. By Corollary 2.10, the number of $a \times (n+1)$ safe $T$-limited saturated matrices is at least the number of safe saturated matrices minus the contribution of rows with more than $T$ ones, as the number of $a \times (n+1)$ $\{0,1\}$ matrices with at least $t$ ones in a given row is at most $f(t)$. By Vandermonde's identity, we have $\sum_t f(t) = \binom{an}{n}$. Combining these estimates yields the claimed lower bound on $C(a, n, T)$, which exceeds the desired bound.
We are now ready to prove Proposition 3.5.
Proof of Proposition 3.5. In the first regime, the bound follows from a chain of inequalities in which the second inequality follows from Corollary 2.10 and the last estimate of the error term follows from the bounds $\log(n^m) \le n/\log n = o(n)$ and $\log\binom{mn}{n} \ge \log\binom{2n}{n} = \Theta(n)$.
We use Lemma 3.7 for the remaining regimes. For $m \ge n\sqrt{\log n}$, take $a = n$ and $T = 1$, so that $C(a, n, T) \ge 1$. We have assumed $m \le \frac{n^2+n}{2} = \frac{n^2+n}{T^2+T}$, so we may apply Lemma 3.7. For $n$ large enough for the aforementioned inequalities to hold, putting everything together gives a lower bound of $\left(\binom{mn}{n} n^m\right)^{1+o(1)}$, as desired.

$D^m_n$ for Large $m$
The goal of this section is to prove Theorem 1.4.
Let $V(m, n)$ denote the number of elements of $D^m_n$ with no all-zero rows. As $P^{(j)}$ has at most $n - j$ roots for each $j$, the $j$th column of any element of $D^m_n$ has at most $n - j$ ones, and so $V(m, n) = 0$ for all $m > \frac{n^2+n}{2}$.

Proposition 4.1. For all $m, n$, we have $|D^m_n| = \sum_k \binom{m}{k} V(k, n)$.

Proof. The right-hand side counts $|D^m_n|$ by conditioning on the number $k$ of not-all-zero rows, since the number of possible submatrices consisting of only those $k$ rows is $V(k, n)$.
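The row-conditioning identity in Proposition 4.1 is generic: it holds for any class of matrices closed under permuting and inserting zero rows. A brute-force sketch (with arbitrary 0/1 matrices standing in for dope matrices, so that $V(k) = (2^{n+1}-1)^k$) illustrates it:

```python
from math import comb

def total_matrices(m, n):
    """Directly count all m x (n+1) 0/1 matrices."""
    return 2 ** ((n + 1) * m)

def by_conditioning(m, n):
    """Condition on which k rows are nonzero, as in Proposition 4.1:
    sum_k C(m, k) * V(k), with V(k) = (2^(n+1) - 1)^k nonzero-row choices."""
    return sum(comb(m, k) * (2 ** (n + 1) - 1) ** k for k in range(m + 1))

print(total_matrices(3, 1), by_conditioning(3, 1))  # 64 64
```

For dope matrices the same conditioning applies verbatim, except that $V(k, n)$ vanishes for $k > \frac{n^2+n}{2}$, which truncates the sum.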
We can now conclude the first part of Theorem 1.4.

Proof. For the first formula, note that if an element of $D^{\frac{n^2+n}{2}}_n$ has no all-zero rows, then each row must have exactly one nonzero entry and the $j$th column must have exactly $n + 1 - j$ ones. We show that all matrices satisfying the aforementioned conditions are in $D^{\frac{n^2+n}{2}}_n$. This gives the desired enumeration, as we are then placing $n + 1 - j$ indistinguishable rows for each $j \in [n]$.
For the second formula, note that if an element of $D^{\frac{n^2+n}{2}-1}_n$ has no all-zero rows, then it must be one of the following: (a) an element of $D^{\frac{n^2+n}{2}}_n$ with no all-zero rows, with the bottom row removed; or (b) a matrix where the $j$th column has exactly $n + 1 - j$ ones, some row has a nonzero entry in exactly two columns (say $n + 1 - j$ and $n + 1 - j'$ with $j' > j + 1$), and the remaining rows have exactly one nonzero entry each.
The $j' > j$ condition can be assumed by symmetry; the $j' \ne j + 1$ condition arises since $j' = j + 1$ would give $P^{(n-j')}(t) = P^{(n-j'+1)}(t) = 0$, which would imply that $t$ is a root of $P^{(n-j')}$ of multiplicity 2, giving $P^{(n-j')}$ more than $j'$ roots counted with multiplicity.
We show that such matrices are attainable. The result is clear for matrices in (a). For matrices in (b), applying Proposition 4.3 to the value $T = 1$, the 1-tuple $\Lambda = (0)$, and the one-row matrix with ones in columns $n + 1 - j$ and $n + 1 - j'$, we get a suitable polynomial. Now, we count this set of matrices. There are $V\!\left(\frac{n^2+n}{2}, n\right)$ matrices in (a), since the bottom row is uniquely determined by the other rows. The number of matrices in (b) is computed by conditioning on $j$ and $j'$.

It has been shown that $|D_n(\Lambda)|$ is maximized exactly when $D_n(\Lambda) = D^{\mathrm{gen}(m)}_n$. In Section 2, we analyze $D^{\mathrm{gen}(m)}_n$. Nathanson [4, Theorem 2] has characterized $D^{\mathrm{gen}(m)}_n$ when $m = 1$, and Alon, Kravitz, and O'Bryant [1, Theorem 1] have characterized $D^{\mathrm{gen}(m)}_n$ when $m = 2$. We first generalize the aforementioned results to a complete characterization of $D^{\mathrm{gen}(m)}_n$ for any $m$, resolving a conjecture of the latter paper [1, Conjecture 15].

Theorem 1.1. For all positive integers $m, n$, the set $D^{\mathrm{gen}(m)}_n$ consists of exactly the $m \times (n+1)$ matrices with $\{0,1\}$ entries such that for all $k \in [0, n]$, there are at most $k$ nonzero entries in the last $k+1$ columns.

Using this characterization, we are able to enumerate $D^{\mathrm{gen}(m)}_n$.

Theorem 1.2. The number of elements of $D^{\mathrm{gen}(m)}_n$ with $k$ ones is $\frac{n+1-k}{n+1}\binom{(n+1)m}{k}$.

When $m = 1$, the resulting sum simplifies to $2^n$. When $m = 2$, it simplifies to $\binom{2n+1}{n}$, matching a result of Alon, Kravitz, and O'Bryant [1, Corollary 2]. In Sections 3 and 4, we provide bounds on the size of $D^m_n$, giving a partial answer to a question posed in [1, Problem 16]. In Section 3, we focus on the case $m \le \frac{n^2+n}{2}$. Alon, Kravitz, and O'Bryant [1, Theorem 4] find an upper bound on $|D^m_n|$.

The first asymptotic estimate on $\log\binom{mn}{n}$ follows directly from Stirling's approximation, and the second follows from the fact that $2 \le \left(\frac{m}{m-1}\right)^{m-1} < e$ for all $m > 1$.

Lemma 2.4. Fix $d \in \mathbb{Z}_{\ge 0}$. Let $S = (S_1, S_2)$, where $S_1, S_2 \subseteq [0, d]$. Suppose that in $M(S)$, for all $0 \le k \le d$, there are at most $k+1$ nonzero entries in columns $[d-k, d]$, with equality holding for $k = d$. Then there exist constants $c_{s,1}$ for $s \in S_1$ and $c_{s,2}$ for $s \in S_2$ such that $(\star)$ holds.

..., as otherwise $t + 1$ would have the same property. Applying Lemma 2.4 to the polynomial $P^{(t)}$, the degree $d = s' - t$, $\lambda = \lambda_1$, $\varepsilon = \lambda_m - \lambda_1$, and the sets $S = (\{s - t \mid s \in S_1,\, s \ge t\},\, \{s - t \mid s \in S_m,\, s \ge t\})$ gives the desired $Q$ and $c_{s,i}$.

Theorem 2.8 ([2], Cycle Lemma). Let $a, b, t$ be nonnegative integers with $a \ge tb$. For any sequence $p_1, \ldots, p_{a+b}$ of $a$ zeroes and $b$ ones, exactly $a - tb$ of the cyclic shifts of the sequence are $t$-dominating.

Proof of Theorem 1.2. To prove the assertion for fixed $k$, consider the map from $(c_{ij})_{i \in [m], j \in [0,n]} \in D^{\mathrm{gen}(m)}_n$ to $(m-1)$-dominating sequences $a_1, \ldots, a_{m(n+1)}$ given by $c_{ij} \leftrightarrow a_{m(n+1-j)+i}$. By Theorem 1.1, this is a bijection between elements of $D^{\mathrm{gen}(m)}_n$ with $k$ ones and $(m-1)$-dominating sequences with $k$ ones. By Theorem 2.8, of the $\binom{(n+1)m}{k}$ length-$m(n+1)$ sequences with $k$ ones and $m(n+1) - k$ zeroes,
$$\frac{n+1-k}{n+1}\binom{(n+1)m}{k} = \binom{(n+1)m-1}{k} - (m-1)\binom{(n+1)m-1}{k-1}$$
of them are $(m-1)$-dominating, using the convention $\binom{N}{-1} = 0$, implying the assertion for fixed $k$. Summing over $0 \le k \le n$ gives the desired formula for $|D^{\mathrm{gen}(m)}_n|$.

Remark 2.9. When $m = 2$, the size of $D^{\mathrm{gen}(m)}_n$ simplifies to $\binom{2n+1}{n}$. In this case, a direct counting argument is possible. The earlier map gives a bijection between $D^{\mathrm{gen}(2)}_n$ and 1-dominating $\{0,1\}$ sequences. Deleting the first element of the sequence and treating zeroes as up-steps and ones as down-steps, the 1-dominating $\{0,1\}$ sequences are in bijection with length-$(2n+1)$ left factors of Dyck paths, of which there are $\binom{2n+1}{n}$.

From the formula in Theorem 1.2, we can find good closed-form estimates for $|D^{\mathrm{gen}(m)}_n|$.

Lemma 3.7. For integers $m, n, a, T$ with $0 \le a \le m \le \frac{n^2+n}{T^2+T}$, we have
$$|D^m_n| \ge C(a, n, T) \cdot \left(\frac{n+1}{e(T^2+T)}\right)^{m-a}.$$

Proof. For each $a \times (n+1)$ safe $T$-limited saturated matrix $M$, we construct many matrices in $D^m_n$ whose top $a$ rows are $M$, as follows:

There are at least $\frac{1}{n} \cdot \frac{n^2+n}{T^2+T} - i$ possibilities for the $(i+1)$th row. For each $a \times (n+1)$ safe $T$-limited saturated matrix, this construction gives many distinct elements of $D^m_n$. For any nonnegative integers $b < k$, we use a standard lower bound on binomial coefficients.

which exceeds the bound required. To prove Proposition 3.5, it remains to analyze the size of the bound in Lemma 3.7.

Proposition 3.8. Suppose positive integers $a$ and $T$ with $T$ divisible by 3 satisfy $T \le n$ and $(a-2)T/3 > n$. Then we have $C(a, n, T) \ge \frac{1}{n}\binom{a(n+1)}{n} 2^{-T/3}$.

Corollary 4.2. We have, for $m > \frac{n^2+n}{2}$, $\log|D^m_n| \sim \log\binom{m}{\frac{n^2+n}{2}}$. More precisely, if $m(t), n(t): \mathbb{Z}_{>0} \to \mathbb{Z}_{>0}$ satisfy $m(t), n(t) \to \infty$, then: (a) if $m = \Theta(n^2)$ and $m > \frac{n^2+n}{2}$, then $\log|D^m_n| \sim \log\binom{m}{\frac{n^2+n}{2}}$; (b) if $\log m = \omega(\log n)$, then a corresponding estimate holds for $\log|D^m_n|$.

For the remaining regime in the proof of Proposition 3.5, take $T/3 = 10\lceil \log n \rceil$ and $a = \frac{m}{T/3} + 3$. For large enough $n$, we have $m \le \frac{n^2+n}{T^2+T}$, so we can apply Lemma 3.7; moreover, the relevant error term is at most $3n\log n = o(m\log n)$.