Optimal Rank‐1 Hankel Approximation in the Spectral Norm for Matrices with Multiple Largest Eigenvalue

We extend a result from [2, 3] on the optimal rank‐1 Hankel approximation of real symmetric matrices with isolated largest eigenvalue to the case where the largest eigenvalue is not isolated. To illustrate our findings, we give an example where the optimal rank‐1 Hankel approximation is easily obtained and one where it does not exist.


Introduction
A real square Hankel matrix H_1 ∈ R^{N×N} of rank 1 is of the form

    H_1 = c · z(z) z(z)^T,   c ∈ R \ {0},   z ∈ R̄,    (1)

with the normalized vector z(z) := (1, z, z^2, …, z^{N−1})^T / ‖(1, z, z^2, …, z^{N−1})^T‖_2 for z ∈ R, where R̄ := R ∪ {∞}. Note that z(∞) = lim_{z→∞} z(z) = (0, …, 0, 1)^T is the last vector of the standard basis of R^N and therefore finite. With the characterization (1), the problem of approximating a given real symmetric matrix A ∈ R^{N×N} by a Hankel-structured matrix of rank 1 in the spectral norm reads

    min_{c ∈ R, z ∈ R̄} ‖A − c · z(z) z(z)^T‖_2.    (2)

Generalizing a result from [1], in [2] we solved this problem for real symmetric matrices A whose largest eigenvalue is bounded away from the modulus of the second largest eigenvalue; see also [3]. Here we consider real symmetric matrices A whose eigenvalue of largest modulus is not simple, thereby further extending the result of [1] and our results from [2, 3].
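As an aside, the parametrization (1) is easy to check numerically. The following sketch (Python/NumPy; the helper names `z_vec` and `rank1_hankel` are ours, not from [2, 3]) builds c · z(z) z(z)^T and verifies that the result is indeed a symmetric Hankel matrix of rank 1:

```python
import numpy as np

def z_vec(z, N):
    """z(z) = (1, z, ..., z^(N-1))^T normalized in the 2-norm; z = np.inf yields e_N."""
    if np.isinf(z):
        e = np.zeros(N)
        e[-1] = 1.0
        return e
    v = float(z) ** np.arange(N)
    return v / np.linalg.norm(v)

def rank1_hankel(c, z, N):
    """Rank-1 Hankel matrix c * z(z) z(z)^T from the parametrization (1)."""
    zz = z_vec(z, N)
    return c * np.outer(zz, zz)

H = rank1_hankel(2.0, 0.5, 4)
# Hankel structure: entries are constant along each antidiagonal i + j = const,
# since H[i, j] is proportional to z^(i+j).
is_hankel = all(np.isclose(H[i, j], H[i + 1, j - 1])
                for i in range(3) for j in range(1, 4))
```

The limit case z = ∞ reproduces the standard basis vector e_N mentioned above, so the whole closed parameter range R̄ is covered by one helper.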
Theorem 2.1
Then c̄ chosen as in (4), where the first/second case refers to the left-/right-hand side of (3), ensures that the optimal error bound is attained by c̄ · z̄z̄^T.
Proof. We employ a proof technique similar to that of Thm. 4.5 in [2]. Let A = VΛV^T be an eigendecomposition of A with Λ = diag(λ_0, …, λ_{N−1}). The crucial point is that ‖A − c̄ · z̄z̄^T‖_2 = λ_0 if and only if both of the auxiliary matrices M_1(λ_0) := λ_0 I − Λ + c̄ · V^T z̄ z̄^T V and M_2(λ_0) := λ_0 I + Λ − c̄ · V^T z̄ z̄^T V are positive semidefinite (psd) and at least one of them actually possesses the eigenvalue zero. In the first case of (3), M_1(λ_0) is psd for any z̄ and c̄ > 0, and M_2(λ_0) is psd if and only if v_j^T z̄ = 0 for all j with λ_j = −λ_0. The particular choice of c̄ ensures that M_2(λ_0) possesses the eigenvalue zero. In the second case of (3), the roles of M_1(λ_0) and M_2(λ_0) are interchanged.
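The equivalence at the heart of the proof can be observed numerically. The sketch below is our own illustration, not code from [2]: the diagonal test matrix (chosen with a double largest eigenvalue for transparency) is a stand-in, not a matrix from the paper. Over a grid of values c, it checks that ‖A − c · z̄z̄^T‖_2 = λ_0 holds exactly when M_1(λ_0) and M_2(λ_0) are both psd and at least one of them is singular:

```python
import numpy as np

# Stand-in symmetric test matrix with double largest eigenvalue lam0 = 3.
A = np.diag([3.0, 3.0, 1.0, -2.0])
lam, V = np.linalg.eigh(A)              # A = V diag(lam) V^T
lam0 = np.abs(lam).max()                # = ||A||_2 = 3
Lam = np.diag(lam)
I = np.eye(4)

zbar = 0.7 ** np.arange(4)
zbar /= np.linalg.norm(zbar)            # the Hankel direction z(0.7)
tol = 1e-9

agree = []
for c in np.linspace(0.0, 5.0, 26):
    # left-hand side: the optimal error bound lam0 is attained
    err_attained = abs(np.linalg.norm(A - c * np.outer(zbar, zbar), 2) - lam0) < tol
    # right-hand side: M1, M2 psd and at least one of them singular
    w = V.T @ zbar
    M1 = lam0 * I - Lam + c * np.outer(w, w)
    M2 = lam0 * I + Lam - c * np.outer(w, w)
    e1 = np.linalg.eigvalsh(M1).min()
    e2 = np.linalg.eigvalsh(M2).min()
    psd_and_singular = e1 > -tol and e2 > -tol and (abs(e1) < tol or abs(e2) < tol)
    agree.append(err_attained == psd_and_singular)
```

Because the largest eigenvalue is double here, M_1(λ_0) stays singular for every c ≥ 0, and both sides of the equivalence remain true until c grows large enough that M_2(λ_0) loses positive semidefiniteness.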

Remark 2.2
The precise choices of c̄ given in Thm. 2.1 are sufficient but not necessary. The optimal error bound being attained only implies that c̄ lies in the range between zero and the respective value from (4). This is because we cannot determine whether the matrix M_1(λ_0) in the first case, or M_2(λ_0) in the second case of (3), possesses the eigenvalue zero or is in fact strictly positive definite. If the positive (respectively negative) largest eigenvalue actually has higher multiplicity (e.g. λ_0 = λ_1 > 0), then M_1(λ_0) (respectively M_2(λ_0)) does possess the eigenvalue zero, and we can choose c̄ anywhere in the range between zero and the respective value from (4). This observation contributes to Cor. 2.3 and is illustrated in Ex. 3.1.
The conditions on z̄ from Thm. 2.1 are trivially satisfied for any z̄ = z(z), z ∈ R̄, if all eigenvalues of A of largest modulus occur with the same sign, see Ex. 3.1.

Corollary 2.3
Assume that all eigenvalues λ_j of A with |λ_j| = ‖A‖_2 = |λ_0| have the same sign, and λ_0 > 0. Then for every z̄ = z(z), z ∈ R̄, with c̄ chosen in the range given in (5), the matrix c̄ · z̄z̄^T is an optimal rank-1 Hankel approximation of A attaining the optimal error bound.
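A small numerical experiment illustrates the corollary. The matrix A below is not the matrix of Ex. 3.1 (which is not reproduced here); it is merely an illustrative symmetric stand-in with the same ordered eigenvalues 11, 11, 1. For a fixed z̄ = z(z), the optimal error ‖A − c̄ · z̄z̄^T‖_2 = 11 is then attained for every c̄ in a whole interval starting at zero, in line with Cor. 2.3:

```python
import numpy as np

# Symmetric 3x3 stand-in with eigenvalues 11, 11, 1 (not the matrix of Ex. 3.1):
# A = 11*I - 10*u u^T for a unit vector u.
u = np.array([1.0, 2.0, 2.0]) / 3.0
A = 11.0 * np.eye(3) - 10.0 * np.outer(u, u)

z = 0.5
zbar = z ** np.arange(3)
zbar /= np.linalg.norm(zbar)            # the Hankel direction z(0.5)

# The largest eigenvalue 11 is double and positive, so the optimal error 11
# is attained for a whole interval of values c (cf. Cor. 2.3):
cs = np.linspace(0.0, 2.0, 21)
errs = [np.linalg.norm(A - c * np.outer(zbar, zbar), 2) for c in cs]
```

Since the eigenspace of the double eigenvalue 11 always contains a vector orthogonal to z̄, subtracting c · z̄z̄^T cannot reduce the spectral norm below 11, and it does not increase it until c leaves the admissible range.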
Example 3.1
corresponding to the ordered eigenvalues λ_0 = 11, λ_1 = 11, λ_2 = 1. According to Cor. 2.3, for any z̄ = z(z), z ∈ R̄, we compute the upper bound on c̄ from (5). Each pair (c̄, z) in the grey area of Fig. 1 attains the optimal error bound ‖A − c̄ · z̄z̄^T‖_2 = 11.

If neither of the conditions in (3) can be satisfied for any z̄ = z(z), z ∈ R̄, then there is no real Hankel matrix of true rank 1 that optimally approximates A; only the trivial solution, the zero matrix, remains, as the following example shows.
Example 3.2
denoted v_0 through v_4, respectively, with corresponding eigenvalues λ_0 = λ_1 = λ_2 = 1 and λ_3 = λ_4 = −1. Then neither of the two systems of equations

    v_3^T z̄ = 0  ⇔  z + z^3 = 0,
    v_4^T z̄ = 0  ⇔  1 + z^4 = 0,

corresponding to the first and second set of conditions in (3), respectively, has a joint solution in R. Note that z(∞) = (0, 0, 0, 0, 1)^T does not solve either of the systems either. So in this example, a solution of problem (2) of true rank 1 does not exist.
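The incompatibility of the two polynomial conditions can be confirmed directly. This sketch (our own illustration) computes the roots of z + z^3 and of 1 + z^4 and checks that they share no real root, so no admissible parameter z ∈ R exists:

```python
import numpy as np

# Roots of the two polynomial conditions from the example:
r1 = np.roots([1.0, 0.0, 1.0, 0.0])       # z^3 + z = 0  ->  roots 0, +i, -i
r2 = np.roots([1.0, 0.0, 0.0, 0.0, 1.0])  # z^4 + 1 = 0  ->  four complex roots

# Keep only (numerically) real roots of each polynomial:
real1 = {round(r.real, 8) for r in r1 if abs(r.imag) < 1e-8}
real2 = {round(r.real, 8) for r in r2 if abs(r.imag) < 1e-8}
joint_real_solutions = real1 & real2       # empty: no joint real solution
```

The first condition admits only z = 0, while the second has no real solution at all, so the two conditions can never hold simultaneously over R.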