**1. Introduction**

In this paper, we are interested in solving the linear response eigenvalue problem (LREP):

$$\mathbf{H}z := \left[ \begin{array}{cc} 0 & M \\ K & 0 \end{array} \right] \left[ \begin{array}{c} u \\ v \end{array} \right] = \lambda \left[ \begin{array}{c} u \\ v \end{array} \right] = \lambda z,$$

where *K* and *M* are *N* × *N* real symmetric positive definite matrices. Such a problem arises from studying the excitation energy of many-particle systems in computational quantum chemistry and physics [1–3]. It is also known as the Bethe-Salpeter (BS) eigenvalue problem [4] or the random phase approximation (RPA) eigenvalue problem [5]. There has been immense past and recent work on developing efficient numerical algorithms and attractive theories for LREP [6–15].

Since all the eigenvalues of **H** are real, nonzero, and appear in pairs {*λ*, −*λ*} [6], we order the eigenvalues in ascending order, i.e.,

$$-\lambda\_1 \le \cdots \le -\lambda\_N < \lambda\_N \le \cdots \le \lambda\_1.$$
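As a quick numerical check of this pairing property, the following sketch (our own illustration using NumPy; the helper `random_spd` and all variable names are hypothetical) assembles **H** from randomly generated symmetric positive definite *K* and *M* and verifies that the spectrum is real and symmetric about the origin:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

def random_spd(n):
    # A A^T is symmetric positive semidefinite; the shift makes it definite.
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

K, M = random_spd(N), random_spd(N)

# Assemble the LREP matrix H = [[0, M], [K, 0]].
H = np.block([[np.zeros((N, N)), M], [K, np.zeros((N, N))]])

# Eigenvalues of H are real since K, M are SPD (lambda^2 = eig(KM) > 0).
lam = np.sort(np.linalg.eigvals(H).real)

# The sorted spectrum is antisymmetric: it consists of pairs {lambda, -lambda}.
print(np.allclose(lam, -lam[::-1]))  # True
```

The pairing follows from eliminating one block: *Mv* = *λu* and *Ku* = *λv* give *KMv* = *λ*<sup>2</sup>*v*, so the eigenvalues of **H** come in ± square-root pairs of the (positive) eigenvalues of *KM*.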

In this paper, we focus on a small portion of the positive eigenvalues of LREP, i.e., $\lambda_i$, $i = k, k+1, \cdots, \ell$ with $1 \le k \le \ell \le N$ and $\ell - k + 1 \ll N$, together with their corresponding eigenvectors. We consider only the real case; all the results carry over easily to the complex case.

The weighted Golub-Kahan-Lanczos method (**wGKL**) for LREP was introduced in [16]. At the *j*-th iteration it recursively produces a much smaller projection $\mathbf{B}_j = \left[ \begin{array}{cc} 0 & B_j \\ B_j^T & 0 \end{array} \right]$ of **H**, where $B_j \in \mathbb{R}^{j \times j}$ is upper bidiagonal. Approximate eigenpairs of **H** can then be constructed from the singular value decomposition of $B_j$. The convergence analysis shows that running *k* iterations of **wGKL** is equivalent to running 2*k* iterations of a weighted Lanczos algorithm for **H** [16]. In fact, $B_j$ can also be taken to be lower bidiagonal, and the same discussion applies as in the upper bidiagonal case. In the following, we only consider the upper bidiagonal case.
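To make the recovery step concrete, the following sketch (our own illustration, not the algorithm of [16]) forms a small upper bidiagonal $B$, assembles the structured matrix $\left[ \begin{array}{cc} 0 & B \\ B^T & 0 \end{array} \right]$, and checks that its eigenvalues are exactly the signed singular values of $B$; this is why the SVD of $B_j$ yields the approximate eigenpairs:

```python
import numpy as np

rng = np.random.default_rng(1)
j = 3

# Upper bidiagonal B_j: main diagonal plus first superdiagonal.
B = np.diag(rng.random(j) + 1) + np.diag(rng.random(j - 1), 1)

# Structured projection [[0, B], [B^T, 0]] (symmetric).
Bj = np.block([[np.zeros((j, j)), B], [B.T, np.zeros((j, j))]])

# SVD of B versus the spectrum of the structured matrix.
s = np.linalg.svd(B, compute_uv=False)
eig = np.sort(np.linalg.eigvalsh(Bj))

# The eigenvalues are {+sigma_i, -sigma_i} for the singular values sigma_i.
print(np.allclose(eig, np.sort(np.concatenate([s, -s]))))  # True
```

This is the standard Jordan-Wielandt correspondence: if $B = U \Sigma V^T$, then $(u_i, \pm v_i)/\sqrt{2}$ are eigenvectors of the structured matrix with eigenvalues $\pm \sigma_i$.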

It is well known that the single-vector Lanczos method is widely used to compute a small number of extreme eigenvalues, but it may converge very slowly when the wanted eigenvalues lie in a cluster [17]. In contrast, a block Lanczos method with a suitable block size is capable of computing a cluster of eigenvalues, including multiple eigenvalues, very quickly. Motivated by this idea, we develop a block version of the **wGKL** method of [16] in order to efficiently find all or some positive eigenvalues within a cluster for LREP. Based on the standard block Lanczos convergence theory in [17], we establish error bounds for the approximation of an eigenvalue cluster and of the corresponding eigenspace, which illustrate the advantage of our weighted block Golub-Kahan-Lanczos algorithm (**wbGKL**).
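To illustrate the mechanics of the block idea (for a plain symmetric matrix, not the weighted variant developed in this paper), here is a minimal block Lanczos sketch with full reorthogonalization for simplicity; the function and all names are our own illustrative choices:

```python
import numpy as np

def block_lanczos(A, X0, steps):
    # Build an orthonormal basis Q of the block Krylov subspace
    # span{X0, A X0, ..., A^{steps-1} X0} and the projection T = Q^T A Q.
    # Full reorthogonalization is used for numerical safety in this toy code.
    Q0, _ = np.linalg.qr(X0)
    basis = [Q0]
    for _ in range(steps - 1):
        W = A @ basis[-1]
        for V in basis:             # orthogonalize against all earlier blocks
            W -= V @ (V.T @ W)
        Qn, _ = np.linalg.qr(W)
        basis.append(Qn)
    Q = np.hstack(basis)
    return Q, Q.T @ A @ Q

rng = np.random.default_rng(2)
n, b = 20, 2                        # block size b targets clusters of size <= b
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                   # symmetrize

Q, T = block_lanczos(A, rng.standard_normal((n, b)), steps=5)

# The basis is orthonormal; the Ritz values (eigenvalues of T) then
# approximate the extreme eigenvalues of A.
print(np.allclose(Q.T @ Q, np.eye(Q.shape[1])))  # True
```

With block size *b*, up to *b* eigenvalues of a cluster can be resolved simultaneously, which is the behavior the single-vector method lacks.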

As the size of the Krylov subspace increases, the storage demands, computational costs, and numerical stability of a simple version of a block Lanczos method may be adversely affected [18]. Several efficient restarting strategies have been developed to mitigate these effects for the classic Lanczos method, such as the implicitly restarted method [19] and the thick-restart method [20]. To make our block method more practical, and to exploit the special structure of LREP, we apply the thick-restart strategy to our block method.

The rest of this paper is organized as follows. Section 2 gives some necessary preliminaries for later use. In Section 3, the weighted block Golub-Kahan-Lanczos algorithm (**wbGKL**) for LREP is presented and its convergence analysis is discussed. Section 4 proposes the thick-restart weighted block Golub-Kahan-Lanczos algorithm (**wbGKL-TR**). Numerical examples are presented in Section 5 to illustrate the efficiency of the new algorithms. Finally, some conclusions are given in Section 6.

Throughout this paper, $\mathbb{R}^{m \times n}$ is the set of all $m \times n$ real matrices, $\mathbb{R}^n = \mathbb{R}^{n \times 1}$, and $\mathbb{R} = \mathbb{R}^1$. $I_n$ (or simply $I$ if its dimension is clear from the context) is the $n \times n$ identity matrix, and $0_{m \times n}$ is the $m \times n$ zero matrix. The superscript "$T$" denotes transpose. $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, and $\|\cdot\|_2$ denotes the 2-norm of a matrix or a vector. For a matrix $X \in \mathbb{R}^{m \times n}$, $\operatorname{rank}(X)$ denotes the rank of $X$, and $\mathcal{R}(X) = \operatorname{span}(X)$ denotes the column space of $X$; the submatrices $X_{i:j,:}$ and $X_{:,k:\ell}$ of $X$ are composed of the intersections of rows $i$ to $j$ and of columns $k$ to $\ell$, respectively. For matrices or scalars $X_i$, $\operatorname{diag}(X_1, \cdots, X_k)$ denotes the block diagonal matrix with $i$-th diagonal block $X_i$.
