Standard Article

Methods for Acceleration of Convergence (Extrapolation) of Vector Sequences

Avram Sidi

Published Online: 16 MAR 2009

DOI: 10.1002/9780470050118.ecse234

Wiley Encyclopedia of Computer Science and Engineering


How to Cite

Sidi, A. 2009. Methods for Acceleration of Convergence (Extrapolation) of Vector Sequences. Wiley Encyclopedia of Computer Science and Engineering. 1828–1846.

Author Information

Technion—Israel Institute of Technology, Haifa, Israel



An important problem that arises in many areas of science and engineering is that of computing limits of sequences of vectors {x_m}, where x_m ∈ ℂ^N and N is very large. Such sequences arise, for example, in the solution of systems of linear or nonlinear equations by fixed-point iterative methods, and their limits s = lim_{m→∞} x_m are simply the required solutions. In most cases, however, these sequences converge to their limits extremely slowly. One practical way to make the sequences {x_m} converge more quickly is to apply to them vector extrapolation methods. In this article, we present a somewhat detailed review of two polynomial-type vector extrapolation methods that have proved to be very efficient convergence accelerators; namely, the minimal polynomial extrapolation (MPE) and the reduced rank extrapolation (RRE). We discuss the derivation of these methods, describe the most accurate and stable algorithms for their implementation along with effective modes of usage in solving systems of equations, nonlinear as well as linear, and present their convergence and stability theory. We also discuss their close connection with known Krylov subspace methods for linear systems. In addition to being used in solving large sparse systems of equations, MPE and RRE can be applied to other problems: We review the use of MPE in deriving vector-valued rational approximations from vector-valued power series and their connection with Krylov subspace methods for eigenvalue problems. We also show that MPE and RRE can be used very effectively to obtain the dominant eigenvectors of large sparse matrices when the corresponding eigenvalues are known, and provide the relevant theory as well. One such problem that has been of interest recently is that of computing the PageRank of the Google matrix. For completeness, we also discuss briefly the scalar, vector, and topological epsilon algorithms.
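To make the two methods concrete, the following is a minimal numerical sketch, not taken from the article: the function names and the particular least-squares formulations are illustrative assumptions, showing MPE and RRE applied to the iterates of a linear fixed-point iteration x_{m+1} = A x_m + b.

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation (illustrative sketch).
    X has columns x_0, ..., x_{k+1}; returns the extrapolated vector s_k."""
    U = np.diff(X, axis=1)                   # differences u_j = x_{j+1} - x_j
    # Least squares for c_0, ..., c_{k-1} with c_k fixed at 1:
    # minimize || c_0 u_0 + ... + c_{k-1} u_{k-1} + u_k ||
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()                      # gamma_j = c_j / sum_i c_i
    return X[:, :-1] @ gamma                 # s_k = sum_j gamma_j x_j

def rre(X):
    """Reduced rank extrapolation (illustrative sketch), written as
    s_k = x_0 + [u_0 ... u_{k-1}] xi, where xi minimizes || u_0 + W xi ||
    and W holds the second differences w_j = u_{j+1} - u_j."""
    U = np.diff(X, axis=1)                   # first differences
    W = np.diff(U, axis=1)                   # second differences
    xi, *_ = np.linalg.lstsq(W, -U[:, 0], rcond=None)
    return X[:, 0] + U[:, :-1] @ xi

# Demo on a linear fixed-point iteration x_{m+1} = A x_m + b:
rng = np.random.default_rng(0)
N = 5
M = rng.standard_normal((N, N))
A = 0.4 * M / np.linalg.norm(M, 2)           # spectral radius < 1, so x_m converges
b = rng.standard_normal(N)
s = np.linalg.solve(np.eye(N) - A, b)        # the true limit

x = np.zeros(N)
iterates = [x]
for _ in range(N + 1):                       # k = N needs iterates x_0, ..., x_{N+1}
    x = A @ x + b
    iterates.append(x)
X = np.column_stack(iterates)
```

For such a linear iteration, when k equals the degree of the minimal polynomial of A with respect to the initial error (here k = N for a generic start), both methods reproduce the solution of (I − A)x = b exactly in exact arithmetic, far beyond the accuracy of the raw iterate x_{N+1}.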


  • acceleration of convergence;
  • extrapolation;
  • vector sequences;
  • minimal polynomial extrapolation (MPE);
  • reduced rank extrapolation (RRE);
  • epsilon algorithms;
  • Krylov subspace methods;
  • linear equations;
  • nonlinear equations;
  • fixed-point iterations;
  • eigenvalue problems