Three Classic Iterative Methods for Solving Linear Systems
2013-04-13 04:22
For a given linear system AX = b, textbooks usually present three classic iterative methods. Here is a brief summary for easy reference:
(1) Jacobi method

The Jacobi method is an algorithm for determining the solution of a system of linear equations whose coefficient matrix is diagonally dominant, i.e., in each row the diagonal element dominates the other entries in absolute value. The solution is obtained iteratively with the element-based formula:

x_i^(k+1) = (b_i − Σ_{j≠i} a_ij · x_j^(k)) / a_ii,  i = 1, 2, …, n
Here x_i^(k) is the i-th component of the approximate solution (x_1^(k), x_2^(k), …, x_n^(k)) obtained at the k-th iteration. Starting from an initial vector x^(0), the method produces x^(1), x^(2), … in turn. If the coefficient matrix of the original system is strictly diagonally dominant by rows, then x^(k) converges to the solution x of the original system.
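As a minimal sketch (not part of the original post), the element-based Jacobi update above can be implemented with NumPy; the test matrix, tolerance, and iteration cap are illustrative assumptions:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b by Jacobi iteration. Convergence is guaranteed
    when A is strictly diagonally dominant by rows."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)              # diagonal entries a_ii
    R = A - np.diagflat(D)      # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D  # all components updated from the old x
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# A strictly diagonally dominant example system
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
print(jacobi(A, b))  # approximates the solution of Ax = b
```

Note that every component of x_new is computed from the previous iterate x, which is why each sweep can be vectorized (or parallelized) as a single matrix-vector product.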
(2) Gauss-Seidel method

The Gauss-Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations. It is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either diagonally dominant, or symmetric and positive definite.
The basic idea of Gauss-Seidel iteration is similar to Jacobi iteration, but when computing the (k+1)-th iterate of x_i, Gauss-Seidel uses the new (k+1)-th values of x_1, …, x_{i-1} together with the old k-th values of x_{i+1}, …, x_n. The computation of x_i^(k+1) uses only the elements of x^(k+1) that have already been computed, and only the elements of x^(k) that have not yet been advanced to iteration k+1. This means that, unlike the Jacobi method, only one storage vector is required, as elements can be overwritten as they are computed, which can be advantageous for very large problems. However, unlike the Jacobi method, the computations for each element cannot be done in parallel. Furthermore, the values at each iteration depend on the order of the original equations.
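A minimal sketch of the single-storage-vector idea described above (again an illustrative implementation, with an assumed test system and tolerance):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b by Gauss-Seidel iteration. Only one storage
    vector x is needed: entries are overwritten in place."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()  # kept only for the stopping test
        for i in range(n):
            # x[:i] already holds the new (k+1)-th values,
            # x[i+1:] still holds the old k-th values
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Symmetric positive definite example system
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
print(gauss_seidel(A, b))  # approximates the solution of Ax = b
```

The inner loop over i is inherently sequential, because x[i] depends on the freshly overwritten x[0..i-1]; this is exactly the parallelism trade-off mentioned above.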
(3) Successive over-relaxation method (SOR)

The method of successive over-relaxation (SOR) is a variant of the Gauss-Seidel method for solving a linear system of equations, with faster convergence. A similar idea can be applied to any slowly converging iterative process. Its element-based formula is:

x_i^(k+1) = (1 − ω) · x_i^(k) + (ω / a_ii) · (b_i − Σ_{j<i} a_ij · x_j^(k+1) − Σ_{j>i} a_ij · x_j^(k))
where ω is the relaxation factor. In essence, the method first computes the Gauss-Seidel value for each component, then takes a linear combination, weighted by ω, of that value and the previous iterate to update the solution.
The choice of the relaxation factor ω is not necessarily easy and depends on the properties of the coefficient matrix. ω > 1 is usually used to speed up convergence of a slowly converging process, while ω < 1 can help make a diverging process converge.
When A is symmetric and positive definite, the method converges for 0 < ω < 2. When ω = 1 it reduces to Gauss-Seidel iteration; ω > 1 is called over-relaxation and ω < 1 under-relaxation. A well-chosen ω can greatly improve the convergence speed of SOR.
Thus convergence of the iteration process follows, but we are
generally interested in faster convergence rather than just
convergence.
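The ω-weighted blend of the Gauss-Seidel value and the previous iterate can be sketched as follows (illustrative code; the choice ω = 1.25 and the test system are assumptions, not tuned values):

```python
import numpy as np

def sor(A, b, omega=1.25, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b by successive over-relaxation (SOR).
    omega = 1 reduces to Gauss-Seidel; for symmetric positive
    definite A, convergence holds for 0 < omega < 2."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            gs = (b[i] - s) / A[i, i]                    # Gauss-Seidel value
            x[i] = (1 - omega) * x_old[i] + omega * gs   # blend with old iterate
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Symmetric positive definite example system
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
print(sor(A, b, omega=1.25))  # approximates the solution of Ax = b
```

Setting omega = 1.0 in this sketch reproduces the Gauss-Seidel iterate exactly, which makes the relationship between the two methods easy to check numerically.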