
Iterative Improvement

\[\begin{equation} Ax = b \label{original_equation} \end{equation}\]

Let $\hat{x}_{0}$ be a numerical solution of the equation, and let

\[r_{0} := b - A\hat{x}_{0} .\]

We can improve the solution by solving the following equation for $d$:

\[\begin{equation} Ad = r_{0} \label{equation_for_improvement} \end{equation} .\]

The improved solution is defined as

\[\hat{x}_{1} := \hat{x}_{0} + d_{0} ,\]

where $d_{0}$ is the numerical solution of equation \(\eqref{equation_for_improvement}\). When we solve the original equation \(\eqref{original_equation}\), we obtain the LU decomposition of $A$. Thus equation \(\eqref{equation_for_improvement}\) can be solved in only $O(N^{2})$ operations by reusing the factors.
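The step above can be sketched in Python. This is a minimal illustration, not code from the reference: the matrix `A` and right-hand side `b` are made-up, and SciPy's `lu_factor`/`lu_solve` stand in for a stored LU decomposition.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical 2x2 system (not from the text).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Factor A once: O(N^3). The factors are reused below.
lu, piv = lu_factor(A)

# Numerical solution x_0 of A x = b.
x0 = lu_solve((lu, piv), b)

# Residual r_0 = b - A x_0.
r0 = b - A @ x0

# Solve A d = r_0 with the stored LU factors: only O(N^2) work.
d0 = lu_solve((lu, piv), r0)

# Improved solution x_1 = x_0 + d_0.
x1 = x0 + d0
```

Because the factorization is reused, each refinement step costs only the two triangular solves.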

Remark

Unfortunately, if the residual is computed in the working precision, the naive execution of the algorithm yields an $\hat{x}_{1}$ that is in general no more accurate than $\hat{x}_{0}$. The situations in which the refinement is effective are characterized below.

Definition

For $A = \left( a_{j}^{i} \right) \in \mathbb{R}^{n \times n}$ and $b = \left( b^{i} \right) \in \mathbb{R}^{n}$, define the componentwise absolute values, the maximum norm, and the vector of all ones:

\[\begin{eqnarray} \abs{A} & := & \left( \abs{a_{j}^{i}} \right)_{i,j = 1,\ldots,n} \nonumber \\ \abs{b} & := & \left( \abs{b^{i}} \right)_{i = 1,\ldots,n} \nonumber \end{eqnarray}\] \[\norm{ b }_{\infty} := \max_{i = 1,\ldots, n} \abs{b^{i}} .\] \[e := \left( \begin{array}{c} 1 \\ \vdots \\ 1 \end{array} \right) .\]

Definition componentwise backward error

For $E \in \mathbb{R}_{\ge 0}^{n \times n}$, $f \in \mathbb{R}_{\ge 0}^{n}$, and $y \in \mathbb{R}^{n}$, the componentwise backward error is defined as

\[\omega_{E, f}(y) := \min\{ \epsilon \ge 0 \mid (A + \Delta A) y = b + \Delta b, \ \abs{\Delta A} \le \epsilon E, \ \abs{\Delta b} \le \epsilon f \} .\]

If $E = \abs{A}$ and $f = \abs{b}$, we call it the componentwise relative backward error.

Theorem 1.1 (Oettli–Prager)

\[\omega_{E, f}(y) = \max_{i = 1,\ldots, n} \frac{ \abs{r_{i}} }{ \left( E\abs{y} + f \right)_{i} } ,\]

where $r := b - Ay$.
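Theorem 1.1 turns the minimization into a directly computable quantity. A sketch, with made-up data: choosing $E = \abs{A}$ and $f = \abs{b}$ gives the componentwise relative backward error.

```python
import numpy as np

# Hypothetical system and approximate solution (not from the text).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 5.0])
y = np.linalg.solve(A, b)  # an accurate approximate solution

# E = |A|, f = |b|: componentwise relative backward error.
E = np.abs(A)
f = np.abs(b)

# omega_{E,f}(y) = max_i |r_i| / (E|y| + f)_i  with r = b - A y.
r = b - A @ y
omega = np.max(np.abs(r) / (E @ np.abs(y) + f))
```

Since `y` was computed by a backward-stable solver, `omega` comes out on the order of the machine epsilon.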

Proof

$\Box$

Definition Condition number

\[\begin{eqnarray} \mathrm{cond}(A, x) & := & \frac{ \norm{ \abs{A^{-1}} \abs{A} \abs{x} }_{\infty} }{ \norm{x}_{\infty} } \nonumber \\ \mathrm{cond}(A) & := & \mathrm{cond}(A, e) \nonumber \\ & = & \frac{ \norm{ \abs{A^{-1}} \abs{A} \abs{e} }_{\infty} }{ \norm{e}_{\infty} } \nonumber \\ & = & \norm{ \abs{A^{-1}} \abs{A} }_{\infty} \nonumber \\ & \le & \norm{ A^{-1} }_{\infty} \norm{ A }_{\infty} \nonumber \\ & =: & \kappa_{\infty}(A) \nonumber \end{eqnarray}\]
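A sketch of the two quantities above on a made-up, badly row-scaled matrix; forming the inverse explicitly is fine for a small illustration, though not something one would do at scale.

```python
import numpy as np

# Hypothetical ill-scaled matrix (not from the text).
A = np.array([[1.0, 1e4],
              [0.0, 1.0]])
Ainv = np.linalg.inv(A)

# cond(A) = || |A^{-1}| |A| ||_inf  (the componentwise condition number).
cond_A = np.linalg.norm(np.abs(Ainv) @ np.abs(A), np.inf)

# kappa_inf(A) = ||A^{-1}||_inf ||A||_inf, an upper bound for cond(A).
kappa_inf = np.linalg.norm(Ainv, np.inf) * np.linalg.norm(A, np.inf)
```

For this example `cond_A` is several orders of magnitude below `kappa_inf`, which illustrates why the componentwise quantity can be a much tighter measure of sensitivity.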

The componentwise condition number is defined as

\[\begin{eqnarray} \mathrm{cond}_{E, f}(A, x) & := & \lim_{\epsilon \rightarrow 0} \sup \left\{ \frac{ \norm{\Delta x}_{\infty} }{ \epsilon \norm{x}_{\infty} } \mid (A + \Delta A) (x + \Delta x) = b + \Delta b, \ \abs{\Delta A} \le \epsilon E, \ \abs{\Delta b} \le \epsilon f \right\} . \end{eqnarray}\]

Assumptions

\[(A + \Delta A)x = b, \ \abs{\Delta A} \le u W,\] \[u \norm{ \abs{A^{-1}} W }_{\infty} < \frac{1}{2} ,\]

where the second condition guarantees that $A + \Delta A$ is nonsingular.

Definition Forward error

The forward error of $\hat{x}_{i}$ is defined as

\[\frac{ \norm{x - \hat{x}_{i}}_{\infty} }{ \norm{x}_{\infty} } .\]

Heuristic analysis

Definition

\[u := \frac{1}{2} \beta^{1-t} ,\]

where $\beta$ is the base and $t$ the number of digits in the significand of the floating point arithmetic; $u$ is the unit roundoff.

Heuristic 1

Gaussian elimination produces a solution of a linear system with a relatively small residual $r$.

Heuristic 2

If the unit roundoff and condition number satisfy $u \approx 10^{-d}$ and $\kappa_{\infty}(A) \approx 10^{q}$, then Gaussian elimination produces a solution $x$ that has about $d - q$ correct decimal digits.

Heuristic 3

If the unit roundoff and condition number satisfy $u \approx 10^{-d}$ and $\kappa_{\infty}(A) \approx 10^{q}$, then after $k$ executions of the refinement algorithm with the residual computed in extended precision, $x$ has approximately $\min(d, k(d - q))$ correct digits.
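A sketch of this mixed-precision behaviour, under assumed setup not taken from the text: `float32` as the working precision ($u \approx 6 \times 10^{-8}$, so $d \approx 7$), a synthetic matrix with condition number around $10^{3}$, and the residual accumulated in `float64`.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Synthetic matrix with 2-norm condition number ~1e3 (made-up example).
rng = np.random.default_rng(0)
n = 50
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(np.logspace(0, 3, n)) @ Q2
x_true = np.ones(n)
b = A @ x_true

# Working precision: float32. Factor and solve once.
A32 = A.astype(np.float32)
lu, piv = lu_factor(A32)
x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
err_before = np.linalg.norm(x - x_true, np.inf)

# Refinement: residual in float64, correction solved in float32.
for _ in range(3):
    r = b - A @ x                        # extended-precision residual
    d = lu_solve((lu, piv), r.astype(np.float32))
    x = x + d.astype(np.float64)
err_after = np.linalg.norm(x - x_true, np.inf)
```

The initial error reflects roughly $d - q$ correct digits; after a few refinement steps the error drops well below the single-precision level, as the heuristic predicts.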

Reference

Golub, Gene H., and Charles F. Van Loan. Matrix computations. Vol. 3. JHU Press, 2012.