von Neumann stability analysis

Consider the following notation:

$\displaystyle u^{(j+1)}=\mathcal{T}[u^{(j)}].$ (1.21)

Here $ \mathcal{T}$ is a (generally nonlinear) operator depending on the numerical scheme in question. The successive application of $ \mathcal{T}$ results in a sequence of values

$\displaystyle u^{(0)},\,u^{(1)},\,u^{(2)},\ldots,
$

that approximate the exact solution of the problem. As was mentioned above, at each time step a small error $ \varepsilon^{(j)}$ is introduced, i.e., the sequence actually computed is

$\displaystyle u^{(0)}+\varepsilon^{(0)},\,u^{(1)}+\varepsilon^{(1)},\,u^{(2)}+\varepsilon^{(2)},\ldots,
$

where $ \varepsilon^{(j)}$ is the cumulative rounding error at time $ t_j$ . Thus we obtain

$\displaystyle u^{(j+1)}+\varepsilon^{(j+1)}=\mathcal{T}(u^{(j)}+\varepsilon^{(j)})$ (1.22)

After linearization of the last equation (we suppose that a Taylor expansion of $ \mathcal{T}$ is possible, i.e., $ \mathcal{T}(u^{(j)}+\varepsilon^{(j)})=\mathcal{T}(u^{(j)})+\frac{\partial \mathcal{T}(u^{(j)})}{\partial u^{(j)}}\,\varepsilon^{(j)}+O(\varepsilon^2)$ ) and subtraction of Eq. (1.21), the linear equation for the perturbation takes the form:

$\displaystyle \boxed{\varepsilon^{(j+1)}=\frac{\partial \mathcal{T}(u^{(j)})}{\partial u^{(j)}}\varepsilon^{(j)}:=G\varepsilon^{(j)}}$ (1.23)

This equation is called the error propagation law, whereas the linearization matrix $ G$ is called the amplification matrix. The stability of the numerical scheme now depends on the eigenvalues $ g_{\mu}$ of $ G$ . In other words, the scheme is stable if and only if

$\displaystyle \vert g_{\mu}\vert\leq 1\quad \forall \mu
$
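
To see why, note that repeated application of Eq. (1.23) (assuming $ G$ stays the same from step to step) gives

$\displaystyle \varepsilon^{(j)}=G^{j}\varepsilon^{(0)}.
$

If $ G$ is diagonalizable, an expansion of $ \varepsilon^{(0)}$ in eigenvectors of $ G$ shows that each component is multiplied by $ g_{\mu}^{\,j}$ after $ j$ steps, so the error remains bounded precisely when $ \vert g_{\mu}\vert\leq 1$ for all $ \mu$ .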

The question now is how this information can be used in practice. The first point to emphasize is that in general one deals with the discrete values $ u_i^{(j)}:=u(x_i,t_j)$ , so one can write

$\displaystyle \varepsilon_i^{(j+1)}=\sum_{i'} G_{ii'}\varepsilon_{i'}^{(j)},$ (1.24)

where

$\displaystyle G_{ii'}=\frac{\partial \mathcal{T}(u^{(j)})_i}{\partial u_{i'}^{(j)}}.
$
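
In practice $ G_{ii'}$ can also be estimated numerically. The following minimal sketch assumes, purely for illustration, that $ \mathcal{T}$ is one forward-time centered-space (FTCS) step for the diffusion equation $ u_t=Du_{xx}$ with periodic boundaries; the function names are illustrative, not taken from the text. It approximates the Jacobian by finite differences and checks the eigenvalue criterion:

import numpy as np

# Minimal sketch (illustrative assumption): T is one FTCS step for the
# diffusion equation u_t = D u_xx with periodic boundaries;
# r = D*dt/dx**2 is the mesh ratio.
def ftcs_step(u, r=0.4):
    return u + r * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1))

def amplification_matrix(step, u, eps=1.0e-8):
    # G_{ii'} = d T(u)_i / d u_{i'}, approximated by finite differences
    n = u.size
    G = np.empty((n, n))
    Tu = step(u)
    for ip in range(n):
        du = np.zeros(n)
        du[ip] = eps
        G[:, ip] = (step(u + du) - Tu) / eps
    return G

u0 = np.random.rand(50)                    # base state (this scheme is linear)
G = amplification_matrix(ftcs_step, u0)
print(np.abs(np.linalg.eigvals(G)).max())  # stays at 1 for r <= 1/2, grows for r > 1/2

The largest modulus here comes from the $ k=0$ mode, which has $ g=1$ ; choosing $ r>1/2$ in the sketch pushes other modes beyond one, in agreement with the worked example at the end of the section.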

The values $ \varepsilon_i^{(j)}$ (the rounding error at time step $ t_j$ at the grid point $ x_i$ ) can be expanded in a Fourier series:

$\displaystyle \varepsilon_i^{(j)}=\sum_{k}e^{Ikx_i}\tilde{\varepsilon}^{(j)}(k),$ (1.25)

where $ I$ denotes the imaginary unit and $ \tilde{\varepsilon}^{(j)}(k)$ are the Fourier coefficients. An important point is that the functions $ e^{Ikx_i}$ are eigenfunctions of the matrix $ G$ , so the last expansion can be interpreted as an expansion in the eigenfunctions of $ G$ . Thus, from the practical point of view, one can take the error $ \varepsilon_i^{(j)}$ to be a single mode,

$\displaystyle \varepsilon_i^{(j)}=e^{Ikx_i}.
$

Substitution of this expression into Eq. (1.24) results in the following relation:

$\displaystyle \varepsilon_i^{(j+1)}=g(k)e^{Ikx_i}=g(k)\varepsilon_i^{(j)}.$ (1.26)

Thus $ e^{Ikx_i}$ is an eigenvector corresponding to the eigenvalue $ g(k)$ . The value $ g(k)$ is often called the amplification factor. Finally, the stability criterion is given as

$\displaystyle \boxed{\vert g(k)\vert\leq 1\quad \forall k}$ (1.27)

This criterion is called the von Neumann stability criterion.
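
As a standard illustration (a classical textbook example, not derived in the text above), consider the FTCS (forward-time, centered-space) discretization of the diffusion equation $ u_t=Du_{xx}$ :

$\displaystyle u_i^{(j+1)}=u_i^{(j)}+r\left(u_{i+1}^{(j)}-2u_i^{(j)}+u_{i-1}^{(j)}\right),\qquad r=\frac{D\,\Delta t}{\Delta x^2}.
$

Substituting the single mode $ \varepsilon_i^{(j)}=e^{Ikx_i}$ with $ x_{i\pm 1}=x_i\pm\Delta x$ gives the amplification factor

$\displaystyle g(k)=1+r\left(e^{Ik\Delta x}-2+e^{-Ik\Delta x}\right)=1-4r\sin^2\frac{k\Delta x}{2},
$

so the von Neumann criterion $ \vert g(k)\vert\leq 1$ for all $ k$ is satisfied exactly when $ r\leq 1/2$ , i.e., for $ \Delta t\leq \Delta x^2/(2D)$ .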
