
EM (expectation maximization)

The EM algorithm for solving the linear system Af = g reads

  f^{k+1} = \frac{f^k}{A^T 1}\, A^T \frac{g}{A f^k}, \qquad k = 0, 1, \ldots      (4.4)

where 1 = (1, \ldots, 1)^T.

Multiplications and divisions in this formula are understood componentwise. It is derived from a statistical model of image reconstruction; see Shepp and Vardi (1982). The purpose is to compute a minimizer of the negative log likelihood function

  \ell(f) = \sum_i \left( (A f)_i - g_i \log (A f)_i \right)      (4.5)

The convergence of (4.4) to a minimizer of (4.5) follows from the general EM theory in Dempster et al. (1977). Note that (4.4) preserves positivity if f^0 > 0.
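A minimal NumPy sketch of the iteration (4.4); the 3-by-2 system, the starting point, and the iteration count below are invented for illustration, not taken from the text.

```python
import numpy as np

def em_step(A, g, f):
    """One EM (Shepp-Vardi) step for A f = g, all operations componentwise:
    f^{k+1} = f^k / (A^T 1) * A^T (g / (A f^k))."""
    ratio = g / (A @ f)                          # componentwise quotient g / (A f^k)
    return f * (A.T @ ratio) / (A.T @ np.ones(len(g)))

# Invented toy system with a positive exact solution.
A = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.0, 1.0]])
f_true = np.array([0.5, 1.5])
g = A @ f_true

f = np.ones(2)                                   # positive initial guess f^0 > 0
for _ in range(200):
    f = em_step(A, g, f)                         # stays positive at every step
```

For consistent data as above the iterates approach the exact solution, and each step decreases the functional (4.5), in line with the EM theory cited in the text.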

The pure EM method (4.4) produces unpleasant images which contain too much spurious detail. Several remedies have been suggested. An obvious one is to stop the iteration early, typically after 12 steps. One can also perform a smoothing step after each iteration (the EMS algorithm of Silverman et al. (1990)). A theoretically more satisfying method is to add a penalty term - B(f) to (4.5), i.e. to minimize
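The EMS idea can be sketched as follows. The three-point moving average below is a stand-in for the smoothing kernel of Silverman et al., whose exact form is not specified here, and the weight w as well as the toy system are invented for the example.

```python
import numpy as np

def ems_step(A, g, f, w=0.5):
    """One EMS iteration: the EM step (4.4) followed by a smoothing step.
    The smoother is a simple three-point moving average with weight w on
    the centre pixel (any linear smoother fits the same pattern)."""
    f = f * (A.T @ (g / (A @ f))) / (A.T @ np.ones(len(g)))   # EM step (4.4)
    fp = np.pad(f, 1, mode="edge")                            # replicate borders
    return w * f + (1 - w) * 0.5 * (fp[:-2] + fp[2:])         # smoothing step

# Invented toy example.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
g = A @ np.array([1.0, 2.0, 3.0])
f = np.ones(3)
for _ in range(10):
    f = ems_step(A, g, f)
```

Since the smoothing step is a convex combination of positive values, EMS inherits the positivity preservation of (4.4).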

  \ell(f) - B(f) \to \min      (4.6)

-B(f) may be interpreted either in a Bayesian setting or simply as a smoothing term. Typically,

  B(f) = -(f - f_0)^T B (f - f_0)

with a positive definite matrix B and a reference picture f_0. Unfortunately, minimizing (4.6) is much more difficult than minimizing (4.5) and cannot be done with a simple iteration such as (4.4). For partial solutions to this problem see Levitan and Herman (1987), Green (1990), Setzepfandt (1992).

As in ART, a judicious arrangement of the equations can speed up the convergence significantly. The directions have to be arranged in ``ordered subsets'', see Hudson et al. (1992).
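The ordered-subsets idea amounts to applying the update (4.4) with only part of the data at a time, cycling through the subsets. A minimal NumPy sketch; the 4-by-2 system and the partition into two subsets are invented for the example.

```python
import numpy as np

def osem_pass(A, g, f, subsets):
    """One full pass of ordered-subsets EM: the EM update is applied
    cyclically, each time restricted to one subset of the equations
    (in practice a subset corresponds to a group of projection directions)."""
    for s in subsets:
        As, gs = A[s], g[s]
        f = f * (As.T @ (gs / (As @ f))) / (As.T @ np.ones(len(s)))
    return f

# Invented consistent system; each subset groups two equations.
A = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.0, 1.0],
              [2.0, 2.0]])
f_true = np.array([0.5, 1.5])
g = A @ f_true
subsets = [[0, 1], [2, 3]]

f = np.ones(2)
for _ in range(200):
    f = osem_pass(A, g, f, subsets)
```

Each pass touches all of the data once but performs one multiplicative update per subset, which is the source of the speed-up over (4.4).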



Frank Wuebbeling
Thu Sep 10 10:51:17 MET DST 1998