Prior mixture models are capable of building complex prior densities from simple components, e.g., Gaussians. Going beyond classical quadratic regularisation approaches, they retain the convenient analytical properties of Gaussians while allowing the degree of the resulting non-convexity to be controlled explicitly. Combined with parameterised component mean functions and covariances, they appear to provide a powerful tool.
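The contrast drawn above can be sketched numerically: the negative log-density ("energy") of a single Gaussian prior is quadratic and hence convex, while a mixture of well-separated Gaussian components yields a non-convex energy whose shape is controlled by the component means, covariances, and weights. The following is a minimal illustration, not taken from this paper; all parameter values are arbitrary choices for demonstration.

```python
import numpy as np

def mixture_neg_log_prior(theta, means, sigmas, weights):
    """Negative log-density of a one-dimensional Gaussian mixture prior.

    Evaluated pointwise over an array of parameter values `theta`;
    `means`, `sigmas`, `weights` describe the mixture components.
    """
    theta = np.asarray(theta, dtype=float)
    comps = [w * np.exp(-0.5 * ((theta - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
             for m, s, w in zip(means, sigmas, weights)]
    return -np.log(np.sum(comps, axis=0))

x = np.linspace(-4.0, 4.0, 401)

# One component: classical quadratic regulariser, convex energy.
e_single = mixture_neg_log_prior(x, means=[0.0], sigmas=[1.0], weights=[1.0])

# Two well-separated components: bimodal prior, non-convex energy.
e_mix = mixture_neg_log_prior(x, means=[-2.0, 2.0],
                              sigmas=[0.5, 0.5], weights=[0.5, 0.5])

# Convexity check via discrete second differences along the grid.
print(bool(np.all(np.diff(e_single, 2) > -1e-9)))  # single component is convex
print(bool(np.any(np.diff(e_mix, 2) < -1e-9)))     # mixture is non-convex
```

Moving the component means closer together, or widening the covariances, smooths the energy back toward a single convex basin, which is the explicit control over non-convexity referred to above.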
Acknowledgements
The author was supported by a Postdoctoral Fellowship (Le 1014/1-1) from the Deutsche Forschungsgemeinschaft and an NSF/CISE Postdoctoral Fellowship at the Massachusetts Institute of Technology. Part of the work was done during the seminar "Statistical Physics of Neural Networks" at the Max-Planck-Institut für Physik komplexer Systeme, Dresden. The author also thanks Federico Girosi, Tomaso Poggio, Jörg Uhlig, and Achim Weiguny for discussions.