Abstract

Probabilistic programming is a general-purpose means of expressing probabilistic models as programs and automatically performing Bayesian inference. The vision of expressing probabilistic models as programs is elegant and unifying, and the construction of general-purpose Bayesian inference engines is attractive, as it can greatly improve access to machine learning algorithms. The reality, however, is somewhat different: existing inference algorithms have few guarantees on their results, and/or they only work on a restricted class of programs (models). We propose guaranteed (stochastic and sound) bounds on posterior distributions. Mathematically, they are super-/sub-additive measures that can be shown to sandwich the true posterior (of the program / model in question) with probability 1. They can be viewed as a (partial) correctness specification, and play a practically important role as diagnostics for the development of inference algorithms. Moreover, they work for a very broad class of probabilistic programs, and constitute the basis for a new general-purpose inference algorithm. We have built a tool implementation, called GuBPI, which automatically computes these posterior lower/upper bounds. Our evaluation on examples from the literature shows that the bounds are useful and can be used to recognise wrong outputs from stochastic posterior inference procedures.
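To illustrate the sandwiching idea in the abstract, the sketch below shows how guaranteed bounds on the *unnormalized* posterior mass of finitely many bins can be turned into guaranteed bounds on the *normalized* posterior probabilities. This is a minimal hypothetical example, not GuBPI's actual algorithm: the function `normalized_bounds` and its bin representation are assumptions introduced here purely for illustration.

```python
def normalized_bounds(lo, up):
    """Given guaranteed lower bounds `lo` and upper bounds `up` on the
    unnormalized posterior mass of each bin, return guaranteed intervals
    on the normalized posterior probability of each bin.

    Hypothetical sketch only; GuBPI's actual computation differs."""
    assert all(0.0 <= l <= u for l, u in zip(lo, up))
    total_lo, total_up = sum(lo), sum(up)
    bounds = []
    for i, (l, u) in enumerate(zip(lo, up)):
        # Lower bound: this bin at its minimum mass while all other bins
        # take their maximum, which maximizes the normalizing constant.
        denom_hi = l + (total_up - up[i])
        # Upper bound: this bin at its maximum mass while all other bins
        # take their minimum, which minimizes the normalizing constant.
        denom_lo = u + (total_lo - lo[i])
        p_lo = l / denom_hi if denom_hi > 0 else 0.0
        p_hi = u / denom_lo if denom_lo > 0 else 1.0
        bounds.append((p_lo, p_hi))
    return bounds

# Two bins whose true unnormalized masses lie in [0.2, 0.3] and [0.5, 0.6]:
# whatever the true masses are, the true normalized probability of each bin
# is sandwiched by the returned interval with certainty.
print(normalized_bounds([0.2, 0.5], [0.3, 0.6]))
```

Any stochastic inference procedure whose reported posterior probability for a bin falls outside the corresponding interval is provably wrong, which is the diagnostic use described above.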


Last modified: Mon Sep 26 16:03:45 CEST 2022