Title
Investigating Approaches to Model Reconciliation in PGMs for Explainable Decision Support
Abstract
In human-aware AI, a standard method for incorporating a human's expectations into the reasoning of an agent is model reconciliation: an unexpected plan produced by an agent is explained to the human by exposing the differences between the agent's model and the human's model on which the plans were computed. When transferring this idea to the setting of probabilistic inference in Bayesian networks, we need a method for comparing two Bayesian networks in order to expose the differences between them.
The aim of this work is therefore to analyse different methods for comparing two Bayesian networks B1 and B2. The defining feature of these networks is that B2 is derived from B1, for example by running structural EM on B1 or by training B1 on additional data, so that B2 differs from B1 enough for a comparison to be meaningful. One simple candidate comparison is sketched below.
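As a purely illustrative sketch of one possible comparison method (not the method prescribed by this topic), the structural Hamming distance between the DAGs of B1 and B2 counts the edge additions, deletions and reversals needed to turn one structure into the other. The edge lists and variable names below are hypothetical.

    # Minimal sketch: structural Hamming distance between two DAG structures.
    # Edges are given as (parent, child) tuples; the networks are hypothetical.

    def structural_hamming_distance(edges_b1, edges_b2):
        """Count edge additions, deletions and reversals needed to turn
        the DAG of B1 into the DAG of B2."""
        e1, e2 = set(edges_b1), set(edges_b2)
        # Edges of B1 that appear in B2 with the opposite orientation.
        reversed_edges = {(v, u) for (u, v) in e1} & e2
        added = e2 - e1 - reversed_edges
        removed = e1 - e2 - {(v, u) for (u, v) in reversed_edges}
        return len(added) + len(removed) + len(reversed_edges)

    # Hypothetical example: B2 reverses one edge of B1 and adds a new one.
    b1_edges = [("Rain", "WetGrass"), ("Sprinkler", "WetGrass")]
    b2_edges = [("WetGrass", "Rain"), ("Sprinkler", "WetGrass"), ("Season", "Sprinkler")]
    print(structural_hamming_distance(b1_edges, b2_edges))  # -> 2

Such a structure-level distance ignores differences in the conditional probability tables, which is exactly why the thesis should analyse and compare several notions of difference between B1 and B2.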
Requirements
Basic understanding of probabilistic modelling; knowledge of graph theory might help
Person working on it
Bastian Silder
Category
Bachelor thesis