A neural network computes a function. A central property of neural networks
is that they are “universal approximators”: for any continuous function,
there exists a neural network that can approximate it arbitrarily well, given
enough neurons (and some additional assumptions). In contrast, a Bayesian
network is a model, but each of its queries can be viewed as computing a
function. In this paper, we identify some key distinctions between the
functions computed by neural networks and those computed by marginal
Bayesian network queries, showing that the former are more expressive than
the latter. Moreover,
we propose a simple augmentation to Bayesian networks (a testing operator),
which enables their marginal queries to become “universal approximators.”
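To make the claim concrete, here is a minimal sketch (not from the paper; the network, CPT values, and soft-evidence parameterization are illustrative assumptions) of how a marginal Bayesian network query can be viewed as computing a function of its evidence. Note the form of the result: a quotient of expressions linear in the evidence parameter, rather than an arbitrary continuous function.

```python
# A minimal sketch (illustrative, not from the paper): a marginal query
# on a two-node network X -> Y, viewed as a function of soft evidence on X.

def marginal_query(x_evidence):
    """Return P(Y=1 | evidence), as a function of x_evidence in [0, 1],
    a soft-evidence weight on X=1 (x_evidence=1.0 is hard evidence X=1)."""
    p_x1 = 0.3                       # assumed prior P(X=1)
    p_y1_given_x = {0: 0.2, 1: 0.9}  # assumed CPT P(Y=1 | X)

    # Weight each state of X by the evidence, then normalize.
    # The result is a quotient of functions linear in x_evidence,
    # which is what limits the expressiveness of such queries.
    w0 = (1 - p_x1) * (1 - x_evidence)
    w1 = p_x1 * x_evidence
    return (w0 * p_y1_given_x[0] + w1 * p_y1_given_x[1]) / (w0 + w1)

print(marginal_query(1.0))  # hard evidence X=1 -> 0.9
print(marginal_query(0.5))  # uninformative evidence -> prior P(Y=1) = 0.41
```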
