On the Relative Expressiveness of Bayesian and Neural Networks. (arXiv:1812.08957v1 [cs.AI])

A neural network computes a function. A central property of neural networks
is that they are “universal approximators:” for a given continuous function,
there exists a neural network that can approximate it arbitrarily well, given
enough neurons (and some additional assumptions). In contrast, a Bayesian
network is a model, but each of its queries can be viewed as computing a
function. In this paper, we identify some key distinctions between the
functions computed by neural networks and those by marginal Bayesian network
queries, showing that the former are more expressive than the latter. Moreover,
we propose a simple augmentation to Bayesian networks (a testing operator),
which enables their marginal queries to become “universal approximators.”
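To make the contrast concrete, here is a minimal sketch (not from the paper; the functions and parameters are illustrative assumptions): a hand-built one-hidden-layer ReLU network that computes |x| exactly, versus a marginal query on a tiny two-node Bayesian network, which is a multilinear polynomial in the network's parameters.

```python
def relu(x):
    # Rectified linear unit, the standard nonlinearity.
    return max(0.0, x)

def nn_abs(x):
    # A one-hidden-layer ReLU network computing |x| exactly:
    # |x| = relu(x) + relu(-x). The universal-approximation property
    # says enough such units can approximate any continuous function
    # on a compact domain arbitrarily well.
    return relu(x) + relu(-x)

def marginal_p_y1(theta_x, theta_y_given_x):
    # Illustrative marginal query P(Y=1) on a two-node network X -> Y:
    #   P(Y=1) = sum_x P(X=x) * P(Y=1 | X=x)
    # Note this is a multilinear polynomial in the parameters, which is
    # the kind of restricted function class the paper contrasts with
    # neural networks.
    p_x1 = theta_x
    p_x0 = 1.0 - theta_x
    return p_x0 * theta_y_given_x[0] + p_x1 * theta_y_given_x[1]

print(nn_abs(-3.5))                   # 3.5 (exact)
print(marginal_p_y1(0.3, (0.2, 0.9)))  # ≈ 0.41
```

The point of the sketch is the shape of the two computations: the ReLU network composes nonlinearities, while the marginal query only sums products of parameters, which motivates the paper's proposed testing operator for closing the expressiveness gap.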
