Deep Bayesian Network for Visual Question Generation. (arXiv:2001.08779v1 [cs.CV])

Generating natural questions from an image is a semantic task that requires
using vision and language modalities to learn multimodal representations.
Images can have multiple visual and language cues such as places, captions, and
tags. In this paper, we propose a principled deep Bayesian learning framework
that combines these cues to produce natural questions. We observe that as more
cues are added and the uncertainty among them is minimized, the Bayesian
network becomes more confident. We propose Minimizing Uncertainty of Mixture
of Cues (MUMC), which minimizes the uncertainty present in a mixture of cue
experts for generating probabilistic questions. Within this Bayesian
framework, the generated questions show a remarkable similarity to natural
questions, as validated by a human study. Ablation studies of our model
indicate that any subset of the cues is inferior at this task, so the
principled fusion of all cues is preferred.
Further, we observe that the proposed approach substantially improves over
state-of-the-art benchmarks on quantitative metrics (BLEU-n, METEOR, ROUGE,
and CIDEr). A project link for Deep Bayesian VQG is provided.
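The claim that adding cues and minimizing their uncertainty makes the model more confident can be illustrated with a standard precision-weighted (product-of-Gaussians) fusion of expert outputs. This is only a minimal sketch of the general technique, not the paper's exact MUMC formulation; the function name and the cue values below are illustrative assumptions.

```python
import numpy as np

def fuse_cues(means, variances):
    """Precision-weighted (inverse-variance) fusion of cue expert outputs.

    Each cue expert i contributes a mean embedding means[i] and a
    per-dimension variance variances[i] expressing its uncertainty.
    Fusing them as a product of Gaussians yields a combined estimate
    whose variance is never larger than that of the most confident
    expert: adding cues can only increase confidence.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances                 # confidence of each cue
    fused_precision = precisions.sum(axis=0)     # confidences accumulate
    fused_mean = (precisions * means).sum(axis=0) / fused_precision
    fused_variance = 1.0 / fused_precision       # <= every input variance
    return fused_mean, fused_variance

# Two hypothetical cue experts (e.g. image and caption features) over a
# 3-dimensional embedding; values are made up for illustration.
img_mean, img_var = np.array([0.2, 0.8, 0.5]), np.array([0.1, 0.4, 0.2])
cap_mean, cap_var = np.array([0.4, 0.6, 0.5]), np.array([0.3, 0.1, 0.2])
mean, var = fuse_cues([img_mean, cap_mean], [img_var, cap_var])
```

Note that the fused variance is elementwise no larger than either cue's variance, which is the sense in which the mixture grows more confident as cues are added.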
