A Perceived Environment Design using a Multi-Modal Variational Autoencoder for learning Active-Sensing. (arXiv:1911.00584v1 [cs.RO])

This contribution describes how a multi-modal variational autoencoder, through
its interplay with an environment, yields a perceived environment on which an
agent can act. We conclude our work with a comparison to curiosity-driven
learning.
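The abstract does not detail how the multi-modal autoencoder fuses its sensory modalities, but a common choice in multi-modal VAEs is a product of Gaussian experts over the per-modality posteriors. The sketch below is an assumption-laden illustration of that fusion step, not the paper's method: the modality names, latent size, and Gaussian parameters are hypothetical.

```python
import numpy as np

def product_of_experts(mus, logvars):
    """Fuse per-modality Gaussian posteriors q(z|x_m) into a single
    Gaussian: precisions add, means are precision-weighted."""
    precisions = [np.exp(-lv) for lv in logvars]
    total_precision = sum(precisions)
    fused_var = 1.0 / total_precision
    fused_mu = fused_var * sum(p * m for p, m in zip(precisions, mus))
    return fused_mu, np.log(fused_var)

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

rng = np.random.default_rng(0)
# Hypothetical 4-dim latent with two modality encoders
# (e.g. vision and proprioception), each emitting (mu, logvar):
mu_v, lv_v = np.zeros(4), np.zeros(4)              # vision: N(0, 1)
mu_p, lv_p = np.ones(4), np.log(np.full(4, 0.5))   # proprio: N(1, 0.5)

mu, lv = product_of_experts([mu_v, mu_p], [lv_v, lv_p])
z = reparameterize(mu, lv, rng)  # fused latent "perceived environment" code
```

With these example parameters the fused posterior has mean 2/3 and variance 1/3 in every dimension: the lower-variance proprioceptive expert pulls the fused estimate toward its own mean, which is the intuition behind precision-weighted fusion.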


