AI/ML

Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning? (arXiv:1909.10618v1 [cs.LG])

Hierarchical reinforcement learning (RL) has demonstrated significant success
at solving difficult RL tasks. Prior work has motivated the use of hierarchy
by appealing to a number of intuitive benefits,
including learning over temporally extended transitions, exploring over
temporally extended periods, and training and exploring in a more semantically
meaningful action space, among others. However, in fully observed, Markovian
settings, it is not immediately clear why hierarchical RL should provide
benefits over standard “shallow” RL architectures. In this work, we isolate and
evaluate the claimed benefits of hierarchical RL on a suite of tasks
encompassing locomotion, navigation, and manipulation. Surprisingly, we find
that most of the observed benefits of hierarchy can be attributed to improved
exploration, as opposed to easier policy learning or imposed hierarchical
structures. Given this insight, we present exploration techniques inspired by
hierarchy that achieve performance competitive with hierarchical RL while at
the same time being much simpler to use and implement.
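To make the "temporally extended exploration" finding concrete, here is a minimal sketch of one hierarchy-inspired idea in that spirit: committing to a randomly chosen member of a policy ensemble for a random number of steps before switching, so exploratory behavior stays correlated over time the way a high-level HRL controller's decisions are. This is an illustrative assumption, not the paper's implementation; a classic Gym-style environment interface (`env.reset()`, `env.step(action)` returning a 4-tuple, `env.action_space.sample()`) is assumed throughout.

```python
import random

# Assumed Gym-style environment interface; none of this code is from the paper.

def per_step_exploration(policy, env, steps, eps=0.1):
    """Baseline "shallow" exploration: independent epsilon-greedy noise
    injected at every single timestep."""
    obs = env.reset()
    for _ in range(steps):
        if random.random() < eps:
            action = env.action_space.sample()  # uncorrelated random action
        else:
            action = policy(obs)
        obs, reward, done, info = env.step(action)
        if done:
            obs = env.reset()


def switching_ensemble_exploration(policies, env, steps, lo=10, hi=50):
    """Hierarchy-inspired exploration sketch: commit to one member of a
    policy ensemble for a random window of lo..hi steps before switching,
    so exploration is temporally extended rather than per-step."""
    obs = env.reset()
    active = random.choice(policies)
    remaining = random.randint(lo, hi)  # length of the current commitment
    for _ in range(steps):
        action = active(obs)
        obs, reward, done, info = env.step(action)
        remaining -= 1
        if done:
            obs = env.reset()
        if remaining == 0 or done:
            # Commitment window over: pick a (possibly different) policy.
            active = random.choice(policies)
            remaining = random.randint(lo, hi)
```

The design intuition is that a sequence of actions drawn from one consistent policy carries the agent much farther from its starting state than independent per-step noise, which tends to cancel itself out; the window lengths `lo` and `hi` here are illustrative hyperparameters.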
