AI/ML

The Actor-Advisor: Policy Gradient With Off-Policy Advice. (arXiv:1902.02556v1 [cs.AI])

Actor-critic algorithms learn an explicit policy (the actor) and an accompanying
value function (the critic). The actor performs actions in the environment, while
the critic evaluates the actor’s current policy. However, despite their
stability and promising convergence properties, current actor-critic algorithms
do not outperform critic-only ones in practice. We believe that the critic
learning Q^π, rather than the optimal Q-function Q*, prevents
state-of-the-art robust and sample-efficient off-policy learning algorithms
from being used. In this paper, we propose an elegant solution, the
Actor-Advisor architecture, in which a Policy Gradient actor learns from
unbiased Monte-Carlo returns while being shaped (or advised) by the Softmax
policy arising from an off-policy critic. The critic can be learned
independently of the actor, using any state-of-the-art algorithm. Advised
by a high-quality critic, the actor quickly and robustly learns the
task, while its use of the Monte-Carlo return helps overcome any bias the
critic may have. Beyond this new actor-critic formulation, the
Actor-Advisor, as a method that allows an external advisory policy to shape a
Policy Gradient actor, can be applied to many other domains. By varying the
source of advice, we demonstrate the wide applicability of the Actor-Advisor to
three other important subfields of RL: safe RL with backup policies, efficient
leverage of domain knowledge, and transfer learning in RL. Our experimental
results demonstrate the benefits of the Actor-Advisor compared to
state-of-the-art actor-critic methods, illustrate its applicability to the
three application scenarios listed above, and show that many important
challenges of RL can now be addressed with a single elegant solution.
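To make the shaping idea concrete, here is a minimal toy sketch of one plausible reading of the mechanism: the actor's softmax policy is mixed with the advisor's softmax-over-Q policy by element-wise product and renormalization, and a REINFORCE-style update with a Monte-Carlo return is applied to the mixed policy. This is an illustration under stated assumptions, not the paper's exact formulation; all names and numeric values (`actor_prefs`, `critic_q`, the temperature, the return `G`) are hypothetical.

```python
import numpy as np

def softmax(x, temp=1.0):
    # numerically stable softmax
    z = (x - np.max(x)) / temp
    e = np.exp(z)
    return e / e.sum()

def mix_policies(pi_actor, pi_advice):
    """Policy shaping (assumed form): element-wise product of the
    actor's and advisor's distributions, renormalized."""
    p = pi_actor * pi_advice
    return p / p.sum()

# toy single-state problem with 4 actions
actor_prefs = np.zeros(4)                   # actor's learned preferences (logits)
critic_q = np.array([0.1, 0.9, 0.2, 0.3])   # hypothetical off-policy critic Q-values

pi_actor = softmax(actor_prefs)
pi_advice = softmax(critic_q, temp=0.5)     # Softmax policy arising from the critic
pi_mixed = mix_policies(pi_actor, pi_advice)

rng = np.random.default_rng(0)
a = int(rng.choice(4, p=pi_mixed))          # act according to the shaped policy

# REINFORCE-style update using an (assumed) Monte-Carlo return G
G, alpha = 1.0, 0.1
grad_log = -pi_mixed                        # d log pi_mixed / d actor_prefs ...
grad_log[a] += 1.0                          # ... is one-hot(a) - pi_mixed (see note)
actor_prefs = actor_prefs + alpha * G * grad_log
```

Note that multiplying a softmax actor by the advice distribution is equivalent to adding the log of the advice to the actor's logits, so the usual softmax log-likelihood gradient (one-hot minus probabilities, taken on the mixed policy) still applies.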
