AI/ML

Accelerating Reinforcement Learning with a Directional-Gaussian-Smoothing Evolution Strategy. (arXiv:2002.09077v1 [cs.LG])



Evolution strategies (ES) have shown great promise in many challenging
reinforcement learning (RL) tasks, rivaling other state-of-the-art deep RL
methods. Yet, there are two limitations in current ES practice that may
hinder its further capabilities. First, most current methods rely on
Monte Carlo-type gradient estimators to suggest search directions, where the
policy parameters are, in general, randomly sampled. Due to the low accuracy of
such estimators, RL training may suffer from slow convergence and require
more iterations to reach an optimal solution. Second, the landscape of the reward
function can be deceptive and contain many local maxima, causing ES
algorithms to converge prematurely and fail to explore other parts of the
parameter space with potentially greater rewards. In this work, we employ a
Directional Gaussian Smoothing Evolution Strategy (DGS-ES) to accelerate RL
training, which is well suited to address these two challenges with its ability
to (i) provide gradient estimates with high accuracy, and (ii) find nonlocal
search directions that emphasize large-scale variation of the reward
function and disregard local fluctuations. Through several benchmark RL tasks
demonstrated herein, we show that DGS-ES is highly scalable, achieves superior
wall-clock time, and attains reward scores competitive with other popular
policy-gradient and ES approaches.
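
The idea behind the DGS gradient is to smooth the reward along a set of orthonormal directions and approximate each one-dimensional smoothed directional derivative with Gauss-Hermite quadrature instead of Monte Carlo sampling, then assemble the directional estimates into a full gradient. Below is a minimal NumPy sketch of such an estimator under these assumptions; the names (dgs_gradient, reward_fn), the choice of random orthonormal directions via a QR factorization, and the toy quadratic reward are illustrative placeholders, not the authors' reference implementation.

import numpy as np

def dgs_gradient(reward_fn, theta, sigma=1.0, num_quad=7, rng=None):
    """Directional-Gaussian-Smoothing gradient estimate (illustrative sketch).

    For each of d orthonormal directions xi_i, the derivative of the
    one-dimensional Gaussian-smoothed reward along xi_i is approximated
    with Gauss-Hermite quadrature; the d directional estimates are then
    assembled into a full gradient. reward_fn maps a parameter vector to
    a scalar reward (hypothetical interface).
    """
    rng = np.random.default_rng() if rng is None else rng
    d = theta.size
    # Random orthonormal directions: QR factorization of a Gaussian matrix.
    xi, _ = np.linalg.qr(rng.standard_normal((d, d)))
    # Gauss-Hermite nodes/weights for the weight function exp(-t^2).
    nodes, weights = np.polynomial.hermite.hermgauss(num_quad)

    grad = np.zeros(d)
    for i in range(d):
        direction = xi[:, i]
        # Smoothed directional derivative along xi_i:
        # D_i ~= (1 / (sqrt(pi) * sigma)) * sum_m w_m * sqrt(2) * t_m
        #        * F(theta + sqrt(2) * sigma * t_m * xi_i)
        vals = np.array([reward_fn(theta + np.sqrt(2.0) * sigma * t * direction)
                         for t in nodes])
        d_i = np.sum(weights * np.sqrt(2.0) * nodes * vals) / (np.sqrt(np.pi) * sigma)
        grad += d_i * direction
    return grad

# Toy usage (assumed setup): gradient ascent on a smooth quadratic reward.
if __name__ == "__main__":
    reward = lambda p: -np.sum(p ** 2)   # maximum at the origin
    theta = np.ones(10)
    for _ in range(50):
        theta += 0.1 * dgs_gradient(reward, theta, sigma=0.5)
    print("final reward:", reward(theta))

In this sketch, each direction needs only num_quad reward evaluations and the directions are independent of one another, so the evaluations can be distributed across workers, which is consistent with the scalability and wall-clock claims in the abstract.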
