RL-Based Method for Benchmarking the Adversarial Resilience and Robustness of Deep Reinforcement Learning Policies. (arXiv:1906.01110v1 [cs.LG])

This paper investigates the resilience and robustness of Deep Reinforcement
Learning (DRL) policies to adversarial perturbations in the state space. We
first present an approach for the disentanglement of vulnerabilities caused by
representation learning of DRL agents from those that stem from the sensitivity
of the DRL policies to distributional shifts in state transitions. Building on
this approach, we propose two RL-based techniques for quantitative benchmarking
of adversarial resilience and robustness in DRL policies against perturbations
of state transitions. We demonstrate the feasibility of our proposals through
experimental evaluation of resilience and robustness in DQN, A2C, and PPO2
policies trained in the CartPole environment.
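The core measurement behind such benchmarking, comparing a policy's average return with and without bounded perturbations of the states it observes, can be sketched as follows. Everything in this sketch is an illustrative stand-in: the paper evaluates trained DQN, A2C, and PPO2 policies in CartPole and derives perturbations with RL-based adversaries, whereas this toy uses a hypothetical 1-D environment, a hand-written threshold policy, and uniform random noise.

```python
import random

# Hedged sketch of return-degradation benchmarking under bounded state
# perturbations. All names here (ToyEnv, threshold_policy, mean_return)
# are illustrative assumptions, not the paper's implementation.

class ToyEnv:
    """1-D balancing task: each action nudges the state left or right;
    the episode ends once the state leaves [-0.5, 0.5]."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = 0.0

    def reset(self):
        self.state = self.rng.uniform(-0.05, 0.05)
        return self.state

    def step(self, action):
        # action 1 nudges the state right, action 0 nudges it left
        self.state += 0.1 if action == 1 else -0.1
        done = abs(self.state) > 0.5
        return self.state, 1.0, done  # reward 1 per surviving step


def threshold_policy(obs):
    """Push back toward the origin, based on the (possibly perturbed)
    observation."""
    return 0 if obs > 0 else 1


def mean_return(perturb_eps, episodes=20, horizon=100, seed=0):
    """Average episode return when the observation fed to the policy is
    corrupted by uniform noise of magnitude perturb_eps. Only the
    policy's input is perturbed; the true state transition is untouched,
    mirroring state-space (observation) perturbation attacks."""
    rng = random.Random(seed)
    returns = []
    for ep in range(episodes):
        env = ToyEnv(seed=ep)
        obs, total = env.reset(), 0.0
        for _ in range(horizon):
            noisy_obs = obs + rng.uniform(-perturb_eps, perturb_eps)
            obs, reward, done = env.step(threshold_policy(noisy_obs))
            total += reward
            if done:
                break
        returns.append(total)
    return sum(returns) / len(returns)


clean = mean_return(0.0)     # no perturbation: the policy balances forever
attacked = mean_return(0.5)  # bounded noise can flip the policy's action
print(f"mean return clean: {clean:.1f}, perturbed: {attacked:.1f}")
```

The gap between the clean and perturbed returns is the robustness signal such a benchmark reports; the paper replaces the random noise above with perturbations chosen by a trained RL adversary, which probes worst-case rather than average-case degradation.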
