This paper investigates the resilience and robustness of Deep Reinforcement
Learning (DRL) policies to adversarial perturbations in the state space. We
first present an approach for disentangling vulnerabilities that arise from the
representation learning of DRL agents from those that stem from the sensitivity
of DRL policies to distributional shifts in state transitions. Building on
this approach, we propose two RL-based techniques for quantitative benchmarking
of adversarial resilience and robustness in DRL policies against perturbations
of state transitions. We demonstrate the feasibility of our proposals through an
experimental evaluation of resilience and robustness in DQN, A2C, and PPO2
policies trained in the CartPole environment.
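To illustrate the kind of benchmarking described above, the sketch below measures how a policy's return degrades when its observations are adversarially perturbed. The environment, policy, and perturbation here are minimal stand-ins of our own invention, not the paper's actual techniques or environments:

```python
import random

def evaluate(policy, perturb=None, episodes=20, steps=10, seed=0):
    """Roll out `policy` in a toy sign-matching environment and return the
    mean episodic return; `perturb`, if given, corrupts the observation
    before the policy sees it (the true state still drives the reward)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        for _ in range(steps):
            state = rng.choice([-1.0, 1.0])            # true environment state
            obs = perturb(state, rng) if perturb else state
            action = policy(obs)                       # agent acts on the observation
            total += 1.0 if action == (1 if state > 0 else 0) else 0.0
    return total / episodes

def sign_policy(obs):
    """Optimal policy on clean observations: match the sign of the state."""
    return 1 if obs > 0 else 0

def flip_perturb(p):
    """Simple state-space perturbation: flip the observation's sign with prob p."""
    def f(state, rng):
        return -state if rng.random() < p else state
    return f

clean = evaluate(sign_policy)
attacked = evaluate(sign_policy, perturb=flip_perturb(0.3))
print(clean, attacked)  # perturbed return falls below the clean return
```

A resilience benchmark in this spirit would sweep the perturbation strength (here `p`) and report the return degradation curve for each trained policy.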
