Certified Adversarial Robustness for Deep Reinforcement Learning. (arXiv:1910.12908v1 [cs.RO])

Deep Neural Network-based systems are now the state-of-the-art in many
robotics tasks, but their application in safety-critical domains remains
dangerous without formal guarantees on network robustness. Small perturbations
to sensor inputs (from noise or adversarial examples) are often enough to
change network-based decisions; such perturbations have already been shown to
cause an autonomous vehicle to swerve into oncoming traffic. In light of these
dangers, numerous algorithms have been developed as defenses against
adversarial inputs, some of which provide formal robustness guarantees or
certificates.
This work leverages research on certified adversarial robustness to develop an
online certified defense for deep reinforcement learning algorithms. The
proposed defense computes guaranteed lower bounds on state-action values during
execution to identify and choose the optimal action under a worst-case
deviation in input space due to possible adversaries or noise. The approach is
demonstrated on a Deep Q-Network policy and is shown to increase robustness to
noise and adversaries in pedestrian collision avoidance scenarios and a classic
control task.
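The core idea — compute a guaranteed lower bound on each state-action value over the set of inputs an adversary could have perturbed, then act greedily with respect to those lower bounds — can be sketched with a simple certification method. The snippet below is a minimal illustration, not the paper's implementation: it assumes interval bound propagation through a small ReLU Q-network with randomly initialized weights (`W1`, `b1`, `W2`, `b2` are placeholders) and an l-infinity perturbation ball of radius `eps`.

```python
import numpy as np

# Hypothetical 2-layer Q-network (state_dim=4, hidden=8, 3 actions).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

def interval_affine(lo, hi, W, b):
    """Propagate an elementwise interval [lo, hi] through an affine layer."""
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad          # worst-case growth of the interval
    return mid_out - rad_out, mid_out + rad_out

def certified_action(state, eps):
    """Choose the action maximizing the guaranteed lower bound on Q
    over all inputs within an l_inf ball of radius eps around state."""
    lo, hi = state - eps, state + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    return int(np.argmax(lo)), lo      # worst-case-optimal action, Q lower bounds

state = rng.normal(size=4)
q_nom = W2 @ np.maximum(W1 @ state + b1, 0.0) + b2   # nominal Q-values
a_rob, q_lb = certified_action(state, eps=0.1)
```

Because the nominal input always lies inside the interval, each lower bound `q_lb[a]` is a sound underestimate of the true `Q(s', a)` for any perturbed input `s'` in the ball; picking `argmax` of the lower bounds therefore guards against the worst-case deviation, at the cost of some conservatism relative to nominal action selection.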
