The desire to use reinforcement learning in safety-critical settings has
inspired recent interest in formal methods for learning algorithms. Existing
formal methods for learning and optimization primarily consider the problem of
constrained learning or constrained optimization. Given a single correct model
and associated safety constraint, these approaches guarantee efficient learning
while provably avoiding behaviors outside the safety constraint. Acting well
given an accurate environmental model is an important prerequisite for safe
learning, but is ultimately insufficient for systems that operate in complex
heterogeneous environments. This paper introduces verification-preserving model
updates, the first approach toward obtaining formal safety guarantees for
reinforcement learning in settings where multiple environmental models must be
taken into account. Through a combination of design-time model updates and
runtime model falsification, we provide a first approach toward obtaining
formal safety proofs for autonomous systems acting in heterogeneous
environments.
