Wasserstein Auto-encoded MDPs: Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees. (arXiv:2303.12558v2 [cs.LG] UPDATED)
cs.LG updates on arXiv.org
Although deep reinforcement learning (DRL) has many success stories, the
large-scale deployment of policies learned through these advanced techniques in
safety-critical scenarios is hindered by their lack of formal guarantees.
Variational Markov decision processes (VAE-MDPs) are discrete latent-space
models that provide a reliable framework for distilling formally verifiable
controllers from any RL policy. While the related guarantees address relevant
practical aspects such as the satisfaction of performance and safety
properties, the VAE approach suffers from several learning flaws (posterior …
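The core idea sketched in the abstract, distilling a policy into a small discrete latent model on which properties can be formally checked, can be illustrated with a toy example. This is a hypothetical sketch, not the paper's algorithm: the 6-state chain, the fixed state grouping, and the uniform weighting of grouped states all stand in for the learned VAE-MDP encoder.

```python
import numpy as np

# Toy sketch (hypothetical, not the paper's method): distil a policy acting
# on a 6-state MDP into a 3-state latent model by grouping states, then
# model-check "probability of reaching the unsafe state within 10 steps"
# on the small latent model.

rng = np.random.default_rng(0)

S, A = 6, 2                      # concrete states and actions
UNSAFE = 5                       # concrete unsafe state

# Random concrete transition kernel P[s, a] -> distribution over next states.
P = rng.dirichlet(np.ones(S), size=(S, A))

# A fixed stochastic policy to distil: pi[s] -> distribution over actions.
pi = rng.dirichlet(np.ones(A), size=S)

# Closed-loop concrete chain: M[s, s'] = sum_a pi[s, a] * P[s, a, s'].
M = np.einsum('sa,sat->st', pi, P)

# Discrete latent abstraction: group states {0,1}->z0, {2,3}->z1, {4,5}->z2.
phi = np.array([0, 0, 1, 1, 2, 2])
Z = 3

# Lift the chain to the latent space, weighting grouped states uniformly
# (a crude stand-in for the learned encoder/decoder of a VAE-MDP).
Mz = np.zeros((Z, Z))
for z in range(Z):
    members = np.flatnonzero(phi == z)
    # Average the outgoing rows, then sum columns into latent blocks.
    Mz[z] = np.add.reduceat(M[members].mean(axis=0), [0, 2, 4])

# Verify on the latent model: probability of hitting the unsafe latent
# block within 10 steps, via value iteration with the block made absorbing.
unsafe_z = phi[UNSAFE]
p = np.zeros(Z)
p[unsafe_z] = 1.0
for _ in range(10):
    p = Mz @ p
    p[unsafe_z] = 1.0            # unsafe block is absorbing

print(f"latent unsafe-reachability estimate from z0: {p[0]:.3f}")
```

Because the latent model has only three states, the reachability computation is exact and cheap; obtaining such guarantees on the original policy is what motivates distillation in the first place.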