March 22, 2024, 4:42 a.m. | Daniel Mayfrank, Na Young Ahn, Alexander Mitsos, Manuel Dahmen

cs.LG updates on arXiv.org

arXiv:2403.14425v1 Announce Type: new
Abstract: We present a method for end-to-end learning of Koopman surrogate models for optimal performance in control. In contrast to previous contributions that employ standard reinforcement learning (RL) algorithms, we use a training algorithm that exploits the potential differentiability of environments based on mechanistic simulation models. We evaluate the performance of our method by comparing it to that of other combinations of controller type and training algorithm on an eNMPC case study known from the literature. Our method exhibits …
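The core idea the abstract describes, differentiating a control objective through a mechanistic simulator to train a Koopman surrogate end to end, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the damped-pendulum environment, the tanh lifting, the linear policy K, the consistency penalty, and all dimensions are illustrative assumptions chosen to keep the example self-contained.

```python
# Minimal sketch (not the paper's code): train a Koopman surrogate and a
# linear policy end to end by backpropagating a rollout cost through a
# differentiable mechanistic environment. All names and shapes are assumptions.
import jax
import jax.numpy as jnp

STATE_DIM, LIFT_DIM, CTRL_DIM, HORIZON = 2, 8, 1, 20

def init_params(key):
    k1, k2, k3 = jax.random.split(key, 3)
    return {
        "W": jax.random.normal(k1, (LIFT_DIM, STATE_DIM)) * 0.1,  # encoder weights
        "A": jnp.eye(LIFT_DIM),                                   # Koopman operator
        "B": jax.random.normal(k2, (LIFT_DIM, CTRL_DIM)) * 0.1,   # input matrix
        "K": jax.random.normal(k3, (CTRL_DIM, LIFT_DIM)) * 0.1,   # policy on lifted state
    }

def lift(params, x):
    # Learned lifting into the Koopman observable space.
    return jnp.tanh(params["W"] @ x)

def env_step(x, u):
    # Differentiable stand-in for a mechanistic model: a damped pendulum.
    dt = 0.05
    dx = jnp.array([x[1], -jnp.sin(x[0]) - 0.1 * x[1] + u[0]])
    return x + dt * dx

def rollout_cost(params, x0):
    def step(x, _):
        z = lift(params, x)
        u = params["K"] @ z                      # control from the lifted state
        x_next = env_step(x, u)                  # gradient flows through the env
        stage = jnp.sum(x_next**2) + 0.01 * jnp.sum(u**2)
        # Consistency penalty keeps the surrogate predictive: lift(x') ≈ A z + B u.
        pred = params["A"] @ z + params["B"] @ u
        stage += jnp.sum((lift(params, x_next) - pred) ** 2)
        return x_next, stage
    _, stages = jax.lax.scan(step, x0, None, length=HORIZON)
    return jnp.sum(stages)

@jax.jit
def train_step(params, x0, lr=1e-2):
    loss, grads = jax.value_and_grad(rollout_cost)(params, x0)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, loss

params = init_params(jax.random.PRNGKey(0))
x0 = jnp.array([1.0, 0.0])
for _ in range(200):
    params, loss = train_step(params, x0)
print("final rollout cost:", float(loss))
```

Differentiating the rollout in this way replaces the high-variance gradient estimates of standard RL with exact analytic gradients of the control cost, which is the contrast the abstract draws with previous RL-based contributions.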

