Nov. 3, 2022, 1:11 a.m. | Yao Feng, Yuhong Jiang, Hang Su, Dong Yan, Jun Zhu

cs.LG updates on arXiv.org

Model-based reinforcement learning usually suffers from high sample
complexity when training the world model, especially for environments with
complex dynamics. To make training for general physical environments more
efficient, we introduce Hamiltonian canonical ordinary differential equations
into the learning process, which inspire a novel neural ordinary
differential auto-encoder (NODA). NODA naturally models the physical world
and can flexibly impose Hamiltonian mechanics (e.g., the dimension of the
physical equations), which can further accelerate …
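To make the Hamiltonian canonical equations concrete: they state that a system with Hamiltonian H(q, p) evolves as dq/dt = ∂H/∂p and dp/dt = -∂H/∂q. The sketch below integrates these equations for a simple 1-D harmonic oscillator with a symplectic leapfrog step; it only illustrates the canonical structure the abstract refers to, and is not the NODA model itself (the function names and the toy Hamiltonian are illustrative assumptions).

```python
# Hedged sketch: Hamiltonian canonical ODEs for a 1-D harmonic oscillator,
# H(q, p) = p^2/2 + q^2/2 (unit mass and stiffness), integrated with a
# symplectic leapfrog scheme. Illustrative only; not the paper's NODA model.

def dH_dp(p):
    # dq/dt = dH/dp; for H = p^2/2 this is just p (the velocity)
    return p

def dH_dq(q):
    # dp/dt = -dH/dq; for H = q^2/2, dH/dq = q (the restoring force)
    return q

def leapfrog(q, p, dt, steps):
    """Advance (q, p) under the canonical equations with leapfrog steps."""
    for _ in range(steps):
        p -= 0.5 * dt * dH_dq(q)   # half kick
        q += dt * dH_dp(p)         # drift
        p -= 0.5 * dt * dH_dq(q)   # half kick
    return q, p

q0, p0 = 1.0, 0.0
qf, pf = leapfrog(q0, p0, dt=0.01, steps=1000)
energy0 = 0.5 * (q0**2 + p0**2)
energyf = 0.5 * (qf**2 + pf**2)
print(abs(energyf - energy0))  # symplectic integration keeps energy drift small
```

A symplectic integrator is the natural choice here because it preserves the phase-space structure of Hamiltonian dynamics, which is the same inductive bias the abstract suggests exploiting when learning a world model.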

