Nov. 5, 2023, 6:45 a.m. | Hugo Frezat, Ronan Fablet, Guillaume Balarac, Julien Le Sommer

cs.LG updates on arXiv.org

In this paper, we propose a generic algorithm to train machine learning-based
subgrid parametrizations online, i.e., with $\textit{a posteriori}$ loss
functions, for non-differentiable numerical solvers. The proposed approach
leverages neural emulators to train an approximation of the reduced state-space
solver, which is then used to allow gradient propagation through temporal
integration steps. The algorithm is able to recover most of the benefits of
online strategies without having to compute the gradient of the original
solver. It is demonstrated that training …
