Nov. 23, 2022, 2:12 a.m. | Danimir T. Doncevic, Alexander Mitsos, Yue Guo, Qianxiao Li, Felix Dietrich, Manuel Dahmen, Ioannis G. Kevrekidis

cs.LG updates on arXiv.org

Meta-learning of numerical algorithms for a given task consists of the
data-driven identification and adaptation of an algorithmic structure and the
associated hyperparameters. To limit the complexity of the meta-learning
problem, neural architectures with a certain inductive bias towards favorable
algorithmic structures can, and should, be used. We generalize our previously
introduced Runge-Kutta neural network to a recursively recurrent neural network
(R2N2) superstructure for the design of customized iterative algorithms. In
contrast to off-the-shelf deep learning approaches, it features a …
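The excerpt stops mid-sentence, but the structure it describes, an outer recurrence over solver iterations wrapped around an inner, Runge-Kutta-like recurrence over stages with learnable combination weights, can be sketched roughly as follows. This is a minimal illustration under those assumptions, not the paper's actual implementation; the class name R2N2Sketch, the parameters A, b, h, and the choice of PyTorch are all hypothetical.

```python
import torch
import torch.nn as nn


class R2N2Sketch(nn.Module):
    """Hypothetical sketch of a recursively recurrent superstructure:
    an outer recurrence repeats a learned iteration, and an inner
    recurrence builds stages from previous stage evaluations, giving a
    Runge-Kutta-like inductive bias. The actual R2N2 architecture in the
    paper may differ in its parameterization and details."""

    def __init__(self, n_stages: int):
        super().__init__()
        # Learnable stage-combination weights, analogous to a Butcher tableau.
        self.A = nn.Parameter(torch.zeros(n_stages, n_stages))
        self.b = nn.Parameter(torch.full((n_stages,), 1.0 / n_stages))
        self.h = nn.Parameter(torch.tensor(0.1))  # learnable step size

    def forward(self, f, x0: torch.Tensor, n_iters: int) -> torch.Tensor:
        x = x0
        # Outer recurrence: apply the learned iteration n_iters times.
        for _ in range(n_iters):
            stages = []
            # Inner recurrence: each stage combines earlier stage evaluations.
            for i in range(len(self.b)):
                xi = x + self.h * sum(self.A[i, j] * stages[j] for j in range(i))
                stages.append(f(xi))
            # Combine all stages into the next iterate.
            x = x + self.h * sum(self.b[i] * stages[i] for i in range(len(self.b)))
        return x


# Usage: apply the (untrained) two-stage scheme to the vector field f(x) = cos(x).
model = R2N2Sketch(n_stages=2)
x_final = model(torch.cos, torch.tensor([1.0]), n_iters=5)
```

With A initialized to zero and b uniform, the sketch reduces to a forward-Euler-like update; training the weights would then adapt the iteration to a given task, which is the kind of inductive bias the abstract alludes to.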

algorithms architecture arxiv iterative network neural network recurrent neural network
