Oct. 31, 2022, 1:12 a.m. | Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna

cs.LG updates on arXiv.org

To understand the training dynamics of neural networks (NNs), prior studies
have considered the infinite-width mean-field (MF) limit of two-layer NNs,
establishing theoretical guarantees of its convergence under gradient flow
training as well as its approximation and generalization capabilities. In this
work, we study the infinite-width limit of a type of three-layer NN model whose
first layer is random and fixed. To define the limiting model rigorously, we
generalize the MF theory of two-layer NNs by treating the neurons as …
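To make the setup concrete, the following is a minimal NumPy sketch of the model class the abstract describes: a three-layer network whose first layer is random and fixed, with mean-field (1/m) scaling on the trainable output layer, trained by discretized gradient flow (plain gradient descent). All dimensions, the tanh nonlinearity, the synthetic data, and the learning-rate scaling are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

d, d1, m = 5, 64, 256   # input dim, random-feature width, trainable width
n = 100                 # number of training samples

# First layer: random and fixed (never trained), as in the abstract's setup.
W0 = rng.normal(size=(d1, d)) / np.sqrt(d)

def features(X):
    # Fixed random-feature map phi(x) = tanh(W0 x).
    return np.tanh(X @ W0.T)

# Trainable second and third layers, with mean-field 1/m output scaling.
W = rng.normal(size=(m, d1))
a = rng.normal(size=m)

def forward(X):
    H = np.tanh(features(X) @ W.T)   # hidden activations, shape (n, m)
    return H @ a / m                 # mean-field normalization 1/m

# Synthetic regression data (illustrative only).
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])

def loss(X, y):
    r = forward(X) - y
    return 0.5 * np.mean(r ** 2)

loss0 = loss(X, y)

# Discretized gradient flow on (W, a); the step size is scaled by m so that
# each neuron (row of W, entry of a) moves at an O(1) rate as m grows,
# which is the regime where MF dynamics are nontrivial.
lr = 0.5 * m
for _ in range(200):
    Phi = features(X)
    H = np.tanh(Phi @ W.T)
    r = (H @ a / m - y) / n                        # per-sample loss gradient
    grad_a = H.T @ r / m
    grad_W = (np.outer(r, a / m) * (1 - H ** 2)).T @ Phi
    a -= lr * grad_a
    W -= lr * grad_W

loss1 = loss(X, y)
print(loss0, loss1)
```

Under the 1/m scaling, each neuron's contribution to the output is infinitesimal as m grows, so the empirical distribution of the rows of (W, a) becomes the natural object of study, which is what motivates treating the neurons as a measure in the MF limit.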

