Web: http://arxiv.org/abs/2106.03194

Jan. 27, 2022, 2:11 a.m. | Saber Jafarpour, Alexander Davydov, Anton V. Proskurnikov, Francesco Bullo

cs.LG updates on arXiv.org arxiv.org

Implicit neural networks, a.k.a. deep equilibrium networks, are a class of
implicit-depth learning models in which function evaluation is performed by
solving a fixed-point equation. They generalize classic feedforward models and
are equivalent to infinite-depth weight-tied feedforward networks. While
implicit models offer improved accuracy and a significant reduction in memory
consumption, they can suffer from ill-posedness and convergence instability.
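The fixed-point evaluation described above can be sketched in a few lines. The update map below (a weight-tied layer z = tanh(Wz + Ux + b), iterated to convergence) is a common illustrative choice, not the specific parametrization from this paper; the names `W`, `U`, `b` and the contraction scaling are assumptions for the sketch.

```python
import numpy as np

def implicit_layer(x, W, U, b, tol=1e-8, max_iter=1000):
    """Evaluate an implicit layer by iterating z = tanh(Wz + Ux + b) to a fixed point."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z  # may not have converged; real implementations should flag this

rng = np.random.default_rng(0)
n, m = 4, 3
# Scale W so the update is a contraction, which guarantees a unique fixed point
# (well-posedness is exactly what can fail for general implicit models).
W = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
U = rng.standard_normal((n, m))
b = rng.standard_normal(n)
x = rng.standard_normal(m)

z_star = implicit_layer(x, W, U, b)
# At convergence, z_star satisfies the fixed-point equation up to tolerance.
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x + b))
```

Because the forward pass is a solve rather than a stack of layers, only the equilibrium `z_star` needs to be stored, which is the source of the memory savings mentioned above.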


This paper provides a new framework, which we call Non-Euclidean Monotone
Operator Network (NEMON), to design well-posed and robust implicit neural
networks …
