Aug. 19, 2022, 1:11 a.m. | Aleksandr Beknazaryan

cs.LG updates on arXiv.org

We show that $d$-variate polynomials of degree $R$ can be represented on
$[0,1]^d$ as shallow neural networks of width
$d+1+\sum_{r=2}^R\binom{r+d-1}{d-1}\left[\binom{r+d-1}{d-1}+1\right]$. Also, using the
shallow neural network (SNN) representation of localized Taylor polynomials of
univariate $C^\beta$-smooth functions, we derive for shallow networks the
minimax optimal rate of convergence, up to a logarithmic factor, to an unknown
univariate regression function.
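
The width bound in the abstract can be evaluated numerically. Below is a minimal Python sketch (not from the paper) that computes $d+1+\sum_{r=2}^R\binom{r+d-1}{d-1}\left[\binom{r+d-1}{d-1}+1\right]$ for given $d$ and $R$; the function name is an illustrative choice, and the binomial coefficients use the standard library's math.comb.

```python
from math import comb

def shallow_network_width(d: int, R: int) -> int:
    """Width bound from the abstract for representing d-variate
    polynomials of degree R on [0,1]^d by a shallow network."""
    return d + 1 + sum(
        comb(r + d - 1, d - 1) * (comb(r + d - 1, d - 1) + 1)
        for r in range(2, R + 1)
    )

# Example values (derived from the formula, not quoted from the paper):
# d = 1, R = 3: 2 + (1*2 + 1*2) = 6
print(shallow_network_width(1, 3))  # 6
# d = 2, R = 2: 3 + 3*(3 + 1) = 15
print(shallow_network_width(2, 2))  # 15
```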

