Web: http://arxiv.org/abs/2111.07964

June 16, 2022, 1:11 a.m. | Zuowei Shen, Haizhao Yang, Shijun Zhang

cs.LG updates on arXiv.org

One argument for the success of deep learning is the powerful approximation
capacity of deep neural networks. Such capacity, however, generally comes with
explosive growth in the number of parameters, which in turn leads to high
computational costs. It is therefore of great interest to ask whether
successful deep learning can be achieved with a small number of learnable
parameters adapting to the target function. From an approximation perspective,
this paper shows that the number of parameters …
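As a rough back-of-the-envelope illustration of the parameter growth the abstract refers to (not the paper's construction), a fully connected ReLU network of width N and depth L has on the order of N^2 * L weights. The count_params helper and the specific widths below are illustrative assumptions.

# Rough sketch: how parameter counts grow with network width and depth.
# The helper and the architecture are illustrative assumptions, not the
# construction analyzed in the paper.

def count_params(widths):
    """Weights + biases of a dense network with layer widths
    [d_in, N, ..., N, d_out]."""
    return sum(w_in * w_out + w_out
               for w_in, w_out in zip(widths, widths[1:]))

d, N, L = 1, 1000, 10              # input dim, width, depth (assumed values)
widths = [d] + [N] * L + [1]
print(count_params(widths))        # 9012001, i.e. roughly N^2 * L weights

Even at a modest width of 1000 and depth of 10, the count already exceeds nine million, which is the "explosive growth" motivating the question of how few learnable parameters can suffice.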

