Web: http://arxiv.org/abs/2101.05612

Jan. 27, 2022, 2:11 a.m. | Shaosheng Xu, Jinde Cao, Yichao Cao, Tong Wang

cs.LG updates on arXiv.org

As the gradient descent method in deep learning gives rise to a series of issues,
this paper proposes a novel gradient-free deep learning structure. By adding a
new module to the traditional Self-Organizing Map and introducing residuals into
the map, a Deep Valued Self-Organizing Map network is constructed. An analysis
of the convergence performance of such a Deep Valued Self-Organizing Map
network is also proved in this paper, giving an inequality that relates the designed
parameters to the dimension of the inputs and the loss of …
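
The paper builds on the classic Self-Organizing Map, which is trained competitively rather than by gradient descent. As a rough illustration of that gradient-free baseline only (not the paper's Deep Valued SOM, whose added module and residual structure are not reproduced here), the sketch below implements a standard single-layer SOM update in NumPy; the grid size, decay schedules, and the function name train_som are illustrative assumptions.

# Minimal sketch of a classic single-layer Self-Organizing Map,
# trained without gradients via best-matching-unit competition.
import numpy as np

def train_som(X, grid_h=10, grid_w=10, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Fit a 2-D SOM to data X of shape (n_samples, dim), gradient-free."""
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    # One weight vector per grid node, randomly initialized.
    W = rng.normal(size=(grid_h, grid_w, dim))
    # Grid coordinates, used for neighborhood distances on the map.
    coords = np.stack(
        np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1
    )
    for epoch in range(epochs):
        # Decay the learning rate and neighborhood radius over time.
        lr = lr0 * np.exp(-epoch / epochs)
        sigma = sigma0 * np.exp(-epoch / epochs)
        for x in X[rng.permutation(n)]:
            # Best-matching unit: node whose weight is closest to the sample.
            dists = np.linalg.norm(W - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the BMU on the grid.
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))
            # Competitive, gradient-free update: pull weights toward the sample.
            W += lr * h[..., None] * (x - W)
    return W

# Usage: map 500 three-dimensional points onto a 10x10 grid.
if __name__ == "__main__":
    X = np.random.default_rng(1).random((500, 3))
    W = train_som(X)
    print(W.shape)  # (10, 10, 3)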

