Web: http://arxiv.org/abs/2102.07003

Jan. 24, 2022, 2:10 a.m. | Emmanouil Theodosis, Bahareh Tolooshams, Pranay Tankala, Abiy Tasissa, Demba Ba

cs.LG updates on arXiv.org

Recent approaches in the theoretical analysis of model-based deep learning
architectures have studied the convergence of gradient descent in shallow ReLU
networks that arise from generative models whose hidden layers are sparse.
Motivated by the success of architectures that impose structured forms of
sparsity, we introduce and study a group-sparse autoencoder that accounts for a
variety of generative models, and utilizes a group-sparse ReLU activation
function to force the non-zero units at a given layer to occur in blocks. For …
