Web: http://arxiv.org/abs/2201.11653

Jan. 28, 2022, 2:11 a.m. | Jinhyun Park

cs.LG updates on arXiv.org

From the perspective of the human brain, continual learning means performing
various tasks without mutual interference. An effective way to reduce such
interference can be found in the sparsity and selectivity of neurons. According
to Aljundi et al. and Hadsell et al., imposing sparsity at the representational
level is advantageous for continual learning, because sparse neuronal
activations encourage less overlap between parameters and therefore less
interference. Similarly, highly selective neural networks are likely to induce
less interference, since particular …
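The abstract's argument is that sparse activations make different tasks use less overlapping sets of units, and hence less overlapping parameters. As a concrete but hypothetical illustration of representational sparsity (not the paper's own method), the PyTorch sketch below enforces it with a k-winners-take-all activation that keeps only the k largest activations per sample; the layer sizes and k value are made up for the example.

```python
import torch
import torch.nn as nn

class KWinnersTakeAll(nn.Module):
    """Keep the k largest activations per sample; zero the rest.

    Sparse representations like this reduce overlap between the units
    (and hence the parameters) used by different tasks, which is the
    interference-reduction argument sketched in the abstract.
    """
    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Indices of the k largest activations along the feature dimension.
        topk = torch.topk(x, self.k, dim=-1)
        mask = torch.zeros_like(x)
        mask.scatter_(-1, topk.indices, 1.0)
        return x * mask  # all other units are silenced

# Hypothetical usage: a hidden layer where at most ~5% of units fire.
layer = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), KWinnersTakeAll(k=25))
h = layer(torch.randn(32, 784))
# At most 25 of 512 units are active per sample (ReLU may zero a few more).
print((h != 0).float().mean())
```

Selectivity, the other property the abstract mentions, is a different quantity: it measures how strongly each unit prefers one class or task over others, whereas the mask above only controls how many units are active at once.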

Tags: arxiv, learning, network, neural, neural network, representation, sparsity
