Weight Sparsity Complements Activity Sparsity in Neuromorphic Language Models
May 2, 2024, 4:42 a.m. | Rishav Mukherji, Mark Schöne, Khaleelulla Khan Nazeer, Christian Mayr, David Kappel, Anand Subramoney
cs.LG updates on arXiv.org arxiv.org
Abstract: Activity and parameter sparsity are two standard methods of making neural networks computationally more efficient. Event-based architectures such as spiking neural networks (SNNs) naturally exhibit activity sparsity, and many methods exist to sparsify their connectivity by pruning weights. While the effect of weight pruning on feed-forward SNNs has been previously studied for computer vision tasks, the effects of pruning for complex sequence tasks like language modeling are less well studied since SNNs have traditionally struggled …
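The two kinds of sparsity the abstract contrasts can be made concrete with a small sketch. Below, unstructured magnitude pruning (a standard weight-sparsification method, not necessarily the one used in the paper) zeroes the smallest-magnitude weights, while a simple threshold "neuron" illustrates activity sparsity: only inputs that cross the threshold produce a nonzero (spike-like) output. The function name and thresholds are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning).

    This is generic magnitude pruning for illustration; the paper may use a
    different pruning criterion or schedule.
    """
    w = weights.copy()
    k = int(sparsity * w.size)
    if k > 0:
        # k-th smallest absolute value over the flattened matrix
        threshold = np.partition(np.abs(w), k - 1, axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))

# Weight sparsity: most connections are removed from the matrix itself.
W_pruned = magnitude_prune(W, 0.9)
weight_sparsity = np.mean(W_pruned == 0)

# Activity sparsity: a thresholded (spiking-style) nonlinearity makes most
# outputs exactly zero, so downstream work scales with the spike count.
x = rng.normal(size=256)
pre = W_pruned @ x
spikes = (pre > 1.0).astype(float)  # toy threshold neuron, no temporal dynamics
activity_sparsity = np.mean(spikes == 0)
```

The point of the sketch is that the two mechanisms compose: a pruned weight matrix skips multiplications per connection, while sparse activity skips entire columns of the remaining computation whenever a neuron stays silent.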