Feb. 13, 2024, 5:45 a.m. | Boyan Li, Luziwei Leng, Ran Cheng, Shuaijie Shen, Kaixuan Zhang, Jianguo Zhang, Jianxing Liao

cs.LG updates on arXiv.org

Advancements in adapting deep convolution architectures for Spiking Neural Networks (SNNs) have significantly enhanced image classification performance and reduced computational burdens. However, the inability of Multiplication-Free Inference (MFI) to harmonize with attention and transformer mechanisms, which are critical to superior performance on high-resolution vision tasks, imposes limitations on these gains. To address this, our research explores a new pathway, drawing inspiration from the progress made in Multi-Layer Perceptrons (MLPs). We propose an innovative spiking MLP architecture that uses batch normalization …
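For readers unfamiliar with the MFI property at the center of this work: because spiking activations are binary, a downstream linear layer can be evaluated with accumulations (additions) alone, and batch normalization folds into the preceding affine transform at inference time. The sketch below illustrates this idea in PyTorch; the surrogate gradient, class names, and layer sizes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (an assumed,
    common choice; the paper's exact neuron model may differ)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        # BN centers the membrane potential, so threshold at 0.
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the threshold (rectangular window).
        return grad_out * (v.abs() < 0.5).float()

class SpikingMLPBlock(nn.Module):
    """A hypothetical MLP block: Linear -> BN -> spike, twice.

    Outputs are binary, so the next Linear layer's matmul reduces to
    additions at inference (the Multiplication-Free Inference property);
    each BN can also be folded into its preceding Linear after training.
    """
    def __init__(self, dim, hidden):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.bn1 = nn.BatchNorm1d(hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.bn2 = nn.BatchNorm1d(dim)

    def forward(self, x):  # x: (batch, dim)
        s = SpikeFn.apply(self.bn1(self.fc1(x)))
        return SpikeFn.apply(self.bn2(self.fc2(s)))

x = torch.randn(8, 64)
out = SpikingMLPBlock(64, 128)(x)
print(out.shape, out.unique())  # binary (0/1) spike outputs
```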

Tags: architectures, attention, classification, computational, convolution, cs.lg, cs.ne, free, image, inference, layer, limitations, networks, neural networks, performance, spiking neural networks, tasks, transformer, vision
