March 22, 2024, 4:43 a.m. | Xinyu Shi, Zecheng Hao, Zhaofei Yu

cs.LG updates on arXiv.org

arXiv:2403.14302v1 Announce Type: cross
Abstract: The remarkable success of Vision Transformers in Artificial Neural Networks (ANNs) has led to growing interest in incorporating the self-attention mechanism and transformer-based architectures into Spiking Neural Networks (SNNs). While existing methods propose spiking self-attention mechanisms compatible with SNNs, they lack reasonable scaling methods, and the overall architectures they propose suffer from a bottleneck in effectively extracting local features. To address these challenges, we propose a novel spiking self-attention mechanism …
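
For readers unfamiliar with the idea, the sketch below shows what a spiking self-attention block can look like in PyTorch. This is a generic, minimal illustration, not the mechanism proposed in the paper: the single head, single time step, hard threshold value, absence of a surrogate gradient, and the 1/sqrt(d) scaling are all assumptions made here for brevity. The key property it demonstrates is that spike-coded Q, K, and V are binary and non-negative, so the attention map needs no softmax, and a scaling factor is what keeps the accumulated values bounded.

```python
import torch
import torch.nn as nn


class SpikingSelfAttention(nn.Module):
    """Generic spiking self-attention sketch (single head, one time step).

    Q, K, V are binarized into spikes by a hard threshold, so Q @ K^T is
    non-negative and the usual softmax can be dropped; a fixed 1/sqrt(dim)
    scale (an assumption here) keeps activations bounded. Layer sizes and
    the threshold are illustrative, not values from the paper.
    """

    def __init__(self, dim: int, threshold: float = 1.0):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.out_proj = nn.Linear(dim, dim, bias=False)
        self.threshold = threshold
        self.scale = dim ** -0.5  # assumed scaling choice

    def spike(self, x: torch.Tensor) -> torch.Tensor:
        # Hard threshold: emit a spike (1.0) where input crosses threshold.
        # Training would need a surrogate gradient; omitted in this sketch.
        return (x >= self.threshold).float()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim), assumed to be spike-coded features
        q = self.spike(self.q_proj(x))
        k = self.spike(self.k_proj(x))
        v = self.spike(self.v_proj(x))
        attn = q @ k.transpose(-2, -1) * self.scale  # non-negative map, no softmax
        out = self.spike(attn @ v)
        return self.out_proj(out)


if __name__ == "__main__":
    x = (torch.rand(2, 16, 64) > 0.5).float()  # toy binary spike input
    block = SpikingSelfAttention(dim=64)
    print(block(x).shape)  # torch.Size([2, 16, 64])
```

Because spikes are binary, the matrix products above reduce to sparse accumulations, which is why scaling, rather than softmax normalization, becomes the critical design choice the abstract refers to.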
