Web: http://arxiv.org/abs/2201.10879

Jan. 27, 2022, 2:10 a.m. | Kazuma Suetake, Shin-ichi Ikegawa, Ryuji Saiin, Yoshihide Sawada

cs.LG updates on arXiv.org

As neural networks grow in scale, techniques that let them run with low computational cost and high energy efficiency are increasingly required. To meet these demands, various efficient neural network paradigms, such as spiking neural networks (SNNs) and binary neural networks (BNNs), have been proposed. However, these paradigms have serious drawbacks, such as degraded inference accuracy and increased latency. To solve these problems, we propose the single-step neural network (S$^2$NN), an energy-efficient neural network with low computational cost and high precision.
The proposed …
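To illustrate why paradigms like BNNs trade accuracy for efficiency, here is a minimal NumPy sketch (not from the paper; the layer sizes and `binarize` helper are illustrative assumptions) contrasting a full-precision forward pass, which needs multiply-accumulate operations, with a binarized one, which reduces to additions and subtractions of ±1 values:

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    """Map values to {-1, +1}; zero is sent to +1 (illustrative convention)."""
    return np.where(x >= 0, 1.0, -1.0)

# A toy fully connected layer: 8 inputs, 4 outputs.
x = rng.standard_normal(8)          # full-precision input activations
W = rng.standard_normal((4, 8))     # full-precision weights

full_out = W @ x                     # ordinary MAC-based forward pass
bin_out = binarize(W) @ binarize(x)  # binary pass: only +/-1 additions

print(full_out)
print(bin_out)
```

Each entry of `bin_out` is a sum of eight ±1 terms, so the arithmetic is cheap, but the coarse quantization is exactly the source of the accuracy degradation the abstract refers to.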

