Jan. 27, 2022, 2:10 a.m. | Kazuma Suetake, Shin-ichi Ikegawa, Ryuji Saiin, Yoshihide Sawada

cs.LG updates on arXiv.org

As neural networks grow in scale, techniques that let them run with low
computational cost and high energy efficiency are required. To meet these
demands, various efficient neural network paradigms, such as spiking neural
networks (SNNs) and binary neural networks (BNNs), have been proposed. However,
they suffer from notable drawbacks, such as degraded inference accuracy and
increased latency. To address these problems, we propose the single-step neural
network (S$^2$NN), an energy-efficient neural network with low computational
cost and high precision. The proposed …
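To make the BNN paradigm mentioned above concrete, here is a minimal NumPy sketch of a binarized activation with a straight-through estimator (STE) for its gradient. This illustrates the generic efficient-network building block, not the paper's S$^2$NN method itself, whose details are truncated from this abstract; the function names and the clipping threshold are illustrative assumptions.

```python
import numpy as np

def binary_forward(x):
    """Forward pass: quantize activations to {-1, +1} via the sign function.

    Illustrative BNN-style activation; not the paper's S^2NN method.
    """
    return np.where(x >= 0.0, 1.0, -1.0)

def binary_backward_ste(x, grad_out, clip=1.0):
    """Backward pass: the straight-through estimator passes incoming
    gradients through unchanged where |x| <= clip and zeroes them elsewhere,
    since the true gradient of sign() is zero almost everywhere."""
    return grad_out * (np.abs(x) <= clip)

x = np.array([-1.5, -0.3, 0.0, 0.7, 2.0])
y = binary_forward(x)                         # [-1., -1., 1., 1., 1.]
g = binary_backward_ste(x, np.ones_like(x))   # [0., 1., 1., 1., 0.]
```

The STE is the standard trick that makes such hard quantizers trainable with backpropagation; latency in SNNs, by contrast, typically comes from simulating many discrete time steps, which is the cost a single-step formulation aims to remove.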

