Sept. 22, 2022, 1:12 a.m. | Dominique J. Kösters, Bryan A. Kortman, Irem Boybat, Elena Ferro, Sagar Dolas, Roberto de Austri, Johan Kwisthout, Hans Hilgenkamp, Theo Rasing,

cs.LG updates on arXiv.org

The massive use of artificial neural networks (ANNs), increasingly popular in
many areas of scientific computing, rapidly increases the energy consumption of
modern high-performance computing systems. An appealing and possibly more
sustainable alternative is provided by novel neuromorphic paradigms, which
directly implement ANNs in hardware. However, little is known about the actual
benefits of running ANNs on neuromorphic hardware for use cases in scientific
computing. Here we present a methodology for measuring the energy cost and
compute time for inference …
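The paper's actual methodology is not shown in this truncated snippet. As a rough illustration of the compute-time half of such a benchmark (energy measurement would additionally need hardware counters, which are not sketched here), a minimal wall-clock inference-latency harness might look like the following; the function names and parameters are hypothetical, not the authors' code:

```python
import statistics
import time

def measure_inference_latency(infer, sample, warmup=10, repeats=100):
    """Median wall-clock time of a single-sample inference call.

    `infer` stands in for any model's forward pass; warmup iterations
    are discarded to exclude cold-start effects (caches, JIT, etc.).
    """
    for _ in range(warmup):
        infer(sample)
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        infer(sample)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# Example with a trivial stand-in "model":
latency = measure_inference_latency(lambda x: sum(x), list(range(1000)))
```

Comparing such per-inference timings (together with measured energy per inference) across conventional and neuromorphic hardware is the kind of apples-to-apples comparison the abstract describes.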

arxiv benchmarking computing energy latency neuromorphic neuromorphic computing physics
