Nov. 1, 2022, 1:11 a.m. | Shikhar Jaiswal, Rahul Kiran Kranti Goli, Aayan Kumar, Vivek Seshadri, Rahul Sharma

cs.LG updates on arXiv.org arxiv.org

Running machine learning inference on tiny devices, known as TinyML, is an
emerging research area. It requires generating inference code that uses
memory frugally, something standard ML frameworks are ill-suited for. A
deployment framework for TinyML must a) be parametric in the number
representation to take advantage of emerging representations like posits,
b) carefully assign high precision to a few tensors so that most tensors can
be kept in low precision while still maintaining model accuracy, and c) …

arxiv inference microcontrollers
