May 2, 2022, 1:11 a.m. | Berkin Akin, Suyog Gupta, Yun Long, Anton Spiridonov, Zhuo Wang, Marie White, Hao Xu, Ping Zhou, Yanqi Zhou

cs.LG updates on arXiv.org

On-device ML accelerators are becoming standard in modern mobile
systems-on-chip (SoCs). Neural architecture search (NAS) helps
efficiently utilize the high compute throughput offered by these
accelerators. However, existing NAS frameworks have several practical
limitations when scaling to multiple tasks and different target platforms. In
this work, we provide a two-pronged approach to this challenge: (i) a
NAS-enabling infrastructure that decouples model cost evaluation, search space
design, and the NAS algorithm to rapidly target various on-device …
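The decoupling the abstract describes can be pictured as three swappable components: a search space, a cost model for the target device, and the search algorithm itself. The sketch below is a minimal, hypothetical illustration of that structure (all class and function names are my own, not from the paper), using random search as a stand-in for the actual NAS algorithm:

```python
import random

# Hypothetical sketch of a decoupled NAS setup: search space design,
# model cost evaluation, and the search algorithm are separate,
# independently replaceable pieces. Names are illustrative only.

class SearchSpace:
    """Search space design: which architectures are legal."""
    def __init__(self, depths, widths):
        self.depths = depths
        self.widths = widths

    def sample(self, rng):
        return {"depth": rng.choice(self.depths),
                "width": rng.choice(self.widths)}

class CostModel:
    """Model cost evaluation: a toy latency estimate for one target device."""
    def __init__(self, per_layer_latency_ms):
        self.per_layer_latency_ms = per_layer_latency_ms

    def latency_ms(self, arch):
        return arch["depth"] * self.per_layer_latency_ms * (arch["width"] / 64)

def random_search(space, cost_model, accuracy_proxy, budget_ms,
                  trials=100, seed=0):
    """NAS algorithm: best-scoring candidate that fits the latency budget."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = space.sample(rng)
        if cost_model.latency_ms(arch) > budget_ms:
            continue  # reject candidates that exceed the device budget
        score = accuracy_proxy(arch)
        if score > best_score:
            best, best_score = arch, score
    return best

# Toy accuracy proxy: deeper/wider scores higher in this sketch.
space = SearchSpace(depths=[4, 8, 12], widths=[32, 64, 128])
cost = CostModel(per_layer_latency_ms=0.5)
best = random_search(space, cost, lambda a: a["depth"] * a["width"],
                     budget_ms=5.0)
print(best)
```

Because the three pieces only interact through narrow interfaces, retargeting a new accelerator would mean swapping in a different `CostModel`, and trying a new task would mean swapping the search space or proxy, without touching the search loop.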

arxiv edge ml neural architectures tpus
