Speech Understanding on Tiny Devices with A Learning Cache
May 9, 2024, 4:42 a.m. | Afsara Benazir (University of Virginia), Zhiming Xu (University of Virginia), Felix Xiaozhu Lin (University of Virginia)
cs.LG updates on arXiv.org arxiv.org
Abstract: This paper addresses spoken language understanding (SLU) on microcontroller-like embedded devices, integrating on-device execution with cloud offloading in a novel fashion. We leverage temporal locality in the speech inputs to a device and reuse recent SLU inferences accordingly. Our idea is simple: let the device match incoming inputs against cached results, and only offload inputs not matched to any cached ones to the cloud for full inference. Realization of this idea, however, is non-trivial: the …
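The core idea in the abstract — match each incoming input against cached SLU results and offload only cache misses to the cloud — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class names, the cosine-similarity matching, the fixed threshold, and the FIFO eviction are all assumptions introduced here.

```python
# Hedged sketch of the cache-or-offload scheme described in the abstract.
# All names, the similarity metric, and the eviction policy are
# illustrative assumptions, not the paper's actual design.
import math


def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class SLUCache:
    """On-device cache of recent SLU inferences (hypothetical structure)."""

    def __init__(self, threshold=0.95, capacity=32):
        self.threshold = threshold   # minimum similarity to count as a hit
        self.capacity = capacity     # bounded memory for a tiny device
        self.entries = []            # (features, result) pairs, newest last

    def lookup(self, features):
        """Return a cached result if a sufficiently similar input exists."""
        for cached_features, result in reversed(self.entries):
            if cosine_similarity(features, cached_features) >= self.threshold:
                return result
        return None

    def insert(self, features, result):
        self.entries.append((features, result))
        if len(self.entries) > self.capacity:
            self.entries.pop(0)  # evict the oldest entry


def understand(features, cache, cloud_infer):
    """Serve from the cache when possible; otherwise offload to the cloud."""
    result = cache.lookup(features)
    if result is not None:
        return result, "cache"
    result = cloud_infer(features)   # full inference, done off-device
    cache.insert(features, result)   # reuse exploits temporal locality
    return result, "cloud"
```

Under this sketch, a repeated command ("turn on the lights" said twice in a row) would hit the cache on the second utterance and skip the cloud round trip, which is the temporal-locality effect the abstract leverages.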