Feb. 26, 2024, 5:48 a.m. | Jorge Askur Vazquez Fernandez, Jae Joong Lee, Santiago Andrés Serrano Vacca, Alejandra Magana, Bedrich Benes, Voicu Popescu

cs.CL updates on arXiv.org

arXiv:2402.15083v1 Announce Type: cross
Abstract: The paper introduces Hands-Free VR, a voice-based natural-language interface for VR. The user issues a command by voice; the speech audio is transcribed by a speech-to-text deep learning model fine-tuned for robustness to phonetic word similarity and to spoken English accents; and the transcript is then mapped to an executable VR command by a large language model that is robust to natural-language diversity. Hands-Free VR was evaluated in a …
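The abstract describes a two-stage pipeline: audio is first transcribed, and the transcript is then normalized to one command from a fixed executable vocabulary. A minimal Python sketch of that flow is below; the function names, command set, and keyword-matching stand-in for the LLM step are all illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of the two-stage Hands-Free VR pipeline from the abstract.
# Both stages are stubbed: a real system would run a fine-tuned ASR model
# and an LLM here. The command vocabulary is a hypothetical example.

VR_COMMANDS = {"grab_object", "release_object", "teleport", "open_menu"}


def speech_to_text(audio: bytes) -> str:
    """Stand-in for the fine-tuned speech-to-text model (stage 1).

    A real implementation would run ASR inference on the raw audio;
    this stub returns a fixed transcript for demonstration.
    """
    return "grab the red cube"


def text_to_command(utterance: str) -> str:
    """Stand-in for the LLM mapping step (stage 2).

    Free-form natural language is collapsed onto one command from the
    fixed, executable vocabulary, tolerating varied phrasings.
    """
    text = utterance.lower()
    if any(w in text for w in ("grab", "pick", "take")):
        return "grab_object"
    if any(w in text for w in ("release", "drop", "let go")):
        return "release_object"
    if any(w in text for w in ("teleport", "go to")):
        return "teleport"
    return "open_menu"  # fallback: surface the menu for clarification


def handle_voice_input(audio: bytes) -> str:
    """Full pipeline: audio -> transcript -> executable VR command."""
    transcript = speech_to_text(audio)
    command = text_to_command(transcript)
    assert command in VR_COMMANDS  # only known commands reach the VR runtime
    return command


if __name__ == "__main__":
    print(handle_voice_input(b"\x00"))
```

The design point the abstract emphasizes survives even in this toy version: the open-ended transcript never reaches the VR runtime directly; only a member of a closed command set does.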
