June 5, 2023, 9:22 a.m. | /u/d_q_n

Machine Learning | www.reddit.com

[https://github.com/VinAIResearch/XPhoneBERT](https://github.com/VinAIResearch/XPhoneBERT)

XPhoneBERT is the first multilingual model pre-trained to learn phoneme representations for the downstream text-to-speech (TTS) task. Our XPhoneBERT has the same model architecture as BERT-base and is trained using the RoBERTa pre-training approach on 330M phoneme-level sentences from nearly 100 languages and locales. Employing XPhoneBERT as an input phoneme encoder significantly boosts the naturalness and prosody of a strong neural TTS model, and also helps produce fairly high-quality speech with limited training data.

XPhoneBERT can be …
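Since the snippet above is cut off, here is a minimal sketch of how one might use the pre-trained checkpoint as a phoneme encoder via Hugging Face `transformers`. The checkpoint name `vinai/xphonebert-base` and the pre-phonemized input string are assumptions; check the repository README for the exact model identifier and the companion text-to-phoneme conversion tool.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint name on the Hugging Face Hub; verify against the repo README.
MODEL_NAME = "vinai/xphonebert-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

# XPhoneBERT consumes phoneme sequences rather than raw text, so the input
# here is assumed to be already phonemized (the repo provides a tool for this).
phonemes = "ð ɪ s ɪ z ɐ t ɛ s t"  # illustrative placeholder transcription

inputs = tokenizer(phonemes, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual per-phoneme embeddings, usable as input features for a
# neural TTS model's encoder (hidden size 768, matching BERT-base).
phoneme_embeddings = outputs.last_hidden_state  # shape: (1, seq_len, 768)
```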
