BERT, can HE predict contrastive focus? Predicting and controlling prominence in neural TTS using a language model. (arXiv:2207.01718v1 [cs.CL])
July 6, 2022, 1:11 a.m. | Brooke Stephenson, Laurent Besacier, Laurent Girin, Thomas Hueber
cs.CL updates on arXiv.org
Several recent studies have tested the use of transformer language model
representations to infer prosodic features for text-to-speech synthesis (TTS).
While these studies have explored prosody in general, in this work, we look
specifically at the prediction of contrastive focus on personal pronouns. This
is a particularly challenging task as it often requires semantic, discursive
and/or pragmatic knowledge to predict correctly. We collect a corpus of
utterances containing contrastive focus and evaluate the accuracy of a BERT
model, fine-tuned …
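The feed item carries no code, but the task the abstract describes maps onto a standard token-classification fine-tune of BERT. Below is a minimal sketch, assuming a HuggingFace-style workflow; the checkpoint name, binary label scheme, and example sentence are illustrative assumptions, not taken from the paper.

```python
# Sketch (not the authors' code): BERT as a token-level classifier for
# contrastive focus, to be fine-tuned on focus-annotated utterances.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "bert-base-uncased"  # assumption: any BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Hypothetical label scheme: 0 = no focus, 1 = contrastive focus
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=2)

sentence = "I didn't break it, HE did."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, 2)

# Per-subword probability of focus. Before fine-tuning, the classification
# head is randomly initialized, so these numbers are meaningless; training
# on a focus-labeled corpus (as in the paper) is what makes them predictive.
probs = logits.softmax(dim=-1)[0, :, 1]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, p in zip(tokens, probs):
    print(f"{tok:12s} focus_prob={p:.2f}")
```

In a TTS pipeline, the predicted focus probability for a pronoun token could then be passed downstream as a prominence control signal for the synthesizer, which is the "predicting and controlling" pairing the title points at.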
Jobs in AI, ML, Big Data
(373) Applications Manager – Business Intelligence - BSTD
@ South African Reserve Bank | South Africa
Data Engineer Talend (experienced/senior) - M/F - Permanent contract
@ Talan | Paris, France
Data Science Intern (Summer)
@ BetterSleep | Montreal, Quebec, Canada
Director - Master Data Management (REMOTE)
@ Wesco | Pittsburgh, PA, United States
Architect Systems BigData REF2649A
@ Deutsche Telekom IT Solutions | Budapest, Hungary
Data Product Coordinator
@ Nestlé | São Paulo, São Paulo, BR, 04730-000