May 26, 2022, 1:12 a.m. | Yichao Du, Zhirui Zhang, Weizhi Wang, Boxing Chen, Jun Xie, Tong Xu

cs.CL updates on arXiv.org arxiv.org

End-to-end speech-to-text translation (E2E-ST) is becoming increasingly popular due to its potential for reduced error propagation, lower latency, and fewer parameters. Given the triplet training corpus $\langle speech, transcription, translation\rangle$, the conventional high-quality E2E-ST system leverages the $\langle speech, transcription\rangle$ pair to pre-train the model and then uses the $\langle speech, translation\rangle$ pair to optimize it further. However, each stage involves only two-tuple data, and this loose coupling fails to fully exploit the association between triplet …
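
The two-stage recipe described in the abstract can be pictured as pre-training and fine-tuning the same encoder-decoder on different text targets. Below is a minimal, hedged sketch of that conventional pipeline; the toy GRU model, dummy tensors, and hyperparameters are illustrative assumptions, not the architecture or method from the paper.

```python
# Illustrative two-stage E2E-ST training recipe (pre-train on ASR pairs, then
# fine-tune on ST pairs). All module names and shapes here are hypothetical.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Toy speech encoder + text decoder standing in for a real E2E-ST model."""
    def __init__(self, feat_dim=80, vocab_size=1000, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, speech, target_tokens):
        _, h = self.encoder(speech)            # encode speech features
        dec_in = self.embed(target_tokens)
        dec_out, _ = self.decoder(dec_in, h)   # condition decoder on speech
        return self.out(dec_out)               # token logits

def train_step(model, optimizer, speech, tokens):
    """One cross-entropy update on a <speech, text> pair."""
    logits = model(speech, tokens[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = Seq2Seq()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: pre-train on <speech, transcription> (ASR-style objective).
speech = torch.randn(4, 50, 80)                  # dummy filterbank features
transcription = torch.randint(0, 1000, (4, 12))  # dummy source-language tokens
train_step(model, optimizer, speech, transcription)

# Stage 2: fine-tune the *same* parameters on <speech, translation>.
translation = torch.randint(0, 1000, (4, 12))    # dummy target-language tokens
train_step(model, optimizer, speech, translation)
```

Note how each stage touches only a two-tuple slice of the triplet corpus, which is exactly the loose coupling the abstract points out.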

arxiv speech translation
