Jan. 31, 2024, 4:41 p.m. | Yifan Peng, Jinchuan Tian, William Chen, Siddhant Arora, Brian Yan, Yui Sudo, Muhammad Shakeel, Kwanghee Choi, Jiatong Shi, Xuankai Chang, Jee-weon Jung

cs.CL updates on arXiv.org

Recent studies have advocated for fully open foundation models to promote
transparency and open science. As an initial step, the Open Whisper-style
Speech Model (OWSM) reproduced OpenAI's Whisper using publicly available data
and open-source toolkits. Because the aim was to reproduce Whisper, the previous
OWSM v1 through v3 models were still based on the standard Transformer, which might
lead to inferior performance compared with other state-of-the-art speech encoders. In
this work, we aim to improve the performance and efficiency of OWSM without
extra …

