Feb. 13, 2024, 5:48 a.m. | Hsuan-Fu Wang, Yi-Jen Shih, Heng-Jui Chang, Layne Berry, Puyuan Peng, Hung-yi Lee, Hsin-Min Wang, Dav

cs.CL updates on arXiv.org

The recently proposed visually grounded speech model SpeechCLIP is an innovative framework that bridges speech and text through images via CLIP, without relying on text transcriptions. Building on this framework, this paper introduces two extensions to SpeechCLIP. First, we apply the Continuous Integrate-and-Fire (CIF) module to replace the fixed number of CLS tokens in the cascaded architecture. Second, we propose a new hybrid architecture that merges the cascaded and parallel architectures of SpeechCLIP into a multi-task learning framework. Our experimental evaluation …

Tags: cs.CL, cs.SD, eess.AS, CLIP, continuous integrate-and-fire, framework, image data, representation learning, speech, text transcription
