May 3, 2024, 4:15 a.m. | Liam Lonergan, Mengjie Qian, Neasa Ní Chiaráin, Christer Gobl, Ailbhe Ní Chasaide


arXiv:2405.01293v1 Announce Type: new
Abstract: This paper explores the use of Hybrid CTC/Attention encoder-decoder models trained with Intermediate CTC (InterCTC) for Irish (Gaelic) low-resource speech recognition (ASR) and dialect identification (DID). Results are compared to the current best-performing models trained for ASR (TDNN-HMM) and DID (ECAPA-TDNN). An optimal InterCTC setting is initially established using a Conformer encoder. This setting is then used to train a model with an E-branchformer encoder, and the performance of both architectures is compared. A …
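To illustrate the training objective the abstract refers to, below is a minimal PyTorch sketch of how a hybrid CTC/attention loss can be combined with an Intermediate CTC auxiliary loss. The class name, loss weights, and tensor shapes are assumptions for illustration only; the paper's actual implementation, encoder architectures, and hyperparameters are not reproduced here.

```python
import torch
import torch.nn as nn


class HybridCTCAttentionLoss(nn.Module):
    """Sketch of a hybrid CTC/attention objective with Intermediate CTC.

    Weights and shapes are illustrative assumptions, not the paper's settings.
    """

    def __init__(self, ctc_weight=0.3, interctc_weight=0.5, blank_id=0, pad_id=-100):
        super().__init__()
        self.ctc_weight = ctc_weight          # interpolation between CTC and attention losses
        self.interctc_weight = interctc_weight  # share of CTC loss taken by intermediate layers
        self.ctc = nn.CTCLoss(blank=blank_id, zero_infinity=True)
        self.att = nn.CrossEntropyLoss(ignore_index=pad_id)

    def forward(self, att_logits, ctc_log_probs, inter_log_probs_list,
                targets, decoder_targets, input_lengths, target_lengths):
        # Attention-decoder loss: token-level cross-entropy.
        # att_logits: (batch, dec_len, vocab), decoder_targets: (batch, dec_len)
        loss_att = self.att(att_logits.transpose(1, 2), decoder_targets)

        # Final-layer CTC loss over encoder frames.
        # ctc_log_probs: (enc_len, batch, vocab)
        loss_ctc = self.ctc(ctc_log_probs, targets, input_lengths, target_lengths)

        # Intermediate CTC: the same CTC objective applied to log-probs
        # taken from intermediate encoder layers, averaged across layers.
        if inter_log_probs_list:
            loss_inter = sum(
                self.ctc(lp, targets, input_lengths, target_lengths)
                for lp in inter_log_probs_list
            ) / len(inter_log_probs_list)
            loss_ctc = (1 - self.interctc_weight) * loss_ctc + self.interctc_weight * loss_inter

        # Hybrid interpolation of CTC and attention objectives.
        return self.ctc_weight * loss_ctc + (1 - self.ctc_weight) * loss_att
```

In this sketch the intermediate CTC branches share the target sequence with the final CTC layer, which is the usual InterCTC formulation; how many intermediate layers are tapped and with what weight is exactly the kind of "optimal InterCTC setting" the abstract says the authors establish empirically.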

