Nov. 9, 2023, 3:22 a.m. | Adnan Hassan

MarkTechPost www.marktechpost.com

Hugging Face researchers have tackled the problem of deploying large pre-trained speech recognition models in resource-constrained environments. They did so by building a substantial open-source dataset through pseudo-labelling, then using it to distil a smaller version of the Whisper model, called Distil-Whisper. The Whisper speech recognition transformer model was pre-trained on 680,000 hours […]
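The summary above is truncated, but the general recipe it describes is knowledge distillation: a large teacher model (Whisper) produces pseudo-labels and output distributions, and a smaller student is trained to match them. The toy sketch below illustrates only the core soft-target loss in pure Python; it is a hypothetical illustration of distillation in general, not Distil-Whisper's actual training code, and the function names are invented for this example.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions.

    Minimising this pushes the small student's output distribution over
    tokens toward the large teacher's, which is the essence of distillation.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy example: the student roughly agrees with the teacher, so the loss is small.
teacher = [4.0, 1.0, 0.5]
student = [3.5, 1.2, 0.4]
print(distillation_loss(teacher, student))
```

In practice (and in the Distil-Whisper setup described above), the teacher's transcriptions of a large audio corpus serve as pseudo-labels, so the student can be trained at scale without human-annotated transcripts.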


The post Hugging Face Researchers Introduce Distil-Whisper: A Compact Speech Recognition Model Bridging the Gap in High-Performance, Low-Resource Environments appeared first on MarkTechPost.

