June 16, 2022, 2:30 p.m. | Synced (syncedreview.com)

In the new paper Toward a Realistic Model of Speech Processing in the Brain with Self-supervised Learning, researchers show that self-supervised architectures such as Wav2Vec 2.0 can learn brain-like representations from as little as 600 hours of unlabeled speech, and that they acquire both sound-generic and speech- and language-specific representations similar to those of the prefrontal and temporal cortices.
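To make the training setup concrete: Wav2Vec 2.0 is pretrained with a contrastive objective, where a context vector at a masked time step must identify the true quantized latent among sampled distractors. The following is a minimal NumPy sketch of that objective on toy vectors; the dimensions, temperature value, and helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # Cosine similarity with a small epsilon for numerical safety.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def contrastive_loss(context, target, distractors, temperature=0.1):
    """-log softmax probability that the context vector picks the true
    (quantized) latent over K distractor latents, as in Wav2Vec 2.0
    style pretraining (toy version)."""
    sims = np.array(
        [cosine(context, target)] + [cosine(context, d) for d in distractors]
    ) / temperature
    # log-sum-exp for numerical stability.
    return float(-(sims[0] - np.log(np.sum(np.exp(sims - sims.max()))) - sims.max()))

dim, K = 16, 5
c = rng.normal(size=dim)             # context-network output at a masked step
q = c + 0.1 * rng.normal(size=dim)   # true quantized latent (close to context)
negs = rng.normal(size=(K, dim))     # distractors from other time steps

loss = contrastive_loss(c, q, negs)
print(loss)
```

Because the true latent is far more similar to the context vector than the random distractors are, the loss is close to zero here; with harder distractors it grows, which is what drives the encoder to learn discriminative speech representations.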

The post Wav2Vec 2.0 Learns Brain-Like Representations From Just 600 Hours of Unlabeled Speech Data in New Study first appeared on Synced.

