Exploring the Joint Use of Rehearsal and Knowledge Distillation in Continual Learning for Spoken Language Understanding. (arXiv:2211.08161v1 [eess.AS])
Nov. 16, 2022, 2:12 a.m. | Umberto Cappellazzo, Daniele Falavigna, Alessio Brutti
cs.LG updates on arXiv.org
Continual learning refers to a dynamic framework in which a model or agent
receives a stream of non-stationary data over time and must adapt to new data
while preserving previously acquired knowledge. Unfortunately, deep neural
networks fail to meet these two desiderata, incurring the so-called
catastrophic forgetting phenomenon. While a vast array of strategies has been
proposed to attenuate forgetting in the computer vision domain, there is a
dearth of work on speech-related tasks. In this …
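The excerpt stops before the method details, so as background only, here is a minimal PyTorch sketch of how rehearsal (replaying stored samples) and knowledge distillation (matching the outputs of a frozen copy of the previous-task model) are commonly combined into one training objective in continual learning. The function name, the `alpha` and `T` hyperparameters, and the memory-buffer handling are illustrative assumptions, not the paper's actual recipe.

```python
import torch
import torch.nn.functional as F

def rehearsal_kd_loss(model, teacher, batch, memory_batch, alpha=0.5, T=2.0):
    """Hypothetical combined objective: cross-entropy on current-task data
    plus replayed rehearsal samples, with a distillation term that keeps
    the student close to a frozen snapshot of the previous-task model."""
    x, y = batch            # current-task samples and labels
    mx, my = memory_batch   # samples replayed from the rehearsal buffer
    inputs = torch.cat([x, mx])
    targets = torch.cat([y, my])

    logits = model(inputs)
    ce = F.cross_entropy(logits, targets)

    with torch.no_grad():   # teacher is frozen; no gradients flow into it
        teacher_logits = teacher(inputs)

    # Standard temperature-scaled KD: KL between softened distributions,
    # rescaled by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(
        F.log_softmax(logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    return (1 - alpha) * ce + alpha * kd
```

In this sketch, `alpha` trades plasticity (fitting new data) against stability (staying close to the old model); the paper under discussion studies exactly this kind of joint use of the two mechanisms for spoken language understanding.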
Tags: arxiv, continual, distillation, knowledge, language, spoken language understanding