Training Strategies for Improved Lip-reading. (arXiv:2209.01383v3 [cs.CV] UPDATED)
Sept. 30, 2022, 1:16 a.m. | Pingchuan Ma, Yujiang Wang, Stavros Petridis, Jie Shen, Maja Pantic
cs.CV updates on arXiv.org arxiv.org
Several training strategies and temporal models have been recently proposed
for isolated word lip-reading in a series of independent works. However, the
potential of combining the best strategies and investigating the impact of each
of them has not been explored. In this paper, we systematically investigate the
performance of state-of-the-art data augmentation approaches, temporal models
and other training strategies, such as self-distillation and the use of word
boundary indicators. Our results show that Time Masking (TM) is the most important
augmentation followed by …
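As a rough illustration of the Time Masking augmentation the abstract highlights, here is a minimal NumPy sketch that zeroes out one random contiguous span of frames along the temporal axis of a video clip. The function name, signature, and defaults are illustrative assumptions, not the authors' implementation, and masking with zeros (rather than, say, the frame mean) is just one possible choice.

```python
import numpy as np

def time_mask(frames, max_mask_len=10, rng=None):
    """Zero out one random contiguous span of frames.

    frames: array of shape (T, H, W) — a clip of T mouth-region crops.
    max_mask_len: upper bound on the masked span length (illustrative default).
    """
    rng = rng or np.random.default_rng()
    num_frames = frames.shape[0]
    mask_len = int(rng.integers(0, max_mask_len + 1))
    if mask_len == 0 or mask_len >= num_frames:
        return frames
    start = int(rng.integers(0, num_frames - mask_len + 1))
    out = frames.copy()
    out[start:start + mask_len] = 0.0  # masked frames replaced by zeros
    return out

# Example: a 29-frame clip of 88x88 grayscale mouth crops
clip = np.random.default_rng(1).random((29, 88, 88), dtype=np.float32)
masked = time_mask(clip, max_mask_len=10, rng=np.random.default_rng(0))
```

Each frame in the output is either untouched or fully zeroed, so the clip length and shape are preserved for downstream temporal models.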