Oct. 17, 2022, 1:13 a.m. | Chao-Han Huck Yang, I-Fan Chen, Andreas Stolcke, Sabato Marco Siniscalchi, Chin-Hui Lee

cs.LG updates on arXiv.org

Differential privacy (DP) is one avenue for protecting user information used to
train deep models: it imposes noise perturbation on private data. Such
perturbation often causes severe performance degradation in automatic speech
recognition (ASR) when a tight privacy budget $\varepsilon$ must be met. Private
aggregation of teacher ensembles (PATE) uses ensemble probabilities to improve
ASR accuracy under the noise levels required by small values of $\varepsilon$.
We extend PATE learning to work with …
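For background, here is a minimal sketch of the standard PATE noisy-vote aggregation step (illustrative only, not the paper's ASR-specific method): teacher predictions are tallied into a vote histogram, Laplace noise scaled by the privacy budget $\varepsilon$ is added, and the noisy argmax becomes the label used to train the student. The function name and the 2/epsilon noise scale below are assumptions for illustration.

import numpy as np

def noisy_aggregate(teacher_preds: np.ndarray, num_classes: int,
                    epsilon: float, rng: np.random.Generator) -> int:
    # Tally per-teacher class predictions into a vote histogram.
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    # Perturb the histogram with Laplace noise; a scale of 2/epsilon is a
    # common (assumed) choice tied to the privacy budget.
    votes += rng.laplace(loc=0.0, scale=2.0 / epsilon, size=num_classes)
    # The noisy argmax is released as the student's training label.
    return int(np.argmax(votes))

rng = np.random.default_rng(0)
teacher_preds = np.array([3, 3, 3, 1, 3, 2, 3, 1, 3, 3])  # hypothetical teacher outputs
print(noisy_aggregate(teacher_preds, num_classes=5, epsilon=1.0, rng=rng))

Smaller values of epsilon mean larger noise and stronger privacy, which is why accuracy degrades as the budget tightens.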

