Adversarial Privacy Protection on Speech Enhancement. (arXiv:2206.08170v1 [cs.SD])
Web: http://arxiv.org/abs/2206.08170
June 17, 2022, 1:11 a.m. | Mingyu Dong, Diqun Yan, Rangding Wang
cs.LG updates on arXiv.org
Speech can leak imperceptibly in everyday situations, for example when it is
recorded by nearby mobile phones. Private content in the leaked audio may then
be maliciously extracted with speech enhancement technology. Speech enhancement
has advanced rapidly with deep neural networks (DNNs), but DNNs are vulnerable
to adversarial examples. In this work, we propose an adversarial method to
degrade speech enhancement systems. Experimental results show that the
generated adversarial examples can erase most of the content information in the
original examples or replace it with target …
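The core idea the abstract describes, crafting a small perturbation that makes an enhancement model fail, can be sketched with a toy example. The snippet below is a minimal illustration only, not the paper's method: it uses a hypothetical linear "denoiser" in place of a DNN enhancer and one FGSM-style signed-gradient step that increases the enhancement loss, so the model recovers less of the original content.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a speech-enhancement model: a fixed near-identity
# linear "denoiser" W. (Hypothetical; the paper targets DNN enhancers.)
D = 64                                               # samples per frame
W = np.eye(D) + 0.01 * rng.standard_normal((D, D))

clean = rng.standard_normal(D)                       # private speech content
noisy = clean + 0.1 * rng.standard_normal(D)         # signal the attacker perturbs

def enhance(x):
    return W @ x

def recovery_loss(x):
    # How far the enhanced output is from the clean content.
    return float(np.sum((enhance(x) - clean) ** 2))

# FGSM-style attack: one signed-gradient step in the direction that
# *increases* the recovery loss, degrading what the enhancer extracts.
grad = 2.0 * W.T @ (enhance(noisy) - clean)          # analytic dL/dx
eps = 0.05                                           # perturbation budget
adversarial = noisy + eps * np.sign(grad)

print(recovery_loss(noisy), recovery_loss(adversarial))
```

Because the loss here is convex in the input, the signed-gradient step is guaranteed to raise it; against a real DNN enhancer the same principle applies, but the gradient would come from backpropagation and the step would usually be iterated.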
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY