Nov. 11, 2022, 2:11 a.m. | Meng Chen, Li Lu, Jiadi Yu, Yingying Chen, Zhongjie Ba, Feng Lin, Kui Ren

cs.LG updates on arXiv.org

Faced with the threat of identity leakage when publishing voice data, users are caught in a privacy-utility dilemma while enjoying convenient voice services. Existing studies employ direct modification or text-based re-synthesis to de-identify users' voices, but these approaches result in inconsistent audibility in the presence of human participants. In this paper, we propose a voice de-identification system that uses adversarial examples to balance the privacy and utility of voice services. Instead of typical additive examples that induce perceivable distortions, we design a novel …
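For context, the "typical additive examples" the abstract contrasts with perturb the raw waveform directly, x_adv = x + delta with delta bounded so the change stays small, which is exactly what can introduce audible distortion. The sketch below illustrates that conventional FGSM-style additive approach only; it is not the system proposed in the paper, and speaker_id_model is a hypothetical speaker-identification network assumed to map a waveform batch to per-speaker logits.

import torch

def additive_adversarial_example(waveform, speaker_id_model, true_speaker, epsilon=0.002):
    # Illustrative FGSM-style additive perturbation of a raw waveform.
    # This is the conventional "additive example" baseline, not the paper's method.
    # `speaker_id_model` is a hypothetical network returning logits over speaker labels.
    x = waveform.clone().detach().requires_grad_(True)
    logits = speaker_id_model(x)
    # Maximize the loss w.r.t. the true speaker label to push the identity away.
    loss = torch.nn.functional.cross_entropy(logits, true_speaker)
    loss.backward()
    # Additive perturbation bounded in the L-infinity norm by epsilon.
    delta = epsilon * x.grad.sign()
    x_adv = (x + delta).clamp(-1.0, 1.0)  # keep samples within the valid audio range
    return x_adv.detach()

Because delta is added sample-wise to the signal, even a small epsilon can be perceivable to human listeners, which is the audibility limitation the proposed system aims to avoid.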

arxiv de-identification examples identification privacy voice
