Web: http://arxiv.org/abs/2110.05036

Jan. 28, 2022, 2:11 a.m. | Rui Wang, Junyi Ao, Long Zhou, Shujie Liu, Zhihua Wei, Tom Ko, Qing Li, Yu Zhang

cs.LG updates on arXiv.org arxiv.org

Initially developed for natural language processing (NLP), the Transformer model
is now widely used for speech processing tasks such as speaker recognition, thanks
to its powerful sequence modeling capabilities. However, the conventional
self-attention mechanism was originally designed for modeling textual sequences
and does not account for the characteristics of speech and speaker modeling.
In addition, different Transformer variants for speaker recognition have not been
well studied. In this work, we propose a novel multi-view self-attention
mechanism and present an empirical study of different Transformer variants with …
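For context, below is a minimal sketch of the conventional scaled dot-product self-attention the abstract contrasts against; the paper's multi-view variant is not described in this excerpt, so the function name, tensor shapes, and the frame-level speech-feature framing are illustrative assumptions only.

```python
# Minimal sketch of conventional scaled dot-product self-attention (the baseline
# the abstract refers to); names and shapes are hypothetical, not the paper's API.
import torch
import torch.nn.functional as F

def self_attention(x: torch.Tensor, w_q, w_k, w_v):
    """x: (batch, time, dim) frame-level speech features (assumed input)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v                    # query/key/value projections
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5   # scaled dot products over time
    weights = F.softmax(scores, dim=-1)                      # attention weights per frame
    return weights @ v                                       # weighted sum of values

# Toy usage: 2 utterances, 50 frames, 64-dim features.
dim = 64
x = torch.randn(2, 50, dim)
w_q, w_k, w_v = (torch.randn(dim, dim) * dim ** -0.5 for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # torch.Size([2, 50, 64])
```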

arxiv attention self-attention transformer
