April 15, 2022, 1:11 a.m. | Dan Li, Yang Yang, Hongyin Tang, Jingang Wang, Tong Xu, Wei Wu, Enhong Chen

cs.CL updates on arXiv.org

With the boom in pre-trained transformers, representation-based models built
on Siamese transformer encoders have become the mainstream technique for
efficient text matching. However, compared with interaction-based models,
these models suffer severe performance degradation due to the lack of
interaction between the two texts in a pair. Prior work attempts to address
this by performing extra interaction on top of the Siamese-encoded
representations, while interaction during encoding itself is still ignored.
To remedy this, we propose a Virtual InteRacTion mechanism (VIRT) to transfer
interactive knowledge from …
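To make the efficiency/interaction trade-off concrete, below is a minimal
sketch of a standard Siamese (dual-encoder) text matcher of the kind the
abstract describes. This is illustrative only, not the authors' VIRT code:
the abstract is truncated, so how VIRT transfers interactive knowledge is not
shown here. The checkpoint name and the mean-pooling choice are assumptions.

# Illustrative dual-encoder sketch (Python/PyTorch + Hugging Face
# transformers), not the authors' implementation. Both texts are encoded
# independently with shared weights, so candidate embeddings can be
# pre-computed offline -- the efficiency that representation-based models
# gain by giving up token-level interaction during encoding.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed checkpoint; any encoder works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)  # shared weights = Siamese

def encode(texts):
    # Encode a batch of texts into fixed-size vectors via masked mean pooling.
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)      # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)       # (B, H)

query = encode(["how to match two sentences efficiently"])
candidates = encode(["efficient text matching with dual encoders",
                     "a recipe for sourdough bread"])
# The only interaction between the pair happens at this final similarity
# step -- exactly the limitation the paper targets.
scores = F.cosine_similarity(query, candidates)
print(scores)

An interaction-based model (a cross-encoder) would instead feed both texts
through the transformer jointly, letting attention mix their tokens at every
layer; that is more accurate but cannot pre-compute candidate embeddings.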

