Aug. 16, 2022, 1:12 a.m. | Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Yoni Weill, Noam Koenigstein

cs.CL updates on arXiv.org

We present MetricBERT, a BERT-based model that learns to embed text under a
well-defined similarity metric while simultaneously adhering to the
"traditional" masked-language task. We focus on downstream tasks of learning
similarities for recommendations, where we show that MetricBERT outperforms
state-of-the-art alternatives, sometimes by a substantial margin. We conduct
extensive evaluations of our method and its different variants, showing that
our training objective yields considerable gains over a traditional contrastive
loss, a standard cosine-similarity objective, and six other baselines. …
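The abstract describes joint training on a masked-language objective and a similarity-metric objective over text pairs. The sketch below illustrates that general idea only: the specific metric term used here (a cosine-embedding loss over mean-pooled token embeddings), the 15% masking rate applied to one side of each pair, and the `metric_weight` parameter are illustrative assumptions, not MetricBERT's actual formulation or hyperparameters.

```python
# Minimal sketch of joint MLM + similarity-metric training (not the authors' code).
import torch
import torch.nn.functional as F
from transformers import BertTokenizerFast, BertForMaskedLM

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

def mean_pool(hidden_states, attention_mask):
    # Average token embeddings, ignoring padding, to get one vector per text.
    mask = attention_mask.unsqueeze(-1).float()
    return (hidden_states * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

def joint_loss(texts_a, texts_b, pair_labels, metric_weight=1.0):
    # pair_labels: +1 for similar pairs, -1 for dissimilar pairs (assumption).
    enc_a = tokenizer(texts_a, padding=True, truncation=True, return_tensors="pt")
    enc_b = tokenizer(texts_b, padding=True, truncation=True, return_tensors="pt")

    # Mask ~15% of (non-pad) tokens on one side for the masked-language objective.
    labels = enc_a["input_ids"].clone()
    maskable = (labels != tokenizer.pad_token_id)
    mask = (torch.rand(labels.shape) < 0.15) & maskable
    labels[~mask] = -100  # ignore unmasked positions in the MLM loss
    masked_inputs = enc_a["input_ids"].clone()
    masked_inputs[mask] = tokenizer.mask_token_id

    out_a = model(input_ids=masked_inputs,
                  attention_mask=enc_a["attention_mask"],
                  labels=labels,
                  output_hidden_states=True)
    out_b = model(input_ids=enc_b["input_ids"],
                  attention_mask=enc_b["attention_mask"],
                  output_hidden_states=True)

    emb_a = mean_pool(out_a.hidden_states[-1], enc_a["attention_mask"])
    emb_b = mean_pool(out_b.hidden_states[-1], enc_b["attention_mask"])

    # Illustrative similarity-metric term; the paper's objective may differ.
    metric_loss = F.cosine_embedding_loss(
        emb_a, emb_b, torch.tensor(pair_labels, dtype=torch.float))
    return out_a.loss + metric_weight * metric_loss
```

In this kind of setup the two terms share the same encoder, so the embedding space is shaped by the similarity signal while the MLM term preserves the language-modeling behavior; how the two losses are actually combined and weighted in MetricBERT is detailed in the paper itself.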

