No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval. (arXiv:2206.02873v3 [cs.IR] UPDATED)
June 24, 2022, 1:12 a.m. | Guilherme Moraes Rosa, Luiz Bonifacio, Vitor Jeronymo, Hugo Abonizio, Marzieh Fadaee, Roberto Lotufo, Rodrigo Nogueira
cs.CL updates on arXiv.org arxiv.org
Recent work has shown that small distilled language models are strong
competitors to models that are orders of magnitude larger and slower in a wide
range of information retrieval tasks. Due to latency constraints, this has
made distilled and dense models the go-to choice for deployment in real-world
retrieval applications. In this work, we question this practice by showing that
the number of parameters and early query-document interaction play a
significant role in the generalization ability of retrieval models. Our …
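The "early query-document interaction" the abstract refers to is the key architectural difference between cross-encoders and the dense bi-encoders favored for low-latency deployment. A toy sketch of the two scoring styles (illustrative only, not the paper's implementation; all function names and the token-overlap stand-in for attention are assumptions):

```python
import numpy as np

def bi_encoder_score(query_vec: np.ndarray, doc_vec: np.ndarray) -> float:
    # Dense retrieval: query and document are encoded *independently*,
    # so relevance collapses to a single vector similarity. Documents
    # can be pre-embedded, which is why this is fast at serving time.
    return float(np.dot(query_vec, doc_vec))

def cross_encoder_score(query_toks: np.ndarray, doc_toks: np.ndarray,
                        weights: np.ndarray) -> float:
    # Toy stand-in for early interaction: every query token is compared
    # against every document token *before* a score is produced, so the
    # model can condition on fine-grained query-document overlap.
    interactions = np.outer(query_toks, doc_toks)   # shape (|q|, |d|)
    return float(np.sum(interactions * weights))

# Example: with interaction weights fixed to 1, the cross-encoder score
# is the sum of all pairwise token products.
q = np.array([1.0, 2.0])
d = np.array([3.0, 4.0])
print(bi_encoder_score(q, d))                       # 1*3 + 2*4 = 11.0
print(cross_encoder_score(q, d, np.ones((2, 2))))   # sum of outer = 21.0
```

The trade-off the paper probes: the bi-encoder's precomputable document embeddings buy latency, while the cross-encoder's pairwise interaction (here a trivially simplified one) is what the authors argue helps zero-shot generalization.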