Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings
Feb. 28, 2024, 5:49 a.m. | Isabelle Mohr, Markus Krimmel, Saba Sturua, Mohammad Kalim Akram, Andreas Koukounas, Michael Günther, Georgios Mastrapas, Vinit Ravishankar, Joan Fo
cs.CL updates on arXiv.org
Abstract: We introduce a novel suite of state-of-the-art bilingual text embedding models that are designed to support English and another target language. These models are capable of processing lengthy text inputs with up to 8192 tokens, making them highly versatile for a range of natural language processing tasks such as text retrieval, clustering, and semantic textual similarity (STS) calculations.
By focusing on bilingual models and introducing a unique multi-task learning objective, we have significantly improved the …
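The abstract lists semantic textual similarity (STS) among the supported tasks; with embedding models of this kind, STS is typically scored as the cosine similarity between the two sentence vectors. A minimal sketch of that scoring step (the vectors below are hypothetical placeholders, not outputs of the paper's models):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two embedding vectors:
    # 1.0 means identical direction, 0.0 means orthogonal.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings standing in for the model's output
# for an English sentence and its German translation.
emb_en = np.array([0.20, 0.70, 0.10, 0.50])
emb_de = np.array([0.25, 0.65, 0.05, 0.55])

score = cosine_similarity(emb_en, emb_de)
print(round(score, 3))
```

For well-aligned bilingual embeddings, a sentence and its translation should score close to 1.0, while unrelated sentences score much lower.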