March 11, 2024, 4:47 a.m. | Sho Hoshino, Akihiko Kato, Soichiro Murakami, Peinan Zhang

cs.CL updates on arXiv.org

arXiv:2403.05257v1 Announce Type: new
Abstract: Learning better sentence embeddings leads to improved performance for natural language understanding tasks including semantic textual similarity (STS) and natural language inference (NLI). As prior studies leverage large-scale labeled NLI datasets for fine-tuning masked language models to yield sentence embeddings, task performance for languages other than English is often left behind. In this study, we directly compared two data augmentation techniques as potential solutions for monolingual STS: (a) cross-lingual transfer that exploits English resources alone …
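As background for the STS task the abstract evaluates on, a minimal sketch of how sentence embeddings are typically scored: the predicted similarity of a sentence pair is the cosine similarity of the two embedding vectors. The embeddings below are hypothetical placeholders, not outputs of the paper's models.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity: the standard scoring function for STS with embeddings."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical pre-computed embeddings for two near-paraphrases.
emb_a = np.array([0.20, 0.70, 0.10])
emb_b = np.array([0.25, 0.65, 0.05])

score = cosine_similarity(emb_a, emb_b)  # close to 1.0 for similar sentences
```

In evaluation, these per-pair scores are then correlated (e.g. with Spearman's rank correlation) against human similarity judgments.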

