Pcc-tuning: Breaking the Contrastive Learning Ceiling in Semantic Textual Similarity
June 17, 2024, 4:41 a.m. | Bowen Zhang, Chunping Li
cs.CL updates on arXiv.org arxiv.org
Abstract: Semantic Textual Similarity (STS) constitutes a critical research direction in computational linguistics and serves as a key indicator of the encoding capabilities of embedding models. Driven by advances in pre-trained language models and contrastive learning techniques, leading sentence representation methods can already achieve average Spearman's correlation scores of approximately 86 across seven STS benchmarks in SentEval. However, further improvements have become increasingly marginal, with no existing method attaining an average score higher than 87 on …
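To make the evaluation metric in the abstract concrete: STS benchmarks score a model by comparing its similarity predictions (typically cosine similarity between sentence embeddings) against human-annotated gold ratings using Spearman's rank correlation. The sketch below is illustrative only — the embeddings and gold scores are toy placeholders, not data from the paper or from SentEval.

```python
# Sketch of STS-style scoring: rank-correlate model similarities with gold ratings.
# All vectors and ratings below are made-up placeholders for illustration.
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def ranks(xs):
    # 1-based ranks, averaging ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Toy sentence pairs: in practice the embeddings come from the model under test.
emb_pairs = [
    ((1.0, 0.1), (0.9, 0.2)),  # near-paraphrase
    ((1.0, 0.0), (0.0, 1.0)),  # unrelated
    ((0.5, 0.5), (0.6, 0.4)),  # related
]
gold = [4.8, 0.2, 3.5]  # hypothetical human ratings on a 0-5 scale

model_scores = [cosine(u, v) for u, v in emb_pairs]
print(round(spearman(model_scores, gold), 3))
```

On a real benchmark this correlation is computed over thousands of annotated pairs per dataset, and the "average score of ~86" in the abstract refers to the mean of these Spearman values (scaled by 100) across the seven SentEval STS tasks.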
More from arxiv.org / cs.CL updates on arXiv.org
ReFT: Reasoning with Reinforced Fine-Tuning
2 days, 10 hours ago |
arxiv.org
Exploring Defeasibility in Causal Reasoning
2 days, 10 hours ago |
arxiv.org
A Large Language Model Approach to Educational Survey Feedback Analysis
2 days, 10 hours ago |
arxiv.org