PromptBERT: Improving BERT Sentence Embeddings with Prompts. (arXiv:2201.04337v1 [cs.CL])
Jan. 13, 2022, 2:10 a.m. | Ting Jiang, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Liangjie Zhang, Qi Zhang
cs.CL updates on arXiv.org
The poor performance of the original BERT on sentence semantic similarity
has been widely discussed in previous work. We find that the unsatisfactory
performance is mainly due to static token embedding biases and
ineffective BERT layers, rather than to high cosine similarity among the sentence
embeddings. To this end, we propose a prompt-based sentence embedding method
that reduces token embedding biases and makes the original BERT layers more
effective. By reformulating the sentence embedding task as the …
Jobs in AI, ML, Big Data
Data Architect
@ University of Texas at Austin | Austin, TX
Data ETL Engineer
@ University of Texas at Austin | Austin, TX
Lead GNSS Data Scientist
@ Lurra Systems | Melbourne
Senior Machine Learning Engineer (MLOps)
@ Promaton | Remote, Europe
Senior Business Intelligence Developer / Analyst
@ Transamerica | Work From Home, USA
Data Analyst (All Levels)
@ Noblis | Bethesda, MD, United States