Sentence-transformer BERT model performs worse after fine-tuning
Jan. 18, 2022, 1:51 p.m. | /u/MinuteLavishness
Natural Language Processing www.reddit.com
I'm using symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli from HuggingFace. After multiple tries with different batch sizes, epochs, learning rates, and even different unsupervised learning methods such as this, I couldn't get my fine-tuned sentence transformer to perform better than the raw model straight from HuggingFace. I'm not sure what I'm doing wrong. I'm sure there are no bugs in my code, since I followed the sentence-transformers examples almost verbatim.
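For context, a minimal sketch of the kind of fine-tuning loop being described, using the sentence-transformers training API. The training pairs, loss function, and hyperparameters below are illustrative placeholders, not the poster's actual code or data:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the pretrained checkpoint mentioned in the post.
model = SentenceTransformer("symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli")

# Hypothetical positive pairs; the poster's real data (legal-article
# sentences) is not shown in the excerpt.
train_examples = [
    InputExample(texts=["the court dismissed the appeal",
                        "the appeal was rejected by the court"]),
    InputExample(texts=["the contract was terminated",
                        "the agreement was ended early"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# In-batch negatives loss, a common choice when only positive pairs are
# available; the post does not say which loss was actually used.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,
    output_path="fine-tuned-legal-model",
)
```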
Background on my task: my dataset consists of a list of sentences (legal articles, around …
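To make the "performs worse than the raw model" comparison concrete, here is a hedged sketch of how one might score both checkpoints on the same held-out set of sentence pairs. The pairs, the similarity scores, and the "fine-tuned-legal-model" path are hypothetical:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Hypothetical held-out pairs with gold similarity scores in [0, 1].
sentences1 = ["the court dismissed the appeal", "the contract was terminated"]
sentences2 = ["the appeal was rejected by the court", "damages were awarded to the plaintiff"]
scores = [0.9, 0.1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores, name="held-out")

baseline = SentenceTransformer("symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli")
fine_tuned = SentenceTransformer("fine-tuned-legal-model")  # output of the fine-tuning run

# The evaluator reports the correlation between cosine similarities and the gold
# scores (a single float in older sentence-transformers releases, a dict in newer ones).
print("baseline:  ", evaluator(baseline))
print("fine-tuned:", evaluator(fine_tuned))
```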