July 31, 2023, 1:49 p.m. | /u/GuybrushManwood

Natural Language Processing www.reddit.com

Maybe I got something wrong, but this strikes me as a flaw in sentence transformer models:

Cosine similarity ranges from -1 to 1, but when state-of-the-art models like all-mpnet-base-v2 (https://huggingface.co/sentence-transformers/all-mpnet-base-v2) are trained with ContrastiveLoss, don't we artificially constrain cosine similarity to range from 0 to 1?

For example, consider these results:

* "I like cats" : "I hate cats" ~ .74
* "I like cats" : "I love cats" ~ .87
* "I like cats" : "Today is a sunny …
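For reference, cosine similarity as a function is defined on [-1, 1]; the question is whether the training objective ever pushes embeddings into the negative part of that range. A minimal NumPy sketch of the metric itself (the vectors here are toy examples, not real sentence embeddings):

```python
import numpy as np

def cosine_similarity(a, b):
    # Dot product divided by the product of the norms; range is [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

u = np.array([1.0, 0.0])
print(cosine_similarity(u, np.array([1.0, 0.0])))   # identical direction -> 1.0
print(cosine_similarity(u, np.array([-1.0, 0.0])))  # opposite direction -> -1.0
print(cosine_similarity(u, np.array([0.0, 1.0])))   # orthogonal -> 0.0
```

If a model's similarity scores for clearly contradictory pairs (like "I like cats" vs. "I hate cats") never drop below 0, that suggests the embedding space only uses the non-negative half of this range.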

