Feb. 8, 2024, 1:35 a.m. | Synced

Synced | syncedreview.com

In a new paper, Nomic Embed: Training a Reproducible Long Context Text Embedder, a Nomic AI research team introduces nomic-embed-text-v1, the first fully reproducible, open-source, open-weights, open-data English text embedding model, with a context length of 8,192 tokens.
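For readers unfamiliar with embedding models, the sketch below shows how such a model is typically used for retrieval: texts are encoded into dense vectors and ranked by cosine similarity. It assumes the released weights can be loaded through the sentence-transformers library under the identifier "nomic-ai/nomic-embed-text-v1" and that task prefixes such as "search_query:" and "search_document:" are expected; consult the paper and model card for the actual loading instructions.

```python
# Minimal retrieval sketch (assumptions: the model is on Hugging Face as
# "nomic-ai/nomic-embed-text-v1", is loadable via sentence-transformers,
# and uses "search_query:" / "search_document:" task prefixes).
from sentence_transformers import SentenceTransformer, util

# trust_remote_code is commonly required for models that ship custom
# architecture code alongside their weights.
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)

documents = [
    "search_document: Nomic Embed is an open-source long-context text embedder.",
    "search_document: Embedding models map text to dense vectors for retrieval.",
]
query = "search_query: open-source text embedding with an 8192-token context"

doc_embeddings = model.encode(documents)
query_embedding = model.encode(query)

# Cosine similarity scores rank the documents against the query.
scores = util.cos_sim(query_embedding, doc_embeddings)
print(scores)
```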



