Jan. 30, 2024, 7:19 a.m. | Andrey Vasnetsov

Ever since the data science community discovered that vector search significantly improves LLM answers, vendors and enthusiasts have been arguing over the right way to store embeddings.


Some argue that storing embeddings in a specialized engine (a.k.a. a vector database) is the better option. Others maintain that plugins for existing databases are enough.
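To make the debate concrete, here is a minimal sketch of what vector search actually does: a brute-force cosine-similarity scan in plain numpy. The corpus size, dimensionality, and the `search` helper are hypothetical placeholders, not anything from either camp's API; the point of a dedicated engine (or a database plugin) is to replace this linear scan with an approximate nearest-neighbor index once the corpus grows.

```python
import numpy as np

# Hypothetical corpus: 1,000 document embeddings of dimension 384
# (in practice these would come from an embedding model).
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(1000, 384)).astype(np.float32)

# Normalize once so cosine similarity reduces to a dot product.
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def search(query: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Return indices of the top_k most similar embeddings (brute force)."""
    q = query / np.linalg.norm(query)
    scores = embeddings @ q  # cosine similarity against every vector
    return np.argsort(-scores)[:top_k]

# A hypothetical query embedding, e.g. an embedded user question.
query = rng.normal(size=384).astype(np.float32)
print(search(query))
```

This O(n) scan is fine for a few thousand vectors; the real argument between the two camps starts when the corpus reaches millions of vectors and someone has to maintain the index, filtering, and update machinery around it.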



This article presents our vision and arguments on the topic. We will:



  • Explain why and when you need a dedicated …
