Oct. 4, 2023, 11:33 p.m. | Roland Meertens

InfoQ - AI, ML & Data Engineering www.infoq.com

At the recent QCon San Francisco conference, Sam Partee, Principal Engineer at Redis, gave a talk about Retrieval Augmented Generation (RAG). He discussed generative search, which combines large language models (LLMs) with vector databases to improve information retrieval, and covered several practical techniques such as Hypothetical Document Embeddings (HyDE) and semantic caching.
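To make the semantic-caching idea concrete, here is a minimal sketch: instead of keying cached LLM answers on the exact query string, the cache stores an embedding per past query and returns a cached answer when a new query is semantically close enough. The `embed` function below is a toy character-count stand-in for a real embedding model, and the in-memory linear scan stands in for a vector database such as Redis; both are illustrative assumptions, not Partee's implementation.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in for a real embedding model; a real deployment would call
    # an actual sentence-embedding model and store vectors in a vector database.
    vec = np.zeros(64)
    for ch in text.lower():
        vec[ord(ch) % 64] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)

class SemanticCache:
    """Return a cached answer when a new query is semantically close to a past one."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, query: str):
        q = embed(query)
        for vec, answer in self.entries:
            # Cosine similarity; vectors are already unit-normalized.
            if float(np.dot(q, vec)) >= self.threshold:
                return answer  # cache hit: the expensive LLM call is skipped
        return None

    def put(self, query: str, answer: str):
        self.entries.append((embed(query), answer))

cache = SemanticCache(threshold=0.95)
cache.put("What is retrieval augmented generation?",
          "RAG augments an LLM prompt with retrieved documents.")
# Near-identical phrasing lands above the threshold and hits the cache.
print(cache.get("what is retrieval augmented generation"))
```

The design trade-off is the similarity threshold: set it too low and distinct questions get each other's answers; set it too high and only near-verbatim repeats benefit from the cache.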


