April 12, 2024, 1:54 p.m. | /u/synthphreak

Data Science www.reddit.com

I just came upon (what I think is) the original REALM paper, [“Retrieval-Augmented Language Model Pre-Training”](https://arxiv.org/abs/2002.08909). Really interesting idea, but there are some key details that escaped me regarding the role of the retriever. I was hoping someone here could set me straight:

1. **First and most critically, is retrieval augmentation only relevant for generative models?** You hear a lot about RAG, but couldn’t there also be something like RAU? Like in encoding some piece of text X for a downstream non-generative …
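To make the question concrete, here is a minimal sketch of what a "retrieval-augmented encoding" setup might look like. Everything here is hypothetical illustration, not the REALM paper's actual method: a toy bag-of-words retriever fetches the most relevant passage from a knowledge store, and the retrieved text is simply prepended to the input before it would be fed to a non-generative encoder (e.g. a BERT-style classifier or embedding model). The store contents, the `[SEP]` concatenation scheme, and all function names are assumptions for the sake of the example.

```python
# Hypothetical sketch: retrieval-augmented *encoding* (no generation involved).
from collections import Counter
import math

# Toy knowledge store; in practice this would be a large indexed corpus.
KNOWLEDGE_STORE = [
    "REALM pre-trains a retriever jointly with a masked language model.",
    "Cosine similarity compares two bag-of-words vectors.",
    "Berlin is the capital of Germany.",
]

def bow(text):
    """Bag-of-words counts as a stand-in for a learned dense embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, store, k=1):
    """Return the top-k passages most similar to the query."""
    q = bow(query)
    return sorted(store, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def augment(query, store):
    """Prepend retrieved evidence to the query; the result would then go
    to any non-generative encoder (classification, embedding, NLI, ...)."""
    context = " ".join(retrieve(query, store))
    return f"{context} [SEP] {query}"

print(augment("How does REALM train its retriever?", KNOWLEDGE_STORE))
```

The point of the sketch: nothing in the retrieve-then-augment step depends on the downstream model being a generator, which is exactly what the question is asking about.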

