Feb. 29, 2024, 11:40 a.m. | /u/BlueOrangeBerries

Machine Learning www.reddit.com

Around the release of GPT-4 last year, I read about people applying dimensionality reduction to their vector embeddings, specifically principal component analysis (PCA), to make them better suited for RAG.



Since then, as the RAG scene has developed, I haven't seen much mention of this technique.


Could anyone shed light on the merits of using dimensionality reduction for RAG?
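For context, here is a minimal sketch of what PCA-based reduction of embeddings looks like before indexing them for RAG. The embedding dimension (1536) and the target dimension (256) are illustrative assumptions, as is the use of random vectors as a stand-in for real model embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a corpus of 1536-dim embeddings (e.g. the size used by
# some OpenAI embedding models); real vectors would come from a model.
embeddings = rng.normal(size=(1000, 1536))

def pca_reduce(X, k):
    """Project rows of X onto their top-k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered matrix; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]  # (k, d) projection matrix
    return Xc @ components.T, mean, components

reduced, mean, components = pca_reduce(embeddings, k=256)
print(reduced.shape)  # (1000, 256)

# A query embedding must be projected with the SAME mean and components
# before similarity search against the reduced corpus.
query = rng.normal(size=1536)
query_reduced = (query - mean) @ components.T
```

Note that the mean and component matrix fitted on the corpus have to be stored and reused for every query, otherwise query and document vectors end up in different spaces.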

