Aug. 27, 2023, 9:35 p.m. | /u/9302462

r/MachineLearning | www.reddit.com

TLDR: Is it okay to use two 4070 Tis in a machine if all you need is more CUDA cores to create embeddings and you don't care about memory capacity, i.e., this is not for LLMs?

**Background**

I have 20 TB of text data (size in MongoDB) and 80 TB of images (stored at roughly 800x600-800) on SSDs in my homelab, which I'm in the process of vectorizing and creating embeddings for. I have a 3090 running two Python scripts; each script does the same thing: fetches …
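For reference, a minimal sketch of what one such worker script could look like, assuming sentence-transformers and pymongo; the model name, connection string, and collection/field names are placeholder assumptions, not details from the post. Each copy is pinned to a single GPU via CUDA_VISIBLE_DEVICES, which is the usual way to split identical workers across two cards:

```python
# Hypothetical sketch of one worker script. Run one copy per GPU, e.g.:
#   CUDA_VISIBLE_DEVICES=0 python embed.py
#   CUDA_VISIBLE_DEVICES=1 python embed.py
# Model name, Mongo URI, and field names are placeholders, not from the post.
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

BATCH_SIZE = 256

# With CUDA_VISIBLE_DEVICES set, "cuda" resolves to the one visible card.
model = SentenceTransformer("all-MiniLM-L6-v2", device="cuda")

client = MongoClient("mongodb://localhost:27017")
docs = client["corpus"]["documents"]


def flush(ids, texts):
    """Embed a batch of texts and write the vectors back to MongoDB."""
    vecs = model.encode(texts, batch_size=BATCH_SIZE)
    for _id, vec in zip(ids, vecs):
        docs.update_one({"_id": _id}, {"$set": {"embedding": vec.tolist()}})


ids, texts = [], []
# Only pick up documents that don't have an embedding yet, so two
# workers started at different times don't redo finished work.
for doc in docs.find({"embedding": {"$exists": False}}):
    ids.append(doc["_id"])
    texts.append(doc["text"])
    if len(texts) == BATCH_SIZE:
        flush(ids, texts)
        ids, texts = [], []

if texts:  # flush the final partial batch
    flush(ids, texts)
```

Since embedding models of this kind are small, each worker fits comfortably in a 4070 Ti's 12 GB, which matches the premise of the question: the bottleneck is compute throughput, not VRAM.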

