Jan. 17, 2024, 2:34 p.m. | /u/kekkimo

Machine Learning www.reddit.com

Is the embedding matrix sizeable compared to the other components of the transformer?

If not, then why do GPT models rely on a ~30K vocab size?
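A rough back-of-envelope calculation answers the first part. The sketch below compares the embedding parameter count against the transformer blocks; the dimensions (d_model=768, 12 layers, GPT-2-style ~50K BPE vocab) are assumptions for illustration, not a claim about any specific checkpoint:

```python
# Rough estimate: how big is the token embedding matrix relative to the
# rest of a GPT-style transformer? Dimensions are assumed (GPT-2 small).

def param_counts(vocab_size: int, d_model: int, n_layers: int) -> dict:
    # Token embedding: one d_model-dim vector per vocabulary entry.
    embedding = vocab_size * d_model
    # Each block: ~4*d_model^2 for attention (Q, K, V, output projections)
    # plus ~8*d_model^2 for the MLP with a 4x expansion, ignoring biases
    # and LayerNorm, which are negligible at this scale.
    blocks = n_layers * 12 * d_model**2
    total = embedding + blocks
    return {
        "embedding": embedding,
        "blocks": blocks,
        "embedding_share": embedding / total,
    }

for vocab in (30_000, 50_257):
    c = param_counts(vocab, d_model=768, n_layers=12)
    print(f"vocab={vocab:,}: embedding={c['embedding']:,} "
          f"blocks={c['blocks']:,} share={c['embedding_share']:.1%}")
```

At GPT-2-small scale the embedding comes out to roughly a quarter to a third of all parameters, so it is not negligible; but since the embedding grows linearly with d_model while the blocks grow quadratically, its share shrinks as models get larger, which helps explain why bigger GPT models can afford vocabularies of 50K or more.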

