July 13, 2023, 1:49 a.m. | /u/Western-Image7125

Machine Learning www.reddit.com

I understand that encoder-only models (like BERT) are mainly for learning representations of words, taking context from both sides. What I’m confused about is why you would need decoder-only vs encoder-decoder models. GPT and BLOOM are decoder-only, while I think T5 is enc-dec; I’m not sure why you would use one vs the other. Intuitively, an enc-dec model has more parameters and should be better at tasks where you have both complex text input and output, like …
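To make the three families concrete, here is a minimal sketch, assuming the Hugging Face transformers library; the checkpoint names are just illustrative examples, not from the original post:

```python
from transformers import AutoModel, AutoModelForCausalLM, AutoModelForSeq2SeqLM

# Encoder-only (BERT): bidirectional self-attention; every token attends to
# context on both sides, so the output is contextual embeddings, not generated text.
encoder_only = AutoModel.from_pretrained("bert-base-uncased")

# Decoder-only (GPT-style): causal self-attention; each token attends only to
# tokens on its left, and the model is trained to predict the next token.
decoder_only = AutoModelForCausalLM.from_pretrained("gpt2")

# Encoder-decoder (T5): the encoder reads the whole input bidirectionally, and
# the decoder generates output autoregressively while cross-attending to the
# encoder's representations.
encoder_decoder = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
```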

