April 2, 2024, 11:35 p.m. | /u/NixTheFolf

Machine Learning www.reddit.com

The authors of "[Logits of API-Protected LLMs Leak Proprietary Information](https://arxiv.org/abs/2403.09539v2)" describe how they identified and exploited a "softmax bottleneck" in an API-protected LLM over a large number of API calls, which let them closely estimate GPT-3.5-Turbo's embedding size at around 4096 ± 512. They then note that this makes GPT-3.5-Turbo either a ~7B dense model (judging by other models with a known embedding size of ~4096), or a MoE that is …
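For anyone curious how the bottleneck gives away the hidden size: the final logits are just W @ h with W of shape (vocab_size, d), so any collection of full logit vectors spans a subspace of dimension at most d, and its numerical rank reveals the embedding size. A minimal NumPy sketch of that rank idea (toy sizes and the `estimate_embedding_size` helper are illustrative, not from the paper):

```python
import numpy as np

def estimate_embedding_size(logit_vectors, tol=1e-6):
    """Estimate the hidden/embedding dimension from stacked logit vectors.

    logit_vectors: array of shape (num_queries, vocab_size); num_queries
    should comfortably exceed the suspected embedding size.
    """
    L = np.asarray(logit_vectors, dtype=np.float64)
    # Singular values beyond the true rank collapse toward ~0 (up to noise).
    s = np.linalg.svd(L, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

if __name__ == "__main__":
    # Toy demo with small sizes: a fake "model" with hidden size 64.
    rng = np.random.default_rng(0)
    d, vocab, n_queries = 64, 1000, 256
    W = rng.standard_normal((vocab, d))   # unembedding matrix
    H = rng.standard_normal((n_queries, d))  # hidden states from queries
    logits = H @ W.T                      # rank is at most d
    print(estimate_embedding_size(logits))  # prints 64
```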

