April 2, 2024, 11:35 p.m. | /u/NixTheFolf

Machine Learning www.reddit.com

The authors of "[Logits of API-Protected LLMs Leak Proprietary Information](https://arxiv.org/abs/2403.09539v2)" describe how they identified and exploited a "softmax bottleneck" in API-protected LLMs: by issuing a large number of API calls, they estimate GPT-3.5-Turbo's embedding size at roughly 4096 ± 512. They then note that this makes GPT-3.5-Turbo either a \~7B dense model (based on other models with a known embedding size of \~4096) or a MoE that is …
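The core observation behind the attack is that every full-vocabulary logit vector a transformer LM can emit lies in a subspace whose dimension is at most the embedding size d, so stacking more than d recovered logit vectors and measuring the numerical rank of the resulting matrix reveals d. Below is a minimal sketch of that rank-estimation step in NumPy, assuming the attacker has already reconstructed full logit vectors; the function name, tolerance, and synthetic sizes are illustrative placeholders, not the paper's code.

```python
import numpy as np


def estimate_embedding_size(logit_vectors: np.ndarray, rel_tol: float = 1e-6) -> int:
    """Estimate the hidden (embedding) dimension from stacked logit vectors.

    logit_vectors: array of shape (n_prompts, vocab_size), with n_prompts
    comfortably larger than the true embedding size d. The numerical rank of
    this matrix upper-bounds, and in practice matches, d.
    """
    # Singular values beyond the true rank collapse toward zero; count the
    # ones above a relative threshold of the largest singular value.
    singular_values = np.linalg.svd(logit_vectors, compute_uv=False)
    return int(np.sum(singular_values > rel_tol * singular_values[0]))


if __name__ == "__main__":
    # Synthetic demo: a toy "model" with a known embedding size, used only to
    # check that the rank estimate recovers it (sizes shrunk for speed).
    rng = np.random.default_rng(0)
    d, vocab = 256, 32000
    W = rng.standard_normal((vocab, d))    # unembedding matrix (unknown to the attacker)
    H = rng.standard_normal((d + 64, d))   # hidden states from > d different prompts
    logits = H @ W.T                       # observed full logit vectors
    print(estimate_embedding_size(logits))  # prints 256
```

Against a real API that only exposes top-k log probabilities, the paper first reconstructs full logit vectors (e.g., via logit-bias queries) before this rank step; the sketch above covers only the rank estimation.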

