April 2, 2024, 11:35 p.m. | /u/NixTheFolf

Machine Learning www.reddit.com

The authors of "[Logits of API-Protected LLMs Leak Proprietary Information](https://arxiv.org/abs/2403.09539v2)" describe how they identified and exploited the "softmax bottleneck" of an API-protected LLM using a large number of API calls, which let them estimate GPT-3.5-Turbo's embedding size at roughly 4096 ± 512. They then argue that this makes GPT-3.5-Turbo either a 7B dense model (judging by other models with a known embedding size of \~4096), or a MoE that is …
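The core observation is easy to reproduce in miniature: because the final-layer logits are a linear map of a d-dimensional hidden state, stacking enough full logit vectors gives a matrix whose numerical rank is d. Below is a rough numpy sketch of that idea only, not the paper's actual pipeline (which also has to recover full logit vectors from a restricted API); the matrix sizes are made-up, scaled-down stand-ins so the sketch runs quickly.

```python
import numpy as np

# Minimal illustration of the "softmax bottleneck": logits are W @ h with
# W of shape (vocab_size, d), so every full logit vector lies in a
# d-dimensional subspace of R^vocab_size. Stacking a bit more than d
# logit vectors and checking the numerical rank of the matrix reveals d.
# Sizes below are hypothetical, not GPT-3.5-Turbo's real dimensions.

rng = np.random.default_rng(0)
vocab_size, d = 8_000, 512
W = rng.standard_normal((vocab_size, d))

# Stand-in for many API calls: each "call" yields one full logit vector
# produced from some hidden state h of dimension d.
n_calls = d + 128
hidden_states = rng.standard_normal((n_calls, d))
logits = hidden_states @ W.T          # shape (n_calls, vocab_size)

# The singular values collapse after the first d directions, so the
# numerical rank of the collected logits estimates the embedding size.
s = np.linalg.svd(logits, compute_uv=False)
est_d = int((s > s[0] * 1e-8).sum())
print(f"estimated embedding size: {est_d}")   # prints 512 here
```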

