April 26, 2024, noon | code_your_own_AI

code_your_own_AI www.youtube.com

Snowflake offers unique technical insights into new LLM architecture designs, in particular its new 128x3.66B Mixture of Experts (MoE) system: dedicated, highly specialized LLMs for particular tasks.
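Conceptually, an MoE layer replaces a single dense feed-forward block with many smaller expert networks plus a router that sends each token to only a few of them, so only a fraction of the total parameters is active per token. The sketch below is a toy top-k MoE layer in PyTorch to illustrate that idea only; the class name, dimensions, expert count and top-k value are assumptions for illustration and do not reflect Snowflake's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Toy top-k mixture-of-experts layer (illustrative, not Snowflake's design)."""
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router (gating network) scores every expert for every token.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is a small independent feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                                    # x: (tokens, d_model)
        gate_logits = self.router(x)                         # (tokens, n_experts)
        weights, idx = gate_logits.topk(self.top_k, dim=-1)  # pick k experts per token
        weights = F.softmax(weights, dim=-1)                 # normalize the k gate weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                     # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

tokens = torch.randn(10, 64)
print(TinyMoE()(tokens).shape)                               # torch.Size([10, 64])
```

In this sketch each token activates only 2 of the 8 experts, which is the core efficiency argument behind large MoE systems such as a 128-expert design: total capacity grows with the number of experts while per-token compute stays close to that of a single small model.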

A short introduction to MoE, then a comparison of different model architectures, followed by a causal reasoning test (following a test suite published by Stanford University).

Can a relatively small LLM, with e.g. fewer than 5 billion trainable parameters, solve complex reasoning tasks? I evaluated this in my last video on PHI-3 MINI. …
