March 29, 2024, 4 p.m. | code_your_own_AI


NEW MoE LLM: a Mamba (S6) state space model with integrated Transformer self-attention layers, released 2 hours ago on HuggingFace.

Databricks DBRX compared to AI21 Labs' JAMBA (architecture, size, and number of trainable parameters).
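For the size comparison, here is a minimal sketch (not code from the video) of how one might count total parameters of both checkpoints without downloading the full weights, using accelerate's empty-weight initialization. The model IDs are the public HuggingFace repos; whether `trust_remote_code` is needed depends on your transformers version, so treat that kwarg as an assumption.

```python
# Hedged sketch: count total parameters of DBRX and JAMBA without
# materializing weights, via accelerate's meta-device initialization.
# Model IDs are the public HuggingFace repos; trust_remote_code may or
# may not be required depending on the transformers version installed.
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

for model_id in ["databricks/dbrx-base", "ai21labs/Jamba-v0.1"]:
    config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
    with init_empty_weights():
        # Instantiate the architecture on the meta device (no weight download).
        model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
    total = sum(p.numel() for p in model.parameters())
    print(f"{model_id}: {total / 1e9:.1f}B total parameters")
```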

Video includes:
1. JAMBA inference code, plus 8-bit quantization code (a hedged sketch follows after this list).
2. JAMBA fine-tuning Python code with the SFT trainer from HuggingFace (a second sketch follows below).
3. Performance data for JAMBA vs. MIXTRAL in three categories.
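The exact inference code from the video is not reproduced here; the following is a minimal sketch of 8-bit JAMBA inference with transformers and bitsandbytes, assuming the public ai21labs/Jamba-v0.1 checkpoint and a single CUDA GPU.

```python
# Minimal sketch: 8-bit JAMBA inference with transformers + bitsandbytes.
# Assumes the public ai21labs/Jamba-v0.1 checkpoint and a CUDA GPU;
# this is an illustration, not the exact code from the video.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ai21labs/Jamba-v0.1"

# Load the weights in 8-bit to cut memory roughly in half vs. fp16.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

inputs = tokenizer("State space models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```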
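Likewise, a hedged sketch of supervised fine-tuning with HuggingFace's trl SFTTrainer and a LoRA adapter from peft. The dataset, hyperparameters, and LoRA target module names below are illustrative assumptions, not the values used in the video, and should be checked against the checkpoint.

```python
# Hedged sketch: SFT fine-tuning of JAMBA with trl's SFTTrainer and a LoRA
# adapter (peft). Dataset, hyperparameters, and target_modules are
# illustrative placeholders, not the values from the video.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "ai21labs/Jamba-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder instruction-free text dataset with a "quote" column.
dataset = load_dataset("Abirate/english_quotes", split="train")

peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    task_type="CAUSAL_LM",
    # Assumed attention projection names; verify against the checkpoint.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="quote",   # column holding the raw training text
    max_seq_length=512,
    peft_config=peft_config,
    args=TrainingArguments(
        output_dir="jamba-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
)
trainer.train()
```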

#airesearch
#ai
#newtech

