April 1, 2024, noon | code_your_own_AI


Current open-source LLMs need between roughly 100 GB (the new Jamba LLM) and 320 GB (Databricks DBRX at 16-bit precision) of memory, either as VRAM or as memory shared between CPU, GPU, and NPU (AI accelerator).
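To see where figures like these come from, here is a minimal sketch of the underlying arithmetic: the weights alone take parameter count times bytes per parameter, and serving overhead (KV cache, activations) pushes the total higher. The parameter counts below (DBRX ~132B total, Jamba ~52B total) are assumptions for illustration, not official requirements.

```python
# Minimal sketch: estimate LLM memory from parameter count and precision.
# Parameter counts are assumptions for illustration: DBRX ~132B total
# parameters (mixture-of-experts), Jamba ~52B total.

def weights_gb(params_billion: float, bits: int) -> float:
    """Memory for the weights alone: parameters x bytes per parameter."""
    return params_billion * bits / 8

for name, params in [("DBRX", 132.0), ("Jamba", 52.0)]:
    print(f"{name}: ~{weights_gb(params, bits=16):.0f} GB of weights at 16-bit")

# DBRX:  ~264 GB of weights at 16-bit
# Jamba: ~104 GB of weights at 16-bit
# Serving overhead (KV cache, activations) adds to this, which is how
# DBRX ends up near the ~320 GB figure quoted above.
```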

Current consumer NVIDIA GPUs top out at 24 GB (RTX 4090), and even 40 GB A100 cards are financially out of scope for customers who want to run an LLM, trained on their own company data, on-site.
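To make the gap concrete, a small check using the same weights-only arithmetic as above: even if a Jamba-sized model (~52B parameters, assumed) were quantized well below the 16-bit precision mentioned earlier, its weights alone would still not fit in 24 GB of VRAM. The quantization levels here are illustrative.

```python
# Minimal sketch, same weights-only arithmetic as above: would a ~52B
# parameter model (Jamba-sized, assumed) fit on a single GPU?
# Precisions below 16-bit are illustrative quantization levels.

def fits(params_billion: float, bits: int, vram_gb: float) -> bool:
    return params_billion * bits / 8 <= vram_gb

for gpu, vram in [("RTX 4090", 24.0), ("A100 40GB", 40.0)]:
    for bits in (16, 8, 4):
        verdict = "fits" if fits(52.0, bits, vram) else "does not fit"
        print(f"52B @ {bits}-bit on {gpu} ({vram:.0f} GB): {verdict}")

# Even at 4-bit (~26 GB of weights) the model does not fit on a 24 GB
# RTX 4090, which is why shared CPU/GPU/NPU memory becomes attractive.
```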

Now a new market segment is responding: Intel announced a new AI PC in recent days, so we analyze the technical …

