Accelerate Mixtral 8x7B with Speculative Decoding and Quantization on Amazon SageMaker
April 2, 2024 | schmidphilipp1995@gmail.com (Philipp Schmid)
philschmid blog | www.philschmid.de
More from www.philschmid.de / philschmid blog

Efficiently fine-tune Llama 3 with PyTorch FSDP and Q-Lora (1 week, 2 days ago)
Deploy Llama 3 on Amazon SageMaker (1 week, 6 days ago)
Fine-Tune & Evaluate LLMs in 2024 with Amazon SageMaker (1 month, 2 weeks ago)
Evaluate LLMs with Hugging Face Lighteval on Amazon SageMaker (1 month, 3 weeks ago)
RLHF in 2024 with DPO & Hugging Face (3 months, 1 week ago)
How to Fine-Tune LLMs in 2024 with Hugging Face (3 months, 1 week ago)