[R] The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
Feb. 28, 2024, 10:03 a.m. | /u/Civil_Collection7267
Machine Learning | www.reddit.com
**Abstract**
>Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16) Transformer LLM with the same model size and training tokens in terms of both perplexity and end-task performance, while being significantly more cost-effective in …