Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
May 23, 2023, 4:29 p.m. | Hong Liu, Zhiyuan Li, David Hall, Percy Liang, Tengyu Ma
Blog Content - TOGETHER www.together.xyz
Given the massive cost of language model pre-training, a non-trivial improvement of the optimization algorithm would lead to a material reduction in the time and cost of training. Adam and its variants have been state-of-the-art for years, while more sophisticated second-order (Hessian-based) optimizers often incur too much per-step overhead.
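As a rough illustration of the kind of update the title points to, the sketch below assumes the general recipe of a lightweight second-order method: an exponential moving average of gradients, a cheap diagonal-Hessian estimate refreshed only every k steps used as a preconditioner, and an element-wise clipped step. It is not the authors' reference implementation; the function name, hyperparameter values, and the hessian_diag_estimate argument (e.g. a Hutchinson-style stochastic estimate) are placeholders.

import numpy as np

def sophia_like_step(theta, grad, m, h, step,
                     lr=1e-4, beta1=0.96, beta2=0.99,
                     rho=0.04, eps=1e-12, k=10,
                     hessian_diag_estimate=None):
    # Gradient EMA (momentum), analogous to Adam's first moment.
    m = beta1 * m + (1 - beta1) * grad
    # The curvature estimate is refreshed only every k-th step, so the
    # amortized per-step overhead stays close to a first-order method.
    if step % k == 0 and hessian_diag_estimate is not None:
        h = beta2 * h + (1 - beta2) * hessian_diag_estimate
    # Preconditioned, element-wise clipped update: dividing by the diagonal
    # curvature adapts the step size per coordinate, and clipping bounds the
    # step wherever the curvature estimate is small or noisy.
    update = np.clip(m / np.maximum(rho * h, eps), -1.0, 1.0)
    theta = theta - lr * update
    return theta, m, h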