[R] AdamL: A fast adaptive gradient method incorporating loss function
Jan. 10, 2024, 7:07 a.m. | /u/APaperADay
Machine Learning www.reddit.com
**Abstract**:
>Adaptive first-order optimizers are fundamental tools in deep learning, although they may suffer from poor generalization due to the nonuniform gradient scaling. In this work, we propose **AdamL**, a novel variant of the Adam optimizer that takes into account loss function information to attain better generalization results. We provide sufficient conditions that, together with the Polyak-Lojasiewicz inequality, ensure the linear convergence of AdamL. As a byproduct of our analysis, we prove similar convergence properties for the …
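The excerpt does not include the concrete AdamL update rule, so the sketch below only illustrates the general idea of an Adam-style step whose effective step size is modulated by the current loss value. The `scale = sqrt(loss)` factor is a hypothetical placeholder for "incorporating loss function information", not the formula from the paper.

```python
import numpy as np

def adam_like_step(param, grad, m, v, t, loss, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of a standard Adam update with an illustrative,
    loss-dependent scaling of the step size.

    NOTE: the loss-based `scale` below is a hypothetical stand-in for the
    paper's AdamL rule, which is not reproduced in the abstract excerpt.
    """
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)

    # Illustrative loss-aware scaling: steps shrink as the loss decreases,
    # so late-stage updates are damped relative to plain Adam.
    scale = np.sqrt(max(loss, eps))

    param = param - lr * scale * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Toy usage on f(x) = 0.5 * x^2, where grad = x and loss = 0.5 * x^2.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 1001):
    loss, grad = 0.5 * x ** 2, x
    x, m, v = adam_like_step(x, grad, m, v, t, loss, lr=0.1)
```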