Linear convergence of forward-backward accelerated algorithms without knowledge of the modulus of strong convexity
April 10, 2024, 4:43 a.m. | Bowen Li, Bin Shi, Ya-xiang Yuan
cs.LG updates on arXiv.org
Abstract: A significant milestone in modern gradient-based optimization was achieved with the development of Nesterov's accelerated gradient descent (NAG) method. This forward-backward technique has been further advanced with the introduction of its proximal generalization, commonly known as the fast iterative shrinkage-thresholding algorithm (FISTA), which enjoys widespread application in image science and engineering. Nonetheless, it remains unclear whether both NAG and FISTA exhibit linear convergence for strongly convex functions. Remarkably, these algorithms demonstrate convergence without requiring any …
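The abstract references the forward-backward (proximal gradient) structure without spelling out the iteration. As a point of reference only, below is a minimal sketch of the classical FISTA iteration of Beck and Teboulle, applied here to a lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1. The objective, the step size 1/L, and the momentum sequence t_k are standard textbook choices assumed for this sketch, not the paper's scheme; consistent with the title, the iteration never uses the modulus of strong convexity.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau*||.||_1: the "shrinkage-thresholding" step.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=500):
    # Classical FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # Only the Lipschitz constant L of the smooth part's gradient is used;
    # no strong-convexity modulus appears anywhere.
    L = np.linalg.norm(A, 2) ** 2      # L = sigma_max(A)^2
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                          # forward (gradient) step
        x_next = soft_threshold(y - grad / L, lam / L)    # backward (proximal) step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # Nesterov extrapolation
        x, t = x_next, t_next
    return x

# Usage: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = fista(A, b, lam=0.1)

With the nonsmooth term set to zero the proximal step is the identity, and the same template reduces to NAG for smooth minimization.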