Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent
May 2, 2024, 4:42 a.m. | Pingzhi Li, Junyu Liu, Hanrui Wang, Tianlong Chen
cs.LG updates on arXiv.org
Abstract: Optimization in deep learning is predominantly driven by first-order gradient methods such as SGD. However, neural network training can greatly benefit from the rapid convergence of second-order optimization. Newton's GD stands out in this category by rescaling the gradient with the inverse Hessian. Nevertheless, one of its major bottlenecks is matrix inversion, which takes $O(N^3)$ time and scales poorly.
Matrix inversion can be translated into solving a series of linear …
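To make the bottleneck concrete, below is a minimal classical sketch of Newton's GD on a toy quadratic objective; the matrix A, vector b, and problem size are illustrative assumptions, not the paper's setup. The Newton direction is obtained by solving the linear system H p = g rather than forming H^{-1} explicitly, which is the reformulation the abstract points to. Classically this solve is still cubic; the paper's premise, per the abstract, is that this linear-system stage is where a hybrid quantum-classical approach can help.

import numpy as np

# Toy quadratic f(w) = 0.5 * w^T A w - b^T w, with gradient A w - b and
# constant Hessian A. These are hypothetical stand-ins for illustration.
rng = np.random.default_rng(0)
N = 50
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)      # symmetric positive definite Hessian
b = rng.standard_normal(N)

w = np.zeros(N)
for _ in range(5):
    grad = A @ w - b             # first-order information
    # Newton's GD rescales the gradient by the inverse Hessian. Instead of
    # forming A^{-1} (O(N^3), poorly scalable), solve the linear system
    # A p = grad -- the step a quantum linear-system solver would target.
    p = np.linalg.solve(A, grad)
    w = w - p

print(np.linalg.norm(A @ w - b))  # ~0: Newton is exact on a quadratic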