April 26, 2024, 4:43 a.m. | Pablo Bermejo, Borja Aizpurua, Roman Orus

cs.LG updates on arXiv.org

arXiv:2304.06768v2 Announce Type: replace-cross
Abstract: Machine learning algorithms, in both their classical and quantum versions, rely heavily on gradient-based optimization algorithms such as gradient descent and the like. Overall performance depends on the appearance of local minima and barren plateaus, which slow down calculations and lead to non-optimal solutions. In practice, this results in dramatic computational and energy costs for AI applications. In this paper we introduce a generic strategy to accelerate and improve the overall performance of …
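
The local-minimum problem the abstract refers to is easy to demonstrate concretely. Below is a minimal Python sketch, not the paper's strategy (whose details are elided by the truncated abstract), showing plain gradient descent on a one-dimensional double-well function: the initialization alone decides whether the optimizer reaches the global minimum or stalls in a suboptimal local one. The function and step size here are illustrative choices, not taken from the paper.

```python
def f(x):
    # Double-well potential: global minimum near x ~ -1.30,
    # local minimum near x ~ +1.13.
    return x**4 - 3 * x**2 + x

def grad_f(x):
    # Analytic derivative of f.
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, steps=1000):
    # Vanilla gradient descent: follow the negative gradient.
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

# Two initializations land in two different basins of attraction:
# starting at -2.0 finds the global minimum, starting at +2.0
# gets trapped in the local one.
for x0 in (-2.0, 2.0):
    x_final = gradient_descent(x0)
    print(f"start {x0:+.1f} -> x* = {x_final:+.4f}, f(x*) = {f(x_final):+.4f}")
```

Running this prints two different fixed points with clearly different objective values, which is exactly the failure mode (and, in high-dimensional quantum settings, the barren-plateau analogue) that gradient-free or gradient-augmented strategies like the one announced here aim to mitigate.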
