Jan. 6, 2022, 2:10 a.m. | Feihu Huang, Heng Huang

cs.LG updates on arXiv.org

In this paper, we propose a class of faster adaptive Gradient Descent Ascent
(GDA) methods for solving nonconvex-strongly-concave minimax problems, based
on unified adaptive matrices that include almost all existing coordinate-wise
and global adaptive learning rates. Specifically, we propose a fast Adaptive
Gradient Descent Ascent (AdaGDA) method based on the basic momentum technique,
which reaches a lower gradient complexity of $O(\kappa^4\epsilon^{-4})$ for
finding an $\epsilon$-stationary point without large batches, improving on
the results of the existing adaptive GDA methods by …
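
The abstract is truncated above; to make the method concrete, below is a minimal Python sketch of one momentum-based adaptive GDA step of the kind the abstract describes. It is an illustration under stated assumptions, not the authors' AdaGDA algorithm: the Adam-style diagonal second-moment matrix, the step sizes, and the toy objective in the demo are all hypothetical choices.

```python
import numpy as np

def adaptive_gda_step(x, y, grad_x, grad_y, state,
                      lr_x=0.05, lr_y=0.05, beta=0.9, rho=0.999, eps=1e-8):
    """One momentum-based adaptive GDA step (illustrative sketch only).

    x is the minimization variable (descent), y the maximization
    variable (ascent). `state` holds momentum buffers m_x, m_y and a
    coordinate-wise second-moment estimate v defining the diagonal
    adaptive matrix A = diag(sqrt(v)) + eps * I.
    """
    # Momentum (moving-average) estimates of the stochastic gradients.
    state["m_x"] = beta * state["m_x"] + (1.0 - beta) * grad_x
    state["m_y"] = beta * state["m_y"] + (1.0 - beta) * grad_y

    # Coordinate-wise adaptive matrix from the x-gradient second moment;
    # replacing this with a scalar estimate would give a global
    # adaptive learning rate instead.
    state["v"] = rho * state["v"] + (1.0 - rho) * grad_x ** 2
    A = np.sqrt(state["v"]) + eps

    # Preconditioned descent on x, momentum ascent on y.
    x = x - lr_x * state["m_x"] / A
    y = y + lr_y * state["m_y"]
    return x, y, state

# Toy demo on f(x, y) = <x, y> - 0.5 * ||y||^2, which is strongly
# concave in y with its saddle point at x = 0, y = 0.
d = 4
x, y = np.ones(d), np.zeros(d)
state = {"m_x": np.zeros(d), "m_y": np.zeros(d), "v": np.zeros(d)}
for _ in range(2000):
    grad_x, grad_y = y, x - y  # exact gradients of the toy objective
    x, y, state = adaptive_gda_step(x, y, grad_x, grad_y, state)
print(np.linalg.norm(x), np.linalg.norm(y))  # norms driven toward the saddle point
```

Swapping the coordinate-wise second-moment estimate for a single scalar corresponds to a global adaptive learning rate, which is the other family that the unified adaptive matrices in the abstract are said to cover.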

arxiv gradient math minimax optimization
