all AI news
Training neural networks using monotone variational inequality. (arXiv:2202.08876v1 [stat.ML])
Feb. 21, 2022, 2:10 a.m. | Chen Xu, Xiuyuan Cheng, Yao Xie
cs.LG updates on arXiv.org
Despite the vast empirical success of neural networks, theoretical
understanding of the training procedure remains limited, especially in
providing guarantees on test performance, due to the non-convex nature of the
optimization problem. Inspired by a recent work of Juditsky & Nemirovski
(2019), instead of using the traditional loss-minimization approach, we reduce
the training of the network parameters to another problem with convex
structure -- solving a monotone variational inequality (MVI). The solution to
MVI can be …
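The reduction above can be illustrated on a toy case. For least squares, the operator F(w) = Xᵀ(Xw − y) is monotone, and the MVI ⟨F(w*), w − w*⟩ ≥ 0 characterizes the solution, which a standard extragradient iteration can find. This is a minimal sketch of the MVI idea with an illustrative setup, not the paper's actual training procedure or operator:

```python
import numpy as np

# Toy monotone operator: for least squares, F(w) = X^T (X w - y) is monotone,
# so the MVI  <F(w*), w - w*> >= 0  over R^d characterizes the minimizer.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

def F(w):
    """Monotone operator whose root solves the (unconstrained) MVI."""
    return X.T @ (X @ w - y)

# Extragradient method, a classical solver for monotone variational
# inequalities: extrapolate first, then update from the extrapolated point.
w = np.zeros(3)
eta = 0.01
for _ in range(2000):
    w_half = w - eta * F(w)    # extrapolation step
    w = w - eta * F(w_half)    # update using the operator at w_half

print(np.allclose(w, w_true, atol=1e-3))
```

The extrapolation step is what distinguishes extragradient from plain gradient descent; it keeps the iteration stable for general monotone operators, not just gradients of convex losses.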