Parallel Deep Neural Networks Have Zero Duality Gap. (arXiv:2110.06482v2 [cs.LG] UPDATED)
June 27, 2022, 1:11 a.m. | Yifei Wang, Tolga Ergen, Mert Pilanci
cs.LG updates on arXiv.org arxiv.org
Training deep neural networks is a well-known, highly non-convex problem. Recent work has shown that there is no duality gap for regularized two-layer neural networks with ReLU activation, which enables global optimization via convex programs. For multi-layer linear networks with vector outputs, we formulate convex dual problems and demonstrate that the duality gap is non-zero for networks of depth three and greater. However, by modifying the deep networks into more powerful parallel architectures, we show that the duality gap …
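For context, a brief sketch in standard Lagrangian-duality notation (illustrative only; the symbols below are generic and not taken from the paper): the primal optimum of the regularized training problem always upper-bounds the dual optimum (weak duality), and "zero duality gap" means the two coincide, so solving the convex dual yields a global optimum of the non-convex training problem.

```latex
% Primal: regularized training loss over network parameters \theta
p^{\star} = \min_{\theta}\; \ell\bigl(f_{\theta}(X),\, y\bigr) + \beta\, \mathcal{R}(\theta)

% Dual: maximize the Lagrange dual function g over dual variables \lambda
d^{\star} = \max_{\lambda}\; g(\lambda) \;\le\; p^{\star} \quad \text{(weak duality)}

% Zero duality gap: the bound is tight
d^{\star} = p^{\star}
```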