Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods. (arXiv:2101.03419v3 [cs.LG] UPDATED)
Aug. 10, 2022, 1:11 a.m. | Shiyu Duan, Jose C. Principe
stat.ML updates on arXiv.org
This tutorial paper surveys provably optimal alternatives to end-to-end
backpropagation (E2EBP) -- the de facto standard for training deep
architectures. Modular training refers to strictly local training with
neither an end-to-end forward pass nor an end-to-end backward pass: the
deep architecture is divided into several nonoverlapping modules that are
trained separately, with no end-to-end operation. Between the fully global
E2EBP and strictly local modular training lie weakly modular hybrids, which
dispense with only the end-to-end backward pass. These alternatives can match or …