[D] Why transformers are not trained layer-wise?
April 25, 2024, 2:16 p.m. | /u/kiockete
Machine Learning www.reddit.com
ProjectionAndCost(X + L1(X) + L2(X + L1(X)) + L3(X + L1(X) + L2(X + L1(X))) ...)
Since the input to ProjectionAndCost is just the sum of the outputs of all layers plus the initial embeddings, the gradient that reaches layer L1 is the same as the gradient that reaches L2 or L3.
So …
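The post's equal-gradient observation can be checked with a tiny scalar sketch. Everything here is illustrative and assumed, not from the post: the layer outputs are fixed numbers, and the squared value stands in for ProjectionAndCost. Because the head sees only the sum, the derivative of the cost with respect to each layer's output along this direct additive path is identical.

```python
# Minimal sketch (assumption: scalar values, a squared-value head standing in
# for ProjectionAndCost) of the claim that every layer output on a purely
# additive residual stream receives the same gradient from the head.

def cost(s):
    # hypothetical projection-and-cost head: just squares its input
    return s * s

def dcost(s):
    # analytic derivative of cost with respect to its input
    return 2.0 * s

# Pretend layer outputs already sitting on the residual stream.
x, l1, l2, l3 = 0.5, 0.2, -0.1, 0.3
s = x + l1 + l2 + l3  # the sum fed to the head

# d cost / d li = dcost(s) * (d s / d li) = dcost(s) * 1 for every layer,
# so along the direct additive path all layers see the identical gradient.
grads = {name: dcost(s) for name in ("l1", "l2", "l3")}
print(grads)  # every entry is the same value
```

Note this only covers the direct path from each layer's output into the sum; in a real transformer a layer also influences the loss indirectly, because later layers read the stream it wrote to.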