Random Weight Factorization Improves the Training of Continuous Neural Representations. (arXiv:2210.01274v2 [cs.LG] UPDATED)
Oct. 6, 2022, 1:13 a.m. | Sifan Wang, Hanwen Wang, Jacob H. Seidman, Paris Perdikaris
cs.LG updates on arXiv.org (arxiv.org)
Continuous neural representations have recently emerged as a powerful and flexible alternative to classical discretized representations of signals. However, training them to capture fine details in multi-scale signals is difficult and computationally expensive. Here we propose random weight factorization as a simple drop-in replacement for parameterizing and initializing conventional linear layers in coordinate-based multi-layer perceptrons (MLPs) that significantly accelerates and improves their training. We show how this factorization alters the underlying loss landscape and effectively enables each neuron in the …
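The abstract truncates before the method details, so the following is only a minimal sketch of what such a drop-in factorized linear layer could look like, assuming each weight matrix is split into a trainable per-neuron scale and a direction matrix, W = diag(exp(s)) · V. The base initializer, the scale distribution, and the hyperparameters mu and sigma below are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    """Drop-in replacement for nn.Linear using a weight factorization
    W = diag(exp(s)) @ V, with a randomly initialized per-neuron scale s.

    Sketch only: the paper's exact scheme may differ in the base init,
    the scale distribution, and whether s lives in log space.
    """

    def __init__(self, in_features, out_features, mu=1.0, sigma=0.1):
        super().__init__()
        # Start from a conventional init, then split it into scale * direction
        # so the layer computes the same function at initialization.
        w = torch.empty(out_features, in_features)
        nn.init.xavier_uniform_(w)                        # assumed base init
        s = mu + sigma * torch.randn(out_features)        # random log-scales
        self.s = nn.Parameter(s)                          # trainable per-neuron scale
        self.v = nn.Parameter(w / torch.exp(s)[:, None])  # trainable direction part
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Reassemble the effective weight matrix on the fly.
        w = torch.exp(self.s)[:, None] * self.v
        return x @ w.t() + self.bias
```

Swapping such a layer for nn.Linear in a coordinate-based MLP leaves the forward pass unchanged at initialization (exp(s) · V reproduces the base weights), while training s lets each neuron rescale its own weights, which is one plausible reading of how the factorization alters the loss landscape as the abstract describes.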
Jobs in AI, ML, Big Data
Data Architect
@ University of Texas at Austin | Austin, TX
Data ETL Engineer
@ University of Texas at Austin | Austin, TX
Lead GNSS Data Scientist
@ Lurra Systems | Melbourne
Senior Machine Learning Engineer (MLOps)
@ Promaton | Remote, Europe
Data Science Specialist
@ Telstra | Telstra ICC Bengaluru
Senior Staff Engineer, Machine Learning
@ Nagarro | Remote, India