Feb. 6, 2024, 5:53 a.m. | Kexuan Shi, Xingyu Zhou, Shuhang Gu

cs.CV updates on arXiv.org

As a powerful representation paradigm, Implicit Neural Representation (INR) has recently achieved success in various computer vision tasks. Due to the low-frequency bias of the vanilla multi-layer perceptron (MLP), existing methods have investigated advanced techniques, such as positional encoding and periodic activation functions, to improve the accuracy of INR. In this paper, we connect the network training bias with the reparameterization technique and theoretically prove that weight reparameterization offers a way to alleviate the spectral bias of MLP. …
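For context, the sketch below illustrates the two established remedies for spectral bias that the abstract mentions, not the paper's own reparameterization method: Fourier-feature positional encoding at the input of a coordinate MLP, plus a periodic (sine) activation as an alternative drop-in. It assumes PyTorch; names such as `FourierFeatures`, `FourierINR`, and `num_frequencies` are illustrative.

```python
# Minimal sketch (assumed PyTorch; not the authors' method): standard ways to
# counter the low-frequency (spectral) bias of a vanilla MLP used as an INR.
import torch
import torch.nn as nn


class FourierFeatures(nn.Module):
    """Positional encoding: map x to [sin(2^k * pi * x), cos(2^k * pi * x)] over k frequencies."""
    def __init__(self, in_dim: int, num_frequencies: int = 8):
        super().__init__()
        freqs = 2.0 ** torch.arange(num_frequencies, dtype=torch.float32) * torch.pi
        self.register_buffer("freqs", freqs)
        self.out_dim = 2 * num_frequencies * in_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> (batch, 2 * num_frequencies * in_dim)
        proj = x.unsqueeze(-1) * self.freqs      # (batch, in_dim, K)
        proj = proj.flatten(start_dim=-2)        # (batch, in_dim * K)
        return torch.cat([proj.sin(), proj.cos()], dim=-1)


class Sine(nn.Module):
    """Periodic activation (SIREN-style); could replace nn.ReLU() in the MLP below."""
    def __init__(self, w0: float = 30.0):
        super().__init__()
        self.w0 = w0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w0 * x)


class FourierINR(nn.Module):
    """Coordinate MLP, e.g. (x, y) -> RGB, with positional encoding at the input."""
    def __init__(self, in_dim: int = 2, hidden: int = 256, out_dim: int = 3):
        super().__init__()
        self.encoding = FourierFeatures(in_dim)
        self.net = nn.Sequential(
            nn.Linear(self.encoding.out_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(self.encoding(coords))


if __name__ == "__main__":
    coords = torch.rand(1024, 2)       # random 2-D coordinates in [0, 1]
    rgb = FourierINR()(coords)         # predicted values, shape (1024, 3)
    print(rgb.shape)
```

The encoding lifts low-dimensional coordinates into a high-frequency feature space so the MLP can fit fine detail; the paper's contribution, by contrast, is to show that weight reparameterization alone can serve a similar purpose.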

