March 18, 2024, 4:42 a.m. | Sajad Faramarzi, Farzan Haddadi, Sajjad Amini, Masoud Ahookhosh

cs.LG updates on arXiv.org

arXiv:2403.10232v1 Announce Type: cross
Abstract: Conventional matrix completion methods estimate missing values by assuming the matrix is low-rank, which yields a linear approximation of the missing entries. It has been shown that enhanced performance can be attained by using nonlinear estimators such as deep neural networks. Deep fully connected neural networks (FCNNs), one of the most suitable architectures for matrix completion, suffer from overfitting due to their high capacity, which leads to low generalizability. In this paper, we …
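The low-rank assumption the abstract contrasts against can be made concrete with a minimal sketch: factor the matrix as U @ V and fit only the observed entries by gradient descent, so every recovered entry is a linear combination of learned factors. The matrix sizes, rank, observation ratio, learning rate, and step count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank ground truth (sizes and rank are illustrative assumptions).
m, n, r = 60, 40, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Observe a random 40% of the entries.
mask = rng.random((m, n)) < 0.4

# Factorize M ~ U @ V and fit only the observed entries by gradient descent
# on the squared error; the unobserved entries never enter the loss.
U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((r, n))
lr = 0.01
for step in range(5000):
    R = (U @ V - M) * mask  # residual restricted to observed entries
    U -= lr * (R @ V.T)     # gradient step on U
    V -= lr * (U.T @ R)     # gradient step on V

# Evaluate the completion on the entries that were never observed.
M_hat = U @ V
rmse = np.sqrt(((M_hat - M)[~mask] ** 2).mean())
print(f"RMSE on unobserved entries: {rmse:.4f}")
```

Replacing the bilinear map U @ V with an FCNN gives the kind of nonlinear estimator the abstract refers to, at the cost of the overfitting the paper sets out to address through regularization.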
