Feb. 23, 2024, 5:41 a.m. | Rares-Darius Buhai, Jingqiu Ding, Stefan Tiegel

cs.LG updates on arXiv.org

arXiv:2402.14103v1 Announce Type: new
Abstract: We study computational-statistical gaps for improper learning in sparse linear regression. More specifically, given $n$ samples from a $k$-sparse linear model in dimension $d$, we ask what is the minimum sample complexity to efficiently (in time polynomial in $d$, $k$, and $n$) find a potentially dense estimate for the regression vector that achieves non-trivial prediction error on the $n$ samples. Information-theoretically this can be achieved using $\Theta(k \log (d/k))$ samples. Yet, despite its prominence in …
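To make the problem setup concrete, here is a minimal sketch (not the paper's algorithm) of the $k$-sparse linear model and an improper, dense estimate: we draw $n$ samples $y = X\beta + \text{noise}$ where $\beta \in \mathbb{R}^d$ has only $k$ nonzero coordinates, then fit an unconstrained least-squares estimate, which is dense in general. All parameter values below are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 50, 3, 200  # illustrative dimensions; the paper's regime is k << d

# k-sparse regression vector: only k of the d coordinates are nonzero
beta = np.zeros(d)
beta[rng.choice(d, size=k, replace=False)] = rng.normal(size=k)

# n samples from the linear model y = X @ beta + noise
X = rng.normal(size=(n, d))
y = X @ beta + 0.1 * rng.normal(size=n)

# an improper (potentially dense) estimate: ordinary least squares,
# with no sparsity constraint on the output
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# in-sample prediction error of the dense estimate
mse = float(np.mean((X @ beta_hat - y) ** 2))
print(f"nonzeros in beta: {np.count_nonzero(beta)}, in-sample MSE: {mse:.4f}")
```

With $n > d$ samples, ordinary least squares already achieves small prediction error; the computational-statistical gap studied in the paper concerns the much smaller sample sizes, down to the information-theoretic $\Theta(k \log(d/k))$, at which polynomial-time improper learners can or cannot succeed.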

