Nov. 15, 2022, 2:14 a.m. | Nathan Wycoff, Ali Arab, Katharine M. Donato, Lisa O. Singh

stat.ML updates on arXiv.org

Modern statistical learning algorithms offer remarkable flexibility but
struggle with interpretability. One possible solution is sparsity: performing
inference so that many of the parameters are estimated as identically 0, which
can be imposed through nonsmooth penalties such as the $\ell_1$ penalty.
However, the $\ell_1$ penalty introduces significant bias when high sparsity is
desired. In this article, we retain the $\ell_1$ penalty, but define learnable
penalty weights $\lambda_p$ endowed with hyperpriors. We start the article …
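
As a rough illustration of the idea in the abstract, the sketch below writes down a MAP-style objective for a linear model with a per-coefficient $\ell_1$ penalty whose weights $\lambda_p$ carry their own prior. The Gaussian likelihood and the Gamma hyperprior on each $\lambda_p$ are assumptions made here for concreteness; the abstract does not specify the authors' exact hierarchy.

```python
import numpy as np

def map_objective(beta, lam, X, y, a=1.0, b=1.0):
    """Negative log-posterior (up to constants) for a linear model with
    per-coefficient L1 penalty weights lam[p], each given a Gamma(a, b)
    hyperprior. The likelihood and hyperprior are illustrative assumptions."""
    resid = y - X @ beta
    nll = 0.5 * np.sum(resid ** 2)              # Gaussian likelihood term
    l1 = np.sum(lam * np.abs(beta))             # weighted L1: sum_p lam_p * |beta_p|
    laplace_norm = -np.sum(np.log(lam / 2.0))   # Laplace(0, 1/lam_p) normalizer
    hyper = np.sum(b * lam - (a - 1.0) * np.log(lam))  # Gamma(a, b) hyperprior on lam_p
    return nll + l1 + laplace_norm + hyper
```

Because each $\lambda_p$ enters the objective rather than being fixed in advance, coefficients that the data support strongly can escape heavy shrinkage, which is one way to mitigate the bias a single global $\ell_1$ weight induces at high sparsity.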

arxiv bayesian lasso
