April 1, 2024, 4:42 a.m. | Augustina C. Amakor, Konstantin Sonntag, Sebastian Peitz

cs.LG updates on arXiv.org

arXiv:2308.12044v5 Announce Type: replace
Abstract: Sparsity is a highly desired feature in deep neural networks (DNNs), since it ensures numerical efficiency, improves the interpretability of models (due to the smaller number of relevant features), and improves robustness. For linear models, it is well known that there exists a \emph{regularization path} connecting the sparsest solution in terms of the $\ell^1$ norm (i.e., all-zero weights) and the non-regularized solution. Very recently, there was a first attempt to extend the concept of regularization paths …
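For readers unfamiliar with the classical concept the abstract builds on, here is a minimal sketch of an $\ell^1$ regularization path for a linear model, using scikit-learn's `lasso_path` on synthetic data. This illustrates only the well-known linear-model path the abstract refers to, not the paper's extension to DNNs; the data and parameters are invented for the example.

```python
# Sketch: the l1 regularization path for a linear model (lasso),
# connecting the all-zero (sparsest) solution at large penalty
# to the non-regularized solution as the penalty vanishes.
# Synthetic data; illustrative only, not the paper's method.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.5, 0.5]               # only 3 relevant features
y = X @ true_w + 0.1 * rng.standard_normal(100)

# alphas is a decreasing sequence of l1 penalties;
# coefs[j, k] is the weight of feature j at penalty alphas[k].
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)

# At the largest penalty all weights are zero; as alpha -> 0 the
# path approaches the unregularized least-squares fit.
for a, w in zip(alphas[::10], coefs.T[::10]):
    print(f"alpha={a:.4f}  nonzero weights={np.sum(np.abs(w) > 1e-8)}")
```

Tracing the number of nonzero weights along the path makes the trade-off the abstract mentions concrete: sparsity (and hence interpretability) is highest at the zero-weight end of the path.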
