An $l_1$-oracle inequality for the Lasso in mixture-of-experts regression models. (arXiv:2009.10622v3 [math.ST] UPDATED)
Web: http://arxiv.org/abs/2009.10622
May 13, 2022, 1:10 a.m. | TrungTin Nguyen, Hien D Nguyen, Faicel Chamroukhi, Geoffrey J McLachlan
stat.ML updates on arXiv.org
Mixture-of-experts (MoE) models are a popular framework for modeling heterogeneity in data, for both regression and classification problems in statistics and machine learning, due to their flexibility and the abundance of available statistical estimation and model choice tools. Such flexibility comes from allowing the mixture weights (or gating functions) in the MoE model to depend on the explanatory variables, along with the experts (or component densities). This permits the modeling of data arising from more complex data generating processes when …
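
To make the covariate-dependent gating concrete, the display below is a minimal sketch of a softmax-gated Gaussian mixture-of-experts regression density together with a Lasso-type ($l_1$-penalized) likelihood estimator, in the spirit of the paper's title. The Gaussian experts, the softmax gating form, and the symbols $K$, $\gamma_k$, $\beta_k$, $\sigma_k$, and $\lambda$ are illustrative assumptions, not details quoted from the truncated abstract.

% Sketch: softmax-gated Gaussian MoE regression density (illustrative notation).
% K experts; gating parameters gamma_k; expert parameters (beta_k, sigma_k).
\[
  s_\psi(y \mid x)
  = \sum_{k=1}^{K} g_k(x;\gamma)\,
    \phi\!\left(y;\ \beta_{k,0} + x^\top \beta_k,\ \sigma_k^2\right),
  \qquad
  g_k(x;\gamma)
  = \frac{\exp\!\left(\gamma_{k,0} + x^\top \gamma_k\right)}
         {\sum_{l=1}^{K} \exp\!\left(\gamma_{l,0} + x^\top \gamma_l\right)}.
\]
% Lasso-type estimator: penalized maximum likelihood with an l_1 penalty
% (regularization parameter lambda) on the gating and expert coefficients.
\[
  \widehat{\psi}
  \in \arg\min_{\psi}\;
  -\frac{1}{n}\sum_{i=1}^{n} \log s_\psi\!\left(y_i \mid x_i\right)
  \;+\; \lambda \sum_{k=1}^{K}\left(\lVert \gamma_k \rVert_1 + \lVert \beta_k \rVert_1\right).
\]

An $l_1$-oracle inequality for such an estimator would bound its prediction risk in terms of the best sparse approximation within the MoE family plus a penalty-dependent remainder; the precise statement is in the linked paper.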