Web: http://arxiv.org/abs/2109.03099

Jan. 31, 2022 | Vân Anh Huynh-Thu, Pierre Geurts

cs.LG updates on arXiv.org

This paper presents a model-agnostic ensemble approach for supervised
learning. The proposed approach is based on a parametric version of Random
Subspace, in which each base model is learned from a feature subset sampled
according to a Bernoulli distribution. Parameter optimization is performed
using gradient descent and is rendered tractable by using an importance
sampling approach that circumvents frequent re-training of the base models
after each gradient descent step. While the degree of randomization is
controlled by a hyper-parameter in …
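The core sampling scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it draws a feature subset for each base model from per-feature Bernoulli inclusion probabilities `p`, fits a simple base learner (here an ordinary least-squares stand-in) on the selected features, and averages predictions. The paper's gradient-based optimization of `p` via importance sampling is omitted; `p` is fixed here.

```python
import numpy as np

def fit_base(X, y):
    # Ordinary least squares on the selected features
    # (a stand-in for an arbitrary base learner).
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def fit_ensemble(X, y, p, n_models, rng):
    """Parametric Random Subspace sketch: feature j is included
    in each base model's subset with probability p[j]."""
    ensemble = []
    for _ in range(n_models):
        mask = rng.random(X.shape[1]) < p
        if not mask.any():                       # keep at least one feature
            mask[rng.integers(X.shape[1])] = True
        ensemble.append((mask, fit_base(X[:, mask], y)))
    return ensemble

def predict(ensemble, X):
    # Ensemble prediction = average over base-model predictions.
    return np.mean([X[:, mask] @ w for mask, w in ensemble], axis=0)

# Toy regression problem (synthetic data, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=200)
p = np.full(10, 0.8)                             # Bernoulli inclusion probabilities
models = fit_ensemble(X, y, p, n_models=50, rng=rng)
mse = np.mean((predict(models, X) - y) ** 2)
```

Lower values of `p` increase the randomization of the base models (fewer shared features), which is the degree-of-randomization hyper-parameter the abstract refers to; in the paper this is tuned by gradient descent rather than set by hand.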

