Web: http://arxiv.org/abs/2206.08899

June 20, 2022, 1:11 a.m. | Guy Blanc, Jane Lange, Ali Malik, Li-Yang Tan

cs.LG updates on arXiv.org

Using the framework of boosting, we prove that all impurity-based decision
tree learning algorithms, including the classic ID3, C4.5, and CART, are highly
noise tolerant. Our guarantees hold under the strongest noise model of nasty
noise, and we provide near-matching upper and lower bounds on the allowable
noise rate. We further show that these algorithms, which are simple and have
long been central to everyday machine learning, enjoy provable guarantees in
the noisy setting that are unmatched by existing algorithms …
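To make the abstract's notion of an "impurity-based" learner concrete, the following is a minimal illustrative sketch (not taken from the paper) of how an algorithm in the CART family scores candidate splits: it greedily picks the feature whose split minimizes the weighted Gini impurity of the resulting partition. The helper names `gini` and `best_split` are hypothetical, introduced only for this example.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels: 2*p*(1-p)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(points, labels):
    """Pick the binary feature whose split minimizes weighted Gini impurity.

    `points` is a list of binary feature vectors; splitting on feature i
    partitions the examples by their i-th coordinate.
    """
    n, d = len(points), len(points[0])
    best_feat, best_score = None, float("inf")
    for i in range(d):
        left = [y for x, y in zip(points, labels) if x[i] == 0]
        right = [y for x, y in zip(points, labels) if x[i] == 1]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best_score:
            best_feat, best_score = i, score
    return best_feat, best_score

# Feature 0 perfectly predicts the label, so it drives impurity to zero.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 1, 1]
feat, score = best_split(X, y)
# feat == 0, score == 0.0
```

ID3 and C4.5 follow the same greedy template with entropy-based impurity in place of Gini; the paper's result is that this whole family tolerates even nasty noise.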

