Web: http://arxiv.org/abs/2206.07904

June 17, 2022, 1:10 a.m. | Siwen Yan, Sriraam Natarajan, Saket Joshi, Roni Khardon, Prasad Tadepalli

cs.LG updates on arXiv.org (arxiv.org)

Ensemble models (bagging and gradient-boosting) of relational decision trees
have proved to be among the most effective learning methods in the area of
probabilistic logic models (PLMs). While effective, they lose one of the most
important aspects of PLMs -- interpretability. In this paper we consider the
problem of compressing a large set of learned trees into a single explainable
model. To this end, we propose CoTE -- Compression of Tree Ensembles -- which
produces a single small decision …
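The excerpt only states the goal, so here is a minimal illustrative sketch of the general idea of compressing an ensemble into one interpretable model, not the authors' CoTE algorithm: CoTE operates on relational decision trees and produces a decision list, whereas this toy uses propositional data and scikit-learn estimators (all of which are assumptions of the sketch). The common ingredient it illustrates is re-using the original training set, relabeled by the ensemble, to fit a single small model that mimics it.

```python
# Illustrative sketch (not the authors' CoTE method): distill a gradient-boosted
# ensemble into a single small decision tree by refitting on the ensemble's own
# predictions over the training set.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy propositional data; the paper targets relational data and decision lists.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# 1. Learn the (hard-to-interpret) ensemble.
ensemble = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Re-label the training set with the ensemble's predictions.
y_ensemble = ensemble.predict(X)

# 3. Fit one small, readable tree to mimic the ensemble on those labels.
compressed = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_ensemble)

# Fidelity: how often the single tree agrees with the ensemble it replaces.
fidelity = (compressed.predict(X) == y_ensemble).mean()
print(f"fidelity to ensemble on training data: {fidelity:.3f}")
print(export_text(compressed))
```

The design choice being illustrated is the trade-off the abstract names: the compressed model sacrifices some accuracy of the ensemble in exchange for a single structure a human can read end to end.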

Tags: arxiv, compression, lg, models, tree
