Jan. 20, 2022, 2:10 a.m. | Abhishek Ghose, Balaraman Ravindran

cs.LG updates on arXiv.org

As Machine Learning (ML) becomes pervasive in various real-world systems, the
need for models to be understandable has increased. We focus on
interpretability, noting that models often need to be constrained in size to be
considered interpretable; e.g., a decision tree of depth 5 is easier to
interpret than one of depth 50. But smaller models also tend to have high
bias. This suggests a trade-off between interpretability and accuracy. We
propose a model-agnostic technique to …
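
To make the depth-5 vs. depth-50 contrast concrete, here is a minimal sketch (my own illustration, not code from the paper) of how one might measure the size-accuracy trade-off the abstract describes. It assumes scikit-learn; the synthetic dataset, the two depths, and the random seeds are arbitrary choices for demonstration only, and depending on the data the deeper tree may simply overfit rather than dominate.

```python
# Illustrative sketch: compare a shallow (more interpretable) decision tree
# with a deep one on held-out data. Not the paper's method; dataset and
# hyperparameters are placeholder choices.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic classification data, chosen only for illustration.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for depth in (5, 50):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    acc = accuracy_score(y_test, tree.predict(X_test))
    # Number of leaves is a rough proxy for model size / interpretability.
    print(f"max_depth={depth:>2}  leaves={tree.get_n_leaves():>5}  "
          f"test accuracy={acc:.3f}")
```

A deeper tree has many more leaves, which is exactly why it is harder to read as a set of rules, even when its accuracy is higher.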

