July 31, 2022, 1:53 p.m. | /u/111llI0__-__0Ill111

Data Science www.reddit.com

Been seeing posts asking “why not always use boosting/RFs?” etc., and the things that always come up are “interpretability” and “inference”.

However, one of the basic assumptions of statistical inference is that the model is correctly specified, i.e. that it captures the data-generating process. If the DGP is nonlinear (and it could be, if there is no theory supporting linearity), then inference from a linear model would be off.

This example shows a case where using a black box model …
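A minimal simulation sketch of the point above (hypothetical, not the OP's actual example): when the DGP is nonlinear, OLS "inference" on a misspecified linear model can say x has no effect even though x fully determines E[y | x], while a flexible fit (here a quadratic, standing in for a black-box learner) recovers the relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear DGP: y depends on x only through x**2.
x = rng.uniform(-2, 2, size=5000)
y = x**2 + rng.normal(scale=0.5, size=x.size)

# Misspecified linear model y ~ a + b*x, fit by OLS.
X = np.column_stack([np.ones_like(x), x])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
# Because x is symmetric around 0, Cov(x, x**2) = 0, so the fitted
# slope b is near zero: the linear model concludes "no effect".

# A flexible specification (quadratic term added) recovers the
# true curvature coefficient of 1.
X2 = np.column_stack([np.ones_like(x), x, x**2])
coef2, *_ = np.linalg.lstsq(X2, y, rcond=None)
```

The choice of a symmetric design is deliberate: it is the worst case for the linear approximation, since the best linear predictor of y given x is flat even though the true effect of x is strong.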

datascience interpretability linear paradox
