Feb. 5, 2022, 7:26 p.m. | /u/Kualityy

Machine Learning www.reddit.com

I am working on an imbalanced classification problem (~0.2% positive class) using LightGBM, and I am noticing a significant increase in performance when I use `scale_pos_weight` < 1 vs `scale_pos_weight` > 1. I'm talking about a cross-validation PR-AUC of 0.8 vs 0.2 kind of improvement.

I am new to imbalanced classification problems, and this result is not intuitive to me at all. I suspect it has something to do with overfitting.


