Oct. 26, 2022, 3:46 a.m. | /u/William_Rosebud

Data Science www.reddit.com

Hi guys,

I just wanted to ask a question to those who have more experience here.

When I was taught hyperparameter tuning using GridSearchCV (and other SearchCV tools), the focus was always on maximising the score(s) chosen for the model at hand. However, when I get test scores of 0.95 or higher, I start getting nervous. Am I overfitting the data?

Since GridSearchCV utilises cross-validation, and all the training is done with data *separate* from the one used …
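One common way to sanity-check the worry in the question is to hold out a test set that the grid search never touches, then compare the search's mean cross-validation score against the score on that held-out set. A minimal sketch, assuming a scikit-learn classifier and a built-in dataset (the estimator, grid, and dataset here are illustrative, not from the post):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set the search never sees.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

param_grid = {"C": [0.01, 0.1, 1, 10]}  # hypothetical grid for illustration
search = GridSearchCV(
    LogisticRegression(max_iter=5000),
    param_grid,
    cv=5,
    scoring="accuracy",
)
search.fit(X_train, y_train)

cv_score = search.best_score_              # mean CV accuracy across folds
test_score = search.score(X_test, y_test)  # accuracy on truly unseen data
print(f"CV score: {cv_score:.3f}  Test score: {test_score:.3f}")
```

A high score on its own is not proof of overfitting; what matters is the gap. If the CV score is much higher than the held-out test score, the search has likely overfit to the folds; if the two are close, a 0.95 may simply reflect an easy task.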

datascience gridsearchcv hyperparameter sklearn
