Sept. 15, 2022, 12:35 p.m. | /u/RaptorDotCpp

When we take a pre-trained network, e.g., ResNet50 on ImageNet, and want to apply it to a new dataset, what we typically do is (see the sketch after the list):

1. Freeze the backbone, but keep the classifier trainable
2. Train until convergence
3. Unfreeze the backbone and train with a low learning rate until convergence
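For concreteness, here is a minimal PyTorch sketch of that recipe. The 10-class head, the Adam optimizer, and the learning rates are illustrative placeholders, not something from the post:

```python
import torch
import torchvision

# 1. Freeze the backbone, keep the (new) classifier trainable.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # hypothetical 10-class task
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc.")

# 2. Train the classifier head until convergence.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# ... standard training loop over the new dataset ...

# 3. Unfreeze the backbone and fine-tune everything with a low learning rate.
for param in model.parameters():
    param.requires_grad = True
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
# ... continue training until convergence ...
```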

However, I noticed that when we freeze a network that contains batch normalization layers, the following statistics are still being updated, because the batch normalization layers remain in training mode: `running_mean`, …
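The reason is that `requires_grad = False` only freezes the learnable weights; the running statistics (`running_mean`, `running_var`) are buffers that a BatchNorm layer updates on every forward pass while it is in training mode, gradients or not. Below is a minimal sketch of the behavior and the usual workaround, switching just the BN modules to eval mode (the ResNet50 and the random input are only for demonstration):

```python
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V2")

# "Freezing" usually means disabling gradients on the parameters...
for param in model.parameters():
    param.requires_grad = False

# ...but in training mode, BatchNorm still updates its buffers
# (running_mean, running_var) on every forward pass:
model.train()
before = model.bn1.running_mean.clone()
model(torch.randn(8, 3, 224, 224))
print(torch.equal(before, model.bn1.running_mean))  # False: stats moved

# Workaround: put the BN modules (and only those) into eval mode, so they
# normalize with the stored statistics and stop updating them.
for module in model.modules():
    if isinstance(module, (torch.nn.BatchNorm1d,
                           torch.nn.BatchNorm2d,
                           torch.nn.BatchNorm3d)):
        module.eval()

before = model.bn1.running_mean.clone()
model(torch.randn(8, 3, 224, 224))
print(torch.equal(before, model.bn1.running_mean))  # True: stats frozen
```

Note that a later call to `model.train()` flips the BN modules back to training mode, so the loop has to be re-applied after every such call (commonly by overriding the model's `train()` method).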

