April 16, 2024, 8:17 a.m. | /u/Mad_Scientist2027

r/MachineLearning (www.reddit.com)

I had been training some Swin transformers using SimMIM for a paper and noticed that the linear probing accuracy on ImageNet-1k was horrendous. Granted, I was using the smallest Swin model, Swin-T, but the performance after the 25th epoch was barely 2.5% top-1 (ViT-T attains ~5% top-1 and ViT-B about 7% after a similar number of epochs).

I wanted to know if someone has done similar experiments with Swin transformers and whether using them for linear evaluation is a lost cause. …
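For context, by linear probing I mean roughly the following: freeze the SimMIM-pretrained backbone and train only a linear classifier on its pooled features. This is a minimal PyTorch/timm sketch, not my exact training code; the model name, output dimension, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
import timm

# Swin-T backbone with the classification head removed (num_classes=0
# makes timm return pooled features); in practice you would load your
# own SimMIM-pretrained weights into it.
backbone = timm.create_model(
    "swin_tiny_patch4_window7_224", pretrained=False, num_classes=0
)
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # linear probing: the backbone stays frozen

# Single linear layer trained on top of the frozen features.
probe = nn.Linear(backbone.num_features, 1000)
optimizer = torch.optim.SGD(probe.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # Extract features without gradients, then update only the probe.
    with torch.no_grad():
        feats = backbone(images)
    logits = probe(feats)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```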

