March 26, 2022, 5:15 a.m. | /u/No_Coffee_4638


Large pre-trained models like CLIP or ALIGN provide consistent accuracy across various data distributions when used for zero-shot inference (i.e., without fine-tuning on a specific dataset). While existing fine-tuning methods substantially improve accuracy on a given target distribution, they frequently compromise robustness to distribution shifts. This tension can be resolved with a simple and effective strategy for improving robustness while fine-tuning: ensembling the weights of the zero-shot and fine-tuned models (WiSE-FT).

An approach for fine-tuning AI models that enhances robustness under distribution shift …
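To make the idea concrete, below is a minimal sketch of weight-space ensembling in the spirit of WiSE-FT, assuming two PyTorch models with identical architectures. The function name `wise_ft` and the mixing coefficient `alpha` are illustrative, not the authors' actual API.

```python
import copy
import torch


def wise_ft(zero_shot_model: torch.nn.Module,
            fine_tuned_model: torch.nn.Module,
            alpha: float = 0.5) -> torch.nn.Module:
    """Linearly interpolate the weights of the zero-shot and fine-tuned models.

    alpha = 0.0 keeps the zero-shot weights, alpha = 1.0 the fine-tuned ones;
    intermediate values trade target-distribution accuracy against robustness.
    """
    zs_state = zero_shot_model.state_dict()
    ft_state = fine_tuned_model.state_dict()

    # Interpolate every parameter tensor, key by key.
    merged_state = {
        key: (1.0 - alpha) * zs_state[key] + alpha * ft_state[key]
        for key in zs_state
    }

    # Load the interpolated weights into a copy of the model.
    merged_model = copy.deepcopy(zero_shot_model)
    merged_model.load_state_dict(merged_state)
    return merged_model
```

Because the ensemble is formed in weight space rather than by averaging predictions, it adds no extra cost at inference time: the result is a single model of the original size.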

