Web: http://arxiv.org/abs/2209.07932

Sept. 19, 2022, 1:11 a.m. | Paolo Didier Alfano, Vito Paolo Pastore, Lorenzo Rosasco, Francesca Odone

cs.LG updates on arXiv.org

The impressive performance of deep learning architectures is associated with a
massive increase in model complexity. Millions of parameters need to be tuned,
with training and inference time scaling accordingly. But is massive
fine-tuning necessary? In this paper, focusing on image classification, we
consider a simple transfer learning approach that exploits pretrained
convolutional features as input for a fast kernel method. We refer to this
approach as top-tuning, since only the kernel classifier is trained. By
performing more than 2500 training processes we …
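The abstract describes the top-tuning pipeline at a high level: freeze a pretrained convolutional network, use it purely as a feature extractor, and train only a kernel classifier on top. Below is a minimal sketch of that idea. The choice of ResNet-18 as the backbone, CIFAR-10 as the dataset, and an RBF-kernel SVM as the classifier are illustrative assumptions on my part; the paper's actual backbones and its fast kernel method may differ.

```python
# Top-tuning sketch: frozen pretrained CNN features + a kernel classifier.
# Backbone, dataset, and classifier choices here are assumptions for
# illustration, not the paper's exact experimental setup.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10
from sklearn.svm import SVC

# Frozen pretrained backbone: replace the final classification layer
# with an identity so the network outputs penultimate-layer features.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

transform = T.Compose([
    T.Resize(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(dataset, n_samples=1000):
    """Push images through the frozen backbone; no gradients needed."""
    loader = DataLoader(dataset, batch_size=64, shuffle=False)
    feats, labels, seen = [], [], 0
    with torch.no_grad():
        for x, y in loader:
            feats.append(backbone(x).numpy())
            labels.append(y.numpy())
            seen += len(y)
            if seen >= n_samples:
                break
    return np.concatenate(feats)[:n_samples], np.concatenate(labels)[:n_samples]

train_set = CIFAR10(root="data", train=True, download=True, transform=transform)
test_set = CIFAR10(root="data", train=False, download=True, transform=transform)

X_train, y_train = extract_features(train_set)
X_test, y_test = extract_features(test_set)

# Only this kernel classifier is trained; the backbone stays fixed,
# which is what makes top-tuning cheap compared with full fine-tuning.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Because no gradients flow through the backbone, the expensive part (feature extraction) runs once per image, and training cost is dominated by the comparatively small kernel fit.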

