Web: http://arxiv.org/abs/2209.07522

Sept. 16, 2022, 1:12 a.m. | Yossi Gandelsman, Yu Sun, Xinlei Chen, Alexei A. Efros

cs.LG updates on arXiv.org

Test-time training adapts to a new test distribution on the fly by optimizing
a model for each test input using self-supervision. In this paper, we use
masked autoencoders for this one-sample learning problem. Empirically, our
simple method improves generalization on many visual benchmarks for
distribution shifts. Theoretically, we characterize this improvement in terms
of the bias-variance trade-off.
