Web: http://arxiv.org/abs/2204.03276

Sept. 16, 2022, 1:12 a.m. | Nikita Balagansky, Daniil Gavrilov

cs.LG updates on arXiv.org

Currently, pre-trained models can be considered the default choice for a wide
range of NLP tasks. Despite their SoTA results, there is practical evidence
that these models may require a different number of computing layers for
different input sequences, since evaluating all layers leads to overconfidence
in wrong predictions (a phenomenon known as overthinking). This problem can
potentially be solved by implementing adaptive computation time approaches,
which were originally designed to improve inference speed. The recently
proposed PonderNet may be a promising solution for …
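The truncated abstract refers to PonderNet's adaptive-depth mechanism: each layer i emits a halting probability λ_i, so the exit layer becomes a random variable with exit probability p_i = λ_i ∏_{j<i}(1 − λ_j). Below is a minimal, hedged sketch of early-exit inference under that formulation; the names (PonderStack, halting_head, infer) and the deterministic threshold rule are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of PonderNet-style early exit over a stack of
# transformer layers. All names here are illustrative assumptions,
# not code from the PALBERT paper.
import torch
import torch.nn as nn


class PonderStack(nn.Module):
    def __init__(self, layers: nn.ModuleList, hidden_size: int, n_classes: int):
        super().__init__()
        self.layers = layers
        # Halting head: lambda_i = p(exit at layer i | reached layer i)
        self.halting_head = nn.Linear(hidden_size, 1)
        self.classifier = nn.Linear(hidden_size, n_classes)

    @torch.no_grad()
    def infer(self, h: torch.Tensor, threshold: float = 0.5):
        """Deterministic early exit for a single example.

        `h` is assumed to be (batch=1, seq_len, hidden_size); each layer
        maps that shape to itself. Exits at the first layer where the
        cumulative halting mass sum_i p_i, with
        p_i = lambda_i * prod_{j<i}(1 - lambda_j), exceeds `threshold`.
        """
        remainder, cumulative = 1.0, 0.0  # prob. mass not yet halted / halted
        for i, layer in enumerate(self.layers):
            h = layer(h)
            # Halting probability from the first-token ([CLS]) hidden state;
            # .item() assumes batch size 1 for this sketch.
            lam = torch.sigmoid(self.halting_head(h[:, 0])).item()
            cumulative += remainder * lam
            remainder *= 1.0 - lam
            if cumulative > threshold or i == len(self.layers) - 1:
                return self.classifier(h[:, 0]), i  # logits, exit layer index
```

At training time, PonderNet instead optimizes the expected task loss weighted by the p_i, plus a KL term pulling the exit distribution toward a geometric prior; the sketch above covers only an inference-time exit rule.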

Tags: arxiv, teaching
