Web: http://arxiv.org/abs/2207.09638

Sept. 20, 2022, 1:14 a.m. | Yi Yang, Chen Zhang, Benyou Wang, Dawei Song

cs.CL updates on arXiv.org

Over-parameterized models, typically pretrained language models (LMs), have shown appealing expressive power thanks to their small learning bias. However, the huge learning capacity of LMs can also lead to large learning variance. In a pilot study, we find that, when faced with multiple domains, a critical portion of parameters behave unexpectedly in a domain-specific manner while the others behave in a domain-general one. Motivated by this phenomenon, we posit for the first time that domain-general parameters can underpin a domain-general …
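To make the pilot-study observation concrete, here is a minimal, hypothetical sketch of one way to probe it: fine-tune copies of the same pretrained LM on several domains, then score each parameter by how much its updates disagree across domains. This is not the paper's actual procedure; the function name, the use of raw state dicts, and the variance criterion are all illustrative assumptions.

```python
import torch


def domain_variance_scores(base_state, domain_states):
    """Score each parameter tensor by how much its fine-tuning
    updates disagree across domains.

    base_state: state_dict of the pretrained LM before fine-tuning.
    domain_states: list of state_dicts, one per domain-fine-tuned copy.

    Returns a dict mapping parameter name -> scalar score. Higher
    scores suggest domain-specific behavior; lower scores suggest
    domain-general behavior. (An illustrative criterion, not the
    paper's.)
    """
    scores = {}
    for name, base in base_state.items():
        # Update (delta) that each domain's fine-tuning applied to this tensor.
        deltas = torch.stack([sd[name] - base for sd in domain_states])
        # Variance of the updates across the domain axis, averaged over all
        # elements of the tensor, giving a single per-parameter score.
        scores[name] = deltas.float().var(dim=0).mean().item()
    return scores


# Toy usage with random tensors standing in for real fine-tuned checkpoints.
if __name__ == "__main__":
    base = {"layer.weight": torch.zeros(4, 4)}
    domains = [{"layer.weight": torch.randn(4, 4) * 0.01} for _ in range(3)]
    print(domain_variance_scores(base, domains))
```

Under this sketch, parameters with low cross-domain variance are the candidates for the "domain-general" portion the abstract describes.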

Tags: arxiv, general, language, language models, playing
