Jan. 4, 2022, 9:10 p.m. | Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina Babazhanova, Matthias Gallé, Zhenisbek Assylbekov

cs.CL updates on arXiv.org

There is an ongoing debate in the NLP community about whether modern language
models contain linguistic knowledge, recovered through so-called probes. In
this paper, we study whether linguistic knowledge is a necessary condition for
the good performance of modern language models, which we call the
"rediscovery hypothesis". First, we show that language models that are
significantly compressed but still perform well on their pretraining
objectives retain good scores when probed for linguistic structures. This
result supports the rediscovery hypothesis …
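The abstract is truncated before the method details, but as a rough illustration of what "probing for linguistic structures" means in practice, the sketch below trains a linear probe on frozen transformer representations for a part-of-speech task. Everything here is an assumption for illustration: the model (bert-base-uncased), the toy two-sentence dataset, and the choice of POS tagging are not taken from the paper, which would use real treebank data and its own probe suite.

```python
# A minimal probing sketch, assuming frozen BERT features and a toy
# POS-tagging task; model, data, and task are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Toy labeled data: (sentence, per-word POS tags). A real probe would use
# a corpus such as Universal Dependencies.
sentences = [("the cat sat", ["DET", "NOUN", "VERB"]),
             ("a dog barked", ["DET", "NOUN", "VERB"])]

features, labels = [], []
for text, tags in sentences:
    enc = tokenizer(text.split(), is_split_into_words=True,
                    return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    seen = set()
    for pos, wid in enumerate(enc.word_ids()):
        if wid is None or wid in seen:
            continue  # skip special tokens and subword continuations
        seen.add(wid)
        features.append(hidden[pos].numpy())  # first-subword representation
        labels.append(tags[wid])

# The probe itself: a simple linear classifier over frozen representations.
# High probe accuracy is read as evidence that the representation encodes
# the linguistic property; the paper asks whether such knowledge is in fact
# necessary for good pretraining performance.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("probe train accuracy:", probe.score(features, labels))
```

In this style of experiment, the same probe would be run on compressed variants of the model; the paper reports that compressed models that keep good pretraining scores also keep good probe scores.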

arxiv hypothesis language language models linguistics
