Jan. 13, 2022, 2:10 a.m. | Aarne Talman, Marianna Apidianaki, Stergios Chatzikyriakidis, Jörg Tiedemann

cs.CL updates on arXiv.org

A central question in natural language understanding (NLU) research is
whether high performance demonstrates the models' strong reasoning
capabilities. We present an extensive series of controlled experiments where
pre-trained language models are exposed to data that have undergone specific
corruption transformations. The transformations involve removing instances of
specific word classes and often lead to nonsensical sentences. Our results
show that performance remains high for most GLUE tasks when the models are
fine-tuned or tested on corrupted data, suggesting that the …
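The corruption transformations described above (removing all instances of a given word class) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the POS tags come from a toy hand-written lexicon rather than the actual taggers and GLUE data used in the experiments.

```python
# Toy part-of-speech lexicon (an assumption for illustration only;
# the paper would use a real POS tagger over GLUE sentences).
TOY_POS = {
    "the": "DET", "a": "DET",
    "cat": "NOUN", "mat": "NOUN", "dog": "NOUN",
    "sat": "VERB", "chased": "VERB",
    "on": "ADP",
}

def corrupt(sentence: str, drop_class: str) -> str:
    """Return the sentence with every token of `drop_class` removed,
    mimicking the word-class-removal corruption described in the abstract."""
    kept = [tok for tok in sentence.split()
            if TOY_POS.get(tok.lower(), "OTHER") != drop_class]
    return " ".join(kept)

print(corrupt("The cat sat on the mat", "NOUN"))  # "The sat on the"
print(corrupt("The cat sat on the mat", "VERB"))  # "The cat on the mat"
```

Removing nouns or verbs in this way typically yields ungrammatical or nonsensical strings, which is what makes the reported robustness of GLUE performance under such corruption notable.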

