Jan. 20, 2022, 2:10 a.m. | Reto Gubelmann, Siegfried Handschuh

cs.LG updates on arXiv.org

In this article, we explore the shallow heuristics used by transformer-based
pre-trained language models (PLMs) that are fine-tuned for natural language
inference (NLI). To do so, we construct our own dataset based on syllogisms,
and we evaluate a number of models' performance on it. We find evidence that
the models rely heavily on certain shallow heuristics, picking up on
symmetries and asymmetries between premise and hypothesis. We suggest that
the lack of generalization observable in our study, which is …
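The probing setup the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it feeds an MNLI-fine-tuned model a syllogistic premise-hypothesis pair and the same pair reversed, assuming the publicly available roberta-large-mnli checkpoint; the example sentences are hypothetical.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "roberta-large-mnli"  # assumed public MNLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def predict(premise: str, hypothesis: str) -> str:
    # MNLI-style models consume premise and hypothesis as a sentence pair.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[logits.argmax(dim=-1).item()]

# Hypothetical syllogism in Barbara form.
premise = "All philosophers are mortal. Socrates is a philosopher."
hypothesis = "Socrates is mortal."

print(predict(premise, hypothesis))  # a sound model should say ENTAILMENT
print(predict(hypothesis, premise))  # reversed pair probes symmetry reliance
```

A model that genuinely tracks entailment should label the original pair as entailment but not its reversal; near-identical outputs in both directions would be a symptom of the symmetry heuristic the abstract describes.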

arxiv, language models, natural language, transformer
