May 25, 2022, 1:12 a.m. | Dmitry Nikolaev, Sebastian Padó

cs.CL updates on arXiv.org arxiv.org

The capabilities and limitations of BERT and similar models are still unclear
when it comes to learning syntactic abstractions, in particular across
languages. In this paper, we use the task of subordinate-clause detection
within and across languages to probe these properties. We show that this task
is deceptively simple, with easy gains offset by a long tail of harder cases,
and that BERT's zero-shot performance is dominated by word-order effects,
mirroring the SVO/VSO/SOV typology.
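
To make the probing setup concrete, below is a minimal sketch of how a subordinate-clause probe over frozen BERT representations might look. The model name, the toy sentence, and the gold labels are all illustrative assumptions, not the paper's actual data or method; it only shows the general pattern of extracting per-word embeddings and fitting a linear probe on them.

```python
# Minimal probing sketch (assumptions: multilingual BERT checkpoint,
# toy sentence, hypothetical labels; not the paper's dataset or classifier).
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

# Toy example: label each word 1 if it lies inside a subordinate clause.
sentence = "She left because the meeting ran late".split()
labels = [0, 0, 1, 1, 1, 1, 1]  # hypothetical gold labels

# Encode, then align subword embeddings back to words
# via first-subword pooling.
enc = tokenizer(sentence, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]  # (num_subwords, 768)

features, seen = [], set()
for idx, word_id in enumerate(enc.word_ids()):
    if word_id is not None and word_id not in seen:
        seen.add(word_id)
        features.append(hidden[idx].numpy())

# Fit a linear probe on the frozen representations.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print(probe.predict(features))
```

Trained on one language and evaluated on another, a probe like this is also the standard way to measure the zero-shot cross-lingual transfer the abstract refers to.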

Tags: arxiv, bert, case study, detection
