April 9, 2024, 4:50 a.m. | Poulami Ghosh, Shikhar Vashishth, Raj Dabre, Pushpak Bhattacharyya

cs.CL updates on arXiv.org

arXiv:2404.04530v1 Announce Type: new
Abstract: How does the importance of positional encoding in pre-trained language models (PLMs) vary across languages with different morphological complexity? In this paper, we offer the first study addressing this question, encompassing 23 morphologically diverse languages and 5 different downstream tasks. We choose two categories of tasks: syntactic tasks (part-of-speech tagging, named entity recognition, dependency parsing) and semantic tasks (natural language inference, paraphrasing). We consider language-specific BERT models trained on monolingual corpora for our investigation. The …
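The truncated abstract does not describe the authors' exact method, but a common way to probe the importance of positional encoding in a BERT-style model is to ablate the learned position embeddings and compare downstream behavior. Below is a minimal, hedged sketch of such an ablation using Hugging Face transformers; the model name is a hypothetical stand-in for the paper's language-specific BERT models, and the zeroing approach is one illustrative ablation, not necessarily the one used in the paper.

```python
# Sketch: ablating BERT's positional encoding (illustrative, not the paper's code).
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical stand-in; the paper uses language-specific monolingual BERT models.
MODEL_NAME = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

# Ablate positional information by zeroing the learned position embedding table.
with torch.no_grad():
    model.embeddings.position_embeddings.weight.zero_()

inputs = tokenizer("Positional encoding matters.", return_tensors="pt")
outputs = model(**inputs)
# Token representations computed without any positional signal; comparing
# downstream task performance against the unablated model quantifies how
# much the task relies on positional encoding.
print(outputs.last_hidden_state.shape)
```

Contrasting syntactic tasks (e.g., dependency parsing, which is word-order sensitive) with semantic tasks under such an ablation is one way the kind of cross-lingual comparison described in the abstract could be operationalized.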
